AI for image analysis. Using OpenAI’s CLIP model, I generated embeddings for c. 500 of my own photographs, then calculated the cosine similarity between every pair of images to build sets of ‘similar’ images; these similarity scores are the basis for the network layout. Using UMAP, I reduced the embeddings to a smaller set of features and clustered them with HDBSCAN, producing seven (mostly) coherent groups of photos. Lastly, I used BLIP-2 to generate a caption for each image.
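A minimal sketch of the similarity step, with random unit vectors standing in for the actual CLIP embeddings (the real vectors would come from CLIP’s image encoder; the 512-dimensional size and the neighbour count `k` here are illustrative assumptions, not values from the project):

```python
import numpy as np

# Stand-in for CLIP embeddings: one 512-dim vector per photograph.
# In the real pipeline these come from CLIP's image encoder.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 512))

# Normalise each embedding to unit length so a plain dot product
# equals cosine similarity.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Cosine similarity for every pair of images (a 500 x 500 matrix);
# scores like these drive the network layout.
similarity = normed @ normed.T

# For each image, take the k most similar other images (skipping
# column 0 of the sort, which is the image itself) as its set of
# 'similar' images.
k = 5
neighbours = np.argsort(-similarity, axis=1)[:, 1 : k + 1]
```

The same matrix could then feed a force-directed layout, with edge weights proportional to similarity.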