a forest is one tree in front of another
A series of cameraphone images processed with multiple passes of machine learning depth-of-field lens algorithms to confuse distance
C-type print on Dibond, 50 cm × 50 cm
What is photorealistic now?
From 2020 onwards, beta versions of lens-blurring neural filters began appearing in photo manipulation programs. I wanted to test the edges of the model on which they operate, and see what incongruities might arise from feeding these filters images that most people wouldn’t choose to take, ones that set up awkward or playful relationships between figure and ground. I also wanted this sort of adversarial testing to be grounded in the physical act of wandering about and lining things up out there in the world for the model to work with.
I began making a series of photographs of felled or fallen trees, each taken from a point directly in front of another tree. Each composition lines up the decomposing stump in the foreground with the growing, composing tree in the background. What would these beta programs do with these deliberately forced images? How would the program read and calculate them? And how, in turn, would those manipulations affect how we read the image ourselves?
How do machine learning models compute distance and tree-ness?
Each photograph is processed by a machine learning model tasked with analysing the image and applying natural-looking depth-of-field blur. These models are trained on a forest of images, which inform them through real-world examples of what distance looks like.
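The beta filters themselves are proprietary, but the first step they describe, estimating depth from a single photograph, can be approximated with open tools. Below is a minimal sketch, assuming the open-source MiDaS model loaded via torch.hub as a stand-in for the commercial networks; the filename is illustrative.

```python
# A minimal sketch of single-image depth estimation, using the open
# MiDaS model as a stand-in for the proprietary beta filters.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = cv2.cvtColor(cv2.imread("stump_and_tree.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))
    # Resize the prediction back to the photograph's resolution.
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()
# `depth` is a relative map: larger values read as nearer to the lens.
```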
Those per-pixel estimates in turn enable the creation of depth mattes, placing one thing in front of another, before blurs of varying strength are applied to the image.
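One way to realise that matte-and-blur step, sketched here assuming the `img` and `depth` arrays from the previous snippet: quantise the depth map into a handful of mattes, blur each matte’s slice of the image in proportion to its distance from a chosen focal plane, and recomposite. The layer count, focal position, and blur ceiling are illustrative parameters, not the values any commercial filter uses.

```python
import cv2
import numpy as np

def fake_depth_of_field(img, depth, focal=0.9, layers=6, max_blur=12):
    # Normalise depth to 0..1 (1 = nearest) and keep every pixel in a slice.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    d = np.clip(d, 0.0, 1.0 - 1e-6)
    out = np.zeros(img.shape, dtype=np.float32)
    for i in range(layers):
        lo, hi = i / layers, (i + 1) / layers
        # Depth matte for this slice: 1 where the pixel falls in the band.
        matte = ((d >= lo) & (d < hi)).astype(np.float32)[..., None]
        # Blur strength grows with the slice's distance from the focal plane.
        k = 2 * int(abs((lo + hi) / 2 - focal) * max_blur) + 1  # odd kernel
        out += cv2.GaussianBlur(img.astype(np.float32), (k, k), 0) * matte
    return out.astype(np.uint8)

result = fake_depth_of_field(img, depth, focal=0.9)  # keep the near stump sharp
cv2.imwrite("stump_and_tree_dof.jpg", cv2.cvtColor(result, cv2.COLOR_RGB2BGR))
```

Every step here is an estimate stacked on an estimate: if the model reads the foreground stump as part of the distant canopy, the matte boundary slices through the wrong tree, and the blur lands exactly where the photographs invite the confusion.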
This odd materiality of tree-ness is grafted together from computational materiality’s attempt to recreate the look of a lens-based materiality. Is computational photography limited to a limitless pastiche of all recorded styles?