A year or two ago, if you’d asked, I would have suggested that yes, eventually, software and robots will take all our jobs. The only jobs that were potentially safe were the purely creative ones: writers, painters, musicians, artists of one kind or another. But this year I started painting, and recently I began collaborating with deep neural networks as part of my artistic process. Now I can see that the future seldom runs in a straight line from the present; it usually ends up being far more nuanced, with twists and turns that are only apparent in hindsight. Integrating an AI into my artistic process has enhanced my work and made me a better painter too.
Part of my current artistic process is to work with photos. So far I’ve used family photos or found photos with compositions that I find appealing. I’m especially drawn to scenes with anonymous figures who have their backs turned or their faces fully or partially obscured, and those are the types of things that I’ve painted. Here are a couple of example paintings from early this year:
“Wildwood 66” is based on a photo of my aunt and uncle taken at the beach in Wildwood, New Jersey, in 1966, before I was born. In the photo my aunt and uncle stand prominently in the foreground, and in the background there are numerous figures heading into and coming out of the waves.
When I started working with this image I removed my aunt and uncle entirely, and in that way removed any personal connection I had to it; it’s those background figures that I love. They’re uninhibited because they aren’t the subject of the photo or even aware it’s being taken; they’re anonymous and many of them are likely dead; each one had or has a life that’s probably just beyond our ability to know anything about. All we have is the photograph that gives us a tiny slice of them in that one moment. After a couple of test paintings I took the background figures that I loved the most and composed them into an idealistic, nostalgic and ultimately weird landscape where even when they’re together, they’re separate.
This is a self-portrait, based on a photo taken by my wife. “Haver” is my internal art persona. He is a honey badger who doesn’t stop and who only thinks in 100-painting increments. He’s also a way for me to anonymize the idea of myself as a subject (Haver’s beard is much fuller than my own). Again there’s a disconnect between the subject and the background or landscape; in this case I was exploring the idea of personal acceptance.
Partway through the year I bought a brown bag full of photos for $5. I didn’t pick and choose the photos; I just shovelled them into the bag from a giant bin in a junk store, happy with the idea that they would provide me with many new subjects and compositions. I was right: they were mostly vacation photos from the late 90s and early 2000s. Here’s a painting based on one of the compositions that really appealed to me:
I removed, simplified and flattened almost everything from the original photo. I only kept a small number of figures that I liked, again totally anonymous people, and put them in water that enhances their separation from each other, even when they’re together.
Enter the AI
Early in June I discovered style-transfer and neural-style. They’re both applications of a neural algorithm that analyzes an artistic style and then applies it to a photograph. I immediately wanted to use them to apply the style of some of my own paintings to the found photos that I’d planned to paint. Of the two, I’ve found neural-style to be more feature-rich; it allows you to blend multiple style images and to produce images at various points in the processing. Given that processing an image can take many hours, this is a big advantage because you can see relatively early whether something is going to work or not. Sometimes the earlier iterations of the process end up being more interesting as well. Here’s neural-style, doing its thing to a selfie based on the style of one of my paintings:
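For the curious, the core idea behind both tools comes from Gatys et al.’s “A Neural Algorithm of Artistic Style”: the style of an image is summarized as Gram matrices (channel-to-channel correlations) of convolutional feature maps, and the output is optimized to match those correlations while keeping the photo’s content. Here’s a minimal numpy sketch of just the style term — illustrative only, not either tool’s actual code, and the random arrays below merely stand in for real network activations:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) activation map.

    Channel-to-channel correlations capture texture and style while
    discarding the spatial layout of the image.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(style_features, generated_features):
    """Mean squared difference between the two Gram matrices."""
    return np.mean((gram_matrix(style_features)
                    - gram_matrix(generated_features)) ** 2)

# Toy demo: random "activations" standing in for real conv features.
rng = np.random.default_rng(0)
style = rng.standard_normal((8, 16, 16))
photo = rng.standard_normal((8, 16, 16))

identical = style_loss(style, style)  # an image matches its own style
different = style_loss(style, photo)  # two unrelated "styles" differ
```

In the real algorithm this loss is computed over activations from several layers of a pretrained network, and the generated image itself is the variable being optimized — which is why each run takes so long.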
Once I got the software installed and tested, I took a beautiful found photo of two anonymous figures heading towards a bus in northern Canada and applied the style of “Haver in a pink lawn chair” to it.
Whoa! I loved the result that the neural process produced; especially the way it flattened and darkened the figures against the landscape. That’s what I painted:
What I really liked was that the AI took my own process further: it added an additional disconnect to the source material, and it injected some randomness of colour and texture that I wouldn’t have thought to add at this point on my own. And it did all this based on my own artistic style.
In my next neural-processing experiment, I used a painting I’d done a couple of weeks earlier called “Dyatlov Pass” as the style input.
In “Dyatlov Pass” I was experimenting with a looser, quicker painting style. It repeats the motifs that have been developing in my work so far: anonymous figures, obscured or with their backs to us, within a landscape that lacks or distorts depth or which is at odds with the figurative elements. I inserted a figure from one of my found vacation photos in the bottom right. She’s detached, clearly not part of the scene, almost as if she’s viewing the same painting that she’s a part of.
I decided I wanted to create a second painting containing her, along with a couple of her companions from the original photo. I wanted this new painting to continue exploring the concept of viewership, and specifically ideas around viewing art. I sketched out a composition, painted an underlayer on the canvas, photographed it and then composited the figures from the photo into it. Then I used neural-style to apply the style of “Dyatlov Pass” to the composite photo.
I’m really pleased with this piece: with the uncomfortable stance of the figure on the left who’s little more than a black outline, the steam that rises from two of the figures’ heads as they contemplate the non-painting, and the figure on the right who is oblivious as she ties her shoe. Here again the AI took my own process further, suggesting more disconnect between the figures and the background and adding a strange depth where I probably would have opted for flatness.
Here’s a sneak peek at the neural-processed photo that I’m using as the basis for the painting I’m currently working on. It has some familiar motifs and throws in a dog too.
So, why add this step into my art, especially when the processing can take many hours and occasionally just doesn’t do much?
There are some things I really appreciate about experimenting with an AI as part of my painting process. The first is its ability to create benevolent accidents — to inject randomness into the process and to deploy colours or textures in ways I may not have considered but that add something interesting and powerful to the overall composition. This works in lockstep with the other aspects of my artistic process as it develops.
The second, very practical benefit is the way it has stretched me as a self-taught artist. My tendency has been to approach subjects in a reductionist manner: to flatten, remove or simplify things. The neural-processed photos have so far challenged me to take on new complexities and broaden my skills, helping me to further develop my style while still remaining grounded in it.
Finally, I’ve always been interested in technology and the future. I wrote a doctoral dissertation on cinema as a technology and the way that various post-cinematic technologies have shaped and then taken over what “cinema” is. For me, it feels natural to mix neural processing into an activity that is already so heavily reliant on various technologies. The only difference is that all of those other technologies have now become so commonplace that we cease to see them as technologies at all. One day the same will be true of AIs.