Gathered here are some of the more interesting results from my first few days of experimenting with Google’s deepdream project. Several weeks ago a research team at Google captured the imagination of science and technology enthusiasts worldwide by sharing details of a new technique to visualize what’s going on inside neural networks trained to recognize images (with obvious applications for the search engine goliath). Last week they released the source code, inadvertently creating an entirely new class of generative art: deepdreaming. I jumped at the opportunity to involve myself in the burgeoning movement, puzzled out how to run the code on my local system, and began exploring the possibilities using my own original photography from Taiwan 台灣.
You may be wondering how these images are generated. I might not be the best person to describe the process in detail—my first-hand experience with artificial intelligence is limited to a single cognitive science course from my undergraduate studies—but I’ll do my best! The code that Google released is essentially a feedback loop: the user submits an image; that image is analyzed by a neural network trained to recognize certain categories (dogs, birds, buildings, cars, and so on); a new image is generated from the results of that analysis; and finally, that new image is fed back into the beginning of the process. There is a twist, of course: an element of random chance is introduced in the form of jitter and zoom to ensure the network has slightly different data to work with as the process unfolds. Repeat these steps ad infinitum and the program dreams entire worlds into being.
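For the programmers in the audience, the feedback loop can be sketched in a few lines of Python. To be clear, this is a toy illustration rather than the actual deepdream code: the `toy_gradient` function here is a made-up stand-in for the real step, which runs a forward pass through a Caffe model and backpropagates the layer activations. Only the overall structure (jitter, amplify, un-jitter, repeat) mirrors the released script.

```python
import numpy as np

def toy_gradient(img):
    # Stand-in for the neural network step: the real code computes a
    # gradient from the network's activations; here we just amplify
    # whatever deviates from the mean brightness.
    return img - img.mean()

def dream_step(img, rng, jitter=4, step_size=0.1):
    # Random jitter: shift the image slightly so the network sees
    # different data on every pass through the loop.
    ox, oy = rng.integers(-jitter, jitter + 1, size=2)
    shifted = np.roll(np.roll(img, ox, axis=0), oy, axis=1)
    # Nudge the image in the direction of the (toy) gradient.
    g = toy_gradient(shifted)
    shifted += step_size * g / (np.abs(g).mean() + 1e-8)
    # Undo the jitter and keep pixel values in a valid range.
    img = np.roll(np.roll(shifted, -ox, axis=0), -oy, axis=1)
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))   # a toy grayscale "photograph"
for _ in range(10):          # the feedback loop: output becomes input
    img = dream_step(img, rng)
```

The real program also rescales the image across several octaves of zoom, which is where the dream-within-a-dream quality comes from; that detail is omitted here for brevity.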
How can a neural network learn to recognize a dog, a building, or a car? My understanding is that neural networks are trained by subjecting them to an incredible number of images previously categorized by human beings. Iterate this process long enough and a network can learn to recognize the visual features of a given category—with varying levels of proficiency, of course. Google is by no means the only organization experimenting with and training neural networks—many more trained models can be found on GitHub. The images featured in this post were generated with both the default BVLC GoogLeNet model as well as the Places205-GoogLeNet model from the MIT Computer Science and Artificial Intelligence Laboratory.
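The training idea can be shown with a deliberately tiny example. This is nothing like how GoogLeNet is actually trained (real networks learn from millions of images across many layers), but a single artificial neuron fitted to made-up labeled data demonstrates the same principle: repeatedly adjust the weights until the predictions agree with the human-assigned categories.

```python
import numpy as np

# A toy "labeled dataset": random feature vectors standing in for images,
# with labels (1 = dog, 0 = not a dog) assigned by a hypothetical rule.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
hidden_rule = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ hidden_rule > 0).astype(float)

# A single-neuron "network" trained by gradient descent on those labels.
w = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probability of "dog"
    w -= 0.1 * X.T @ (p - y) / len(y)    # nudge weights toward the labels

# After training, the neuron has learned to reproduce the labeling.
predictions = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = float((predictions == (y == 1)).mean())
```

Show the network enough labeled examples and its weights gradually encode what "dog" looks like; deepdream simply runs that knowledge in reverse.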
Interested in creating your own digital dreams? You may have to wait a bit—the source code is so new that nobody has yet had the time to create any quick, easy, and reliable way to craft your own without knowledge of the command line and a bit of programming. If you’re proficient with computers you can refer to the technical notes I published on my development blog, otherwise I suggest scanning r/deepdream on Reddit for links to the upload services that periodically go online (before being hugged to death by overwhelming demand).
You may be wondering: why Taiwan 台灣 of all places? I have spent most of the last two years exploring its many surreal landscapes and abandoned places so it was a natural fit for this first foray into deepdreaming. Many of the images presented here have already appeared on my blog. If you like what you see I encourage you to browse through my Taiwan 台灣 archives or check out my photography.
The dreams that appear here are lightly edited in Photoshop to reduce noise and discoloration. I have taken artistic license with a few of them, particularly this last image from the badlands of southern Taiwan, which looks much more interesting when applied to the original as a monochrome overlay.
Prints of several of these dreams are available for sale through Fine Art America: Fuxing Shell Temple, Daodong Academy Tiger, Technicolor Windowpane, Thirteen Levels Zoo, and Tianliao Moonworld. Commercial licenses are available through this same service or you are welcome to contact me to negotiate terms.