Around May 2015, a researcher at Google, Alexander Mordvintsev, took a deep network meant to recognize objects in an image and instead used it to *generate* new objects in an image. The internet quickly exploded after seeing one of the images it produced. Soon after, Google posted a blog entry explaining the technique, which they named "Inceptionism", and tons of interesting outputs were soon created.

The Inception 5h model was trained on low-resolution images of around 300 pixels. When given a high-resolution image, the Inception model will create only small patterns in it. One solution would be to downsize the input to around 300 pixels, but then the output is low resolution and not usable for creating art.

I wrote code that uses the following method. The idea is to downsize the image first and run the Deep Dream algorithm on the smaller image, so the created patterns are large relative to the image size. Then resize the image to a slightly larger size, blend in a bit of the original image at that size to retain detail, and run it through the Inception model again. Repeating this over and over results in a high-resolution output with large Deep Dream patterns.
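The loop above can be sketched roughly as follows. This is a minimal illustration of the multi-scale idea only: `dream_step` is a hypothetical placeholder for one gradient-ascent pass through the Inception model (here replaced by a trivial perturbation so the sketch runs), and the octave count, scale factor, and blend weight are assumed example values, not the exact ones used for the images below.

```python
import numpy as np
from scipy.ndimage import zoom


def dream_step(img):
    # Hypothetical stand-in for one Deep Dream gradient-ascent step
    # on the Inception model; a mild perturbation keeps the sketch runnable.
    return img + 0.1 * (img - img.mean())


def multiscale_dream(image, n_octaves=4, scale=1.4, blend=0.3, steps=5):
    """Dream on a small copy first, then repeatedly upscale and
    blend back detail from the original before dreaming again."""
    # Build downscaled copies of the original, smallest first.
    originals = [image]
    for _ in range(n_octaves - 1):
        originals.append(zoom(originals[-1], 1 / scale, order=1))
    originals = originals[::-1]

    dreamed = originals[0]
    for octave, base in enumerate(originals):
        if octave > 0:
            # Upscale the previous result to this octave's exact size.
            factors = np.array(base.shape) / np.array(dreamed.shape)
            dreamed = zoom(dreamed, factors, order=1)
            # Blend in detail from the original at this resolution.
            dreamed = (1 - blend) * dreamed + blend * base
        for _ in range(steps):
            dreamed = dream_step(dreamed)
    return dreamed
```

The final octave uses the full-resolution original, so the output has the same size as the input while carrying the large patterns grown at the coarser scales.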

Below are some of the artworks I created with my code.

Ameland, The Netherlands
Wall Street, New York, photography: Gilbert François
Original published in Be Magazine France (2012), photography Gilbert François, model: Nane @ Ford Models Paris
Original photo published in Elle, Paris (2012), photography Gilbert François.