I’m fascinated by computer-generated art and I think it would be great if a computer could learn how to paint. As an experiment, I’ve created a deep neural network that takes a coordinate input X of 2 neurons, followed by 5 hidden layers with 64, 128, 256, 128 and 64 neurons respectively. The final layer outputs 3 neurons: the predicted output, denoted Y'. Writing out the network gives:

$\begin{eqnarray} H_1 &=& \phi \left( X W_1 + b_1 \right) \\ H_2 &=& \phi \left( H_1 W_2 + b_2 \right) \\ H_3 &=& \phi \left( H_2 W_3 + b_3 \right) \\ H_4 &=& \phi \left( H_3 W_4 + b_4 \right) \\ H_5 &=& \phi \left( H_4 W_5 + b_5 \right) \\ Y' &=& \phi \left( H_5 W_6 + b_6 \right) \\ \end{eqnarray}$
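As a sketch, the forward pass from these equations can be written in a few lines of NumPy. The helper names (`SIZES`, `init_params`, `forward`) are my own, and I assume tanh for $\phi$ here; any other non-linearity would slot in the same way:

```python
import numpy as np

# Layer widths from the text: 2 inputs (x, y) -> 64 -> 128 -> 256 -> 128 -> 64 -> 3 (RGB)
SIZES = [2, 64, 128, 256, 128, 64, 3]

def init_params(rng, sizes=SIZES):
    """One (W_i, b_i) pair per layer, with 1/sqrt(fan_in) weight scaling."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(X, params, phi=np.tanh):
    """H_i = phi(H_{i-1} W_i + b_i); the same phi is applied to the output layer."""
    H = X
    for W, b in params:
        H = phi(H @ W + b)
    return H

rng = np.random.default_rng(0)
params = init_params(rng)
coords = rng.uniform(-1, 1, size=(4, 2))  # four (x, y) coordinate pairs
out = forward(coords, params)             # shape (4, 3): one RGB triple per coordinate
```

Because the input is only a coordinate pair, the network maps every pixel position independently to a colour — that is what lets it "paint" an image of any resolution later on.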

where $\phi$ is the non-linear activation function. The cost function is defined as

$cost(Y, Y') = \frac{1}{B}\sum_{b=1}^{B} \sum_{c=1}^{C}\left(Y_{b,c} - Y'_{b,c}\right)^2,$

where B is the mini-batch size (the number of samples per batch) and C is the number of colour channels. By minimising this cost function with mini-batch stochastic gradient descent, we train the machine to paint the given training example.
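To make the training step concrete, here is a minimal hand-rolled SGD step for this cost, again assuming tanh activations throughout (the names `init_params` and `train_step` are my own; in practice a framework's autograd would do the backward pass for you):

```python
import numpy as np

SIZES = [2, 64, 128, 256, 128, 64, 3]

def init_params(rng, sizes=SIZES):
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def train_step(X, Y, params, lr=0.05):
    """One SGD step on the mean squared error over a mini-batch."""
    Hs = [X]                                  # cache activations for backprop
    for W, b in params:
        Hs.append(np.tanh(Hs[-1] @ W + b))
    Y_pred, B = Hs[-1], X.shape[0]
    cost = np.mean(np.sum((Y - Y_pred) ** 2, axis=1))
    grad = 2.0 * (Y_pred - Y) / B             # dC/dY'
    for i in reversed(range(len(params))):
        W, b = params[i]
        grad = grad * (1.0 - Hs[i + 1] ** 2)  # tanh'(z) = 1 - tanh(z)^2
        dW, db = Hs[i].T @ grad, grad.sum(axis=0)
        grad = grad @ W.T                     # propagate to the previous layer
        params[i] = (W - lr * dW, b - lr * db)
    return cost

rng = np.random.default_rng(0)
params = init_params(rng)
X = rng.uniform(-1, 1, (32, 2))               # a mini-batch of 32 pixel coordinates
Y = rng.uniform(-1, 1, (32, 3))               # target colours, scaled to [-1, 1]
costs = [train_step(X, Y, params) for _ in range(200)]
```

Running `train_step` repeatedly on mini-batches of (coordinate, colour) pairs drives the cost down, which is all the "learning to paint" amounts to.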

By carefully selecting the activation functions (per layer), the batch size and the number of neurons in the hidden layers, the machine is able to create different painting styles. Personally, I like ‘art deco’ and ‘cubism’ very much. So let’s take this beautiful painting by Georges van den Bos and let our deep neural network learn to paint it in its own style.
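Once trained, "painting" is just evaluating the network on every pixel coordinate of a grid. A sketch of that rendering step (the `paint` helper and the [-1, 1] coordinate convention are my own assumptions; the tanh output is rescaled to [0, 1] for display):

```python
import numpy as np

SIZES = [2, 64, 128, 256, 128, 64, 3]

def forward(X, params, phi=np.tanh):
    H = X
    for W, b in params:
        H = phi(H @ W + b)
    return H

def paint(params, width=64, height=64):
    # One (x, y) coordinate per pixel, normalised to [-1, 1].
    xs, ys = np.linspace(-1, 1, width), np.linspace(-1, 1, height)
    grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
    rgb = forward(grid, params)
    # Map the tanh output from [-1, 1] to [0, 1] for display.
    return ((rgb + 1.0) / 2.0).reshape(height, width, 3)

rng = np.random.default_rng(1)
params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
          for m, n in zip(SIZES[:-1], SIZES[1:])]
img = paint(params)  # (64, 64, 3) array, e.g. for matplotlib's imshow
```

A nice side effect of painting from coordinates is that the same trained network can render the image at any resolution, simply by using a finer grid.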

Personally, I think the result is quite amazing!

When plotting the cost over iterations, we see that the network converges in fewer than 50 iterations. The spikes in the line are due to the use of mini-batches, which make the cost curve less smooth.

Below you can see how the image evolves per iteration. In the first iterations, the machine, like a human painter, starts with the background. Then it builds up the shapes and adds more and more detail with each iteration step. Enjoy!