After doing this, I realized the value of scale. These images look much better on an 8'x8' display.

In the past I wanted to create music. It turns out that music is really hard: it's absurdly demanding computationally, and human ears aren't very forgiving. You can check out my attempt to compose music over on my SoundCloud.



The images here use a CPPN (Compositional Pattern Producing Network) to generate images from random neural networks. A CPPN maps each pixel's coordinates to a color independently, one pixel at a time, so we can use it to render images at arbitrary resolution. If you're generating 8K images, I highly recommend using something more powerful than a laptop. If you just want to make some of these yourself, I have a PyTorch implementation up on GitHub here. It's pretty easy to get started, and you can tweak pretty much anything in the code to get different results. All the images displayed here use the same program. Hardmaru (David Ha) has a really nice tutorial on CPPNs; I just extended the idea.
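To make the idea concrete, here's a minimal sketch of a CPPN in PyTorch. This isn't my actual implementation (see the GitHub repo for that); the class names, layer sizes, and the (x, y, r) coordinate features are just illustrative assumptions, but they show why resolution is arbitrary: the network only ever sees per-pixel coordinates.

```python
import torch
import torch.nn as nn

class CPPN(nn.Module):
    """Tiny CPPN: maps per-pixel (x, y, r) plus a shared latent z to an RGB value."""
    def __init__(self, z_dim=8, hidden=32, layers=4):
        super().__init__()
        net = [nn.Linear(3 + z_dim, hidden), nn.Tanh()]
        for _ in range(layers - 1):
            net += [nn.Linear(hidden, hidden), nn.Tanh()]
        net += [nn.Linear(hidden, 3), nn.Sigmoid()]  # RGB in [0, 1]
        self.net = nn.Sequential(*net)

    def forward(self, coords, z):
        # coords: (N, 3) rows of (x, y, r); z: (z_dim,) latent shared by every pixel
        z = z.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=1))

def make_coords(h, w):
    # Normalized pixel grid plus radial distance, one row per pixel.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    r = torch.sqrt(xs**2 + ys**2)
    return torch.stack([xs, ys, r], dim=-1).reshape(-1, 3)

# Render at any resolution from the same random weights.
cppn = CPPN()
z = torch.randn(8)
img = cppn(make_coords(1080, 1920), z).reshape(1080, 1920, 3)
```

Because nothing in the forward pass depends on the image size, you can re-render the same network at 256x256 for previews and at 8K for the final print.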





By using the same CPPN program we can generate a batch of images and interpolate thousands of frames between them. If we composite the resulting frames together, we get a video of how the neural network responds to different inputs.
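A hedged sketch of that interpolation, reusing the illustrative `CPPN` class above: the weights stay fixed and only the latent vector is blended, so the frames morph smoothly from one image to the next.

```python
import torch

def interpolate_frames(cppn, coords, h, w, z_start, z_end, n_frames=240):
    # Linearly blend between two latent vectors and render one frame per step.
    frames = []
    with torch.no_grad():
        for t in torch.linspace(0, 1, n_frames):
            z = (1 - t) * z_start + t * z_end
            frames.append(cppn(coords, z).reshape(h, w, 3).numpy())
    return frames
```

The resulting frames can then be stitched into a video with something like ffmpeg or imageio.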





The most interesting thing we can do with a CPPN is to wire it up to a more powerful generative model. Below is a CPPN-GAN, where the generator is replaced by a CPPN. Modeling with a CPPN-GAN is hard, as the CPPN lacks the representational power to model complicated structures. We can leverage advances in GAN training techniques to some degree -- here I've used WGAN-GP, which trains very reliably. Still, complicated datasets such as CIFAR-10 and CelebA remain hard to generate. MNIST is pretty easy even with linear networks, but we really want to use convolutions to capture local relationships. That remains an open problem, and I've failed at it a lot. CPPN autoencoding is my personal goal -- generating arbitrarily high resolution images from small ones without using something like ESRGAN.
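For reference, the piece that makes this train reliably is the WGAN-GP gradient penalty. The sketch below is the standard formulation of that term, not my exact training code; in the CPPN-GAN the generator is just a CPPN rendering a batch of fixed-resolution images from per-sample latents, and the critic is an ordinary image network.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # WGAN-GP: push the critic's gradient norm toward 1 on points
    # interpolated between real and generated images (shape: B, C, H, W).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(outputs=scores, inputs=mixed,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```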










