Deep Convolutional Generative Adversarial Networks

Those who can’t do, teach.

Those who can’t discriminate, generate.

For my final project for my Neural Network Self Study (NNOSS), I implemented a DCGAN, which takes any set of images and attempts to generate more images similar to those in the set. Shouts out to Olin alums Alec Radford and Luke Metz, who authored the paper describing this architecture here. Code can be found here.

I trained the GAN on the CelebA dataset (celebrity faces) and on the forest path and sea cliff categories of the MIT Places database. Examples are shown below (I did not fully train the sea cliff model, so those outputs are pretty noisy).





The first three videos show exploration of the latent spaces of the different models. In English: you feed the neural nets a bunch of numbers roughly symbolizing the “features” of the images. For these networks there are 100 features, each of which I varied from -1 to 1 while keeping the rest at 0. You can see that towards the middle all the images are the same (since the features are all 0s for all of them). If you want, you can think of this as the “average” {forest path, face, sea cliff}. That wouldn’t be exactly accurate, but you can think of it that way if you want.
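The sweep described above can be sketched in a few lines. This is a hypothetical reconstruction (the function and step count are my own, not from the original code): vary a single dimension of the 100-dimensional latent vector from -1 to 1 while holding the other 99 at 0.

```python
import numpy as np

LATENT_DIM = 100  # matches the 100 features described above

def latent_sweep(dim, steps=9):
    """Return a (steps, LATENT_DIM) batch varying only dimension `dim`."""
    batch = np.zeros((steps, LATENT_DIM))
    batch[:, dim] = np.linspace(-1.0, 1.0, steps)
    return batch

codes = latent_sweep(dim=7)
print(codes.shape)     # (9, 100)
print(codes[4].sum())  # 0.0 -- the middle row is all zeros, the "average" image
```

Feeding each row of `codes` to the trained generator produces the frames of one sweep; the middle frame is the all-zeros "average" image mentioned above.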

This video shows the output of a fixed random set of features as the model trains.


Nerd Alert: If you understand the long-winded title, read on. If you thought “Hey, I recognize some of those ML buzzwords!”, check out links on: Deep, Convolutional, Generative, Adversarial, Networks (actually just check out the links at the bottom of the post). If you thought “oh boy, well I’ll figure out what he was talking about” or “I probably can’t say that 3 times fast”, then this may not be the section for you.

Architecture: I used a Generator and Discriminator architecture with 5 hidden layers, each with 128 5×5 filters with stride 2 (except the last). The latent space was 100-dimensional and the generated images were around 150×150.
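As a rough sanity check on those numbers: with "same"-padded stride-2 transposed convolutions, each generator layer doubles the spatial size. The 5×5 seed size below is my assumption (the post only says the outputs were around 150×150); five doublings from it land at 160×160, which is in the right ballpark.

```python
# Hedged sketch: output size of a "same"-padded stride-2 transposed
# convolution is simply input_size * stride.
def deconv_out(size, stride=2):
    return size * stride

size = 5  # assumed seed spatial size, not stated in the post
for _ in range(5):
    size = deconv_out(size)
print(size)  # 160
```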

Training: I used the Adam optimizer with a learning rate of 1e-4 and a batch size of 32 to start, increasing to 64 later in training.
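For reference, a single Adam update looks like the following. This is a minimal numpy sketch of the standard algorithm (Kingma & Ba's defaults for the moment decay rates), not the author's training loop; only the 1e-4 learning rate comes from the post.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)           # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0])
m = v = np.zeros(1)
theta, m, v = adam_step(theta, grad=np.array([2.0]), m=m, v=v, t=1)
print(theta)  # the first step moves the parameter by ~lr: array([0.9999])
```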

I arrived at this architecture/process through some light experimentation and trial and error; it differed slightly between training sets but was mostly constant.

N = 1 Observations:

  1. Batch size is important. When training, anything below 32 would not get very far for me, and even at 32 the model will bounce around a poor minimum. I needed to bump the batch size up to 64 to reach a good minimum. I assume this is due to the dimensionality of the images (~128×128) and the variance of the data; the batch size could probably be lowered if either were smaller. This does have implications for training, as some computers will not have enough RAM to run this efficiently.
  2. I found that batch normalization after each layer in the Generator was essential for successful training.
  3. Pay a lot of attention to what activation functions you use. Leaky ReLU was somewhat helpful to avoid dead nodes. Also, I would have thought that sigmoid would be ideal for the final output of the generator, but I found tanh gave better training results.
  4. If your data is supervised, append your labels in multiple places. For a typical example, if you were to try to generate MNIST data points and wanted to specify which digit to generate, you (like me) might think you can append the label onto your latent space vector and be fine. Although this might work eventually, for any deep model the path of gradients between the discriminator outputs and the latent space is very long. It is much easier if at each layer you append your class (with appropriate dimensions). That way there are multiple, shorter paths between the output and input for gradients to flow.
  5. For most other aspects there is some wiggle room. I found that for this particular data set, the number of layers, depth of layers, and learning rate were less important than the aforementioned observations.
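Observation 4 (appending the class at every layer) can be sketched with plain numpy shapes. This is a hypothetical illustration, not the author's code: broadcast a one-hot label to each layer's spatial size and concatenate it along the channel axis, giving gradients a short path to the label at every depth.

```python
import numpy as np

def append_label(feature_map, one_hot):
    """feature_map: (batch, H, W, C); one_hot: (batch, num_classes)."""
    b, h, w, _ = feature_map.shape
    # Tile the label across every spatial position of this layer.
    tiled = np.broadcast_to(one_hot[:, None, None, :],
                            (b, h, w, one_hot.shape[1]))
    # Concatenate along the channel axis: C grows by num_classes.
    return np.concatenate([feature_map, tiled], axis=-1)

x = np.zeros((32, 16, 16, 128))          # an intermediate layer's activations
labels = np.eye(10)[np.zeros(32, int)]   # batch of one-hot "digit 0" labels
x = append_label(x, labels)
print(x.shape)  # (32, 16, 16, 138)
```

Doing this at each layer (with `h` and `w` matching that layer) is what creates the multiple shorter gradient paths described in observation 4.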

End Nerd Alert


Good Link to Learn about GANs

More code for Convolutional GANs

Github Code I used for ideas when I hit a block

Original Paper that was my inspiration

OG GAN Paper

My code

Finally as always if you have any questions on implementation or anything else feel free to reach out through the links below.
