PyTorch/dcgan_faces_tutorial.ipynb

{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nDCGAN Tutorial\n==============\n\n**Author**: `Nathan Inkawhich <https://github.com/inkawhich>`__\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Introduction\n------------\n\nThis tutorial will give an introduction to DCGANs through an example. We\nwill train a generative adversarial network (GAN) to generate new\ncelebrities after showing it pictures of many real celebrities. Most of\nthe code here is from the dcgan implementation in\n`pytorch/examples <https://github.com/pytorch/examples>`__, and this\ndocument will give a thorough explanation of the implementation and shed\nlight on how and why this model works. But dont worry, no prior\nknowledge of GANs is required, but it may require a first-timer to spend\nsome time reasoning about what is actually happening under the hood.\nAlso, for the sake of time it will help to have a GPU, or two. Lets\nstart from the beginning.\n\nGenerative Adversarial Networks\n-------------------------------\n\nWhat is a GAN?\n~~~~~~~~~~~~~~\n\nGANs are a framework for teaching a DL model to capture the training\ndatas distribution so we can generate new data from that same\ndistribution. GANs were invented by Ian Goodfellow in 2014 and first\ndescribed in the paper `Generative Adversarial\nNets <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>`__.\nThey are made of two distinct models, a *generator* and a\n*discriminator*. The job of the generator is to spawn fake images that\nlook like the training images. The job of the discriminator is to look\nat an image and output whether or not it is a real training image or a\nfake image from the generator. During training, the generator is\nconstantly trying to outsmart the discriminator by generating better and\nbetter fakes, while the discriminator is working to become a better\ndetective and correctly classify the real and fake images. The\nequilibrium of this game is when the generator is generating perfect\nfakes that look as if they came directly from the training data, and the\ndiscriminator is left to always guess at 50% confidence that the\ngenerator output is real or fake.\n\nNow, lets define some notation to be used throughout tutorial starting\nwith the discriminator. Let $x$ be data representing an image.\n$D(x)$ is the discriminator network which outputs the (scalar)\nprobability that $x$ came from training data rather than the\ngenerator. Here, since we are dealing with images, the input to\n$D(x)$ is an image of CHW size 3x64x64. Intuitively, $D(x)$\nshould be HIGH when $x$ comes from training data and LOW when\n$x$ comes from the generator. $D(x)$ can also be thought of\nas a traditional binary classifier.\n\nFor the generators notation, let $z$ be a latent space vector\nsampled from a standard normal distribution. $G(z)$ represents the\ngenerator function which maps the latent vector $z$ to data-space.\nThe goal of $G$ is to estimate the distribution that the training\ndata comes from ($p_{data}$) so it can generate fake samples from\nthat estimated distribution ($p_g$).\n\nSo, $D(G(z))$ is the probability (scalar) that the output of the\ngenerator $G$ is a real image. 
As described in `Goodfellows\npaper <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>`__,\n$D$ and $G$ play a minimax game in which $D$ tries to\nmaximize the probability it correctly classifies reals and fakes\n($logD(x)$), and $G$ tries to minimize the probability that\n$D$ will predict its outputs are fake ($log(1-D(G(z)))$).\nFrom the paper, the GAN loss function is\n\n\\begin{align}\\underset{G}{\\text{min}} \\underset{D}{\\text{max}}V(D,G) = \\mathbb{E}_{x\\sim p_{data}(x)}\\big[logD(x)\\big] + \\mathbb{E}_{z\\sim p_{z}(z)}\\big[log(1-D(G(z)))\\big]\\end{align}\n\nIn theory, the solution to this minimax game is where\n$p_g = p_{data}$, and the discriminator guesses randomly if the\ninputs are real or fake. However, the convergence theory of GANs is\nstill being actively researched and in reality models do not always\ntrain to this point.\n\nWhat is a DCGAN?\n~~~~~~~~~~~~~~~~\n\nA DCGAN is a direct extension of the GAN described above, except that it\nexplicitly uses convolutional and convolutional-transpose layers in the\ndiscriminator
]
},
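{
"cell_type": "markdown",
"metadata": {},
"source": [
"A useful extra step of reasoning from Goodfellow's paper: for a fixed generator, the discriminator that maximizes $V(D,G)$ is\n\n\\begin{align}D^*(x) = \\frac{p_{data}(x)}{p_{data}(x) + p_g(x)}\\end{align}\n\nso once the generator has matched the data distribution ($p_g = p_{data}$), the best the discriminator can do is $D^*(x) = \\frac{1}{2}$ everywhere, which is exactly the 50% equilibrium described above.\n"
]
},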
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Random Seed: 999\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<torch._C.Generator at 0x7f1934128f30>"
]
},
"metadata": {},
"execution_count": 1
}
],
"source": [
"from __future__ import print_function\n#%matplotlib inline\nimport argparse\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.optim as optim\nimport torch.utils.data\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nimport torchvision.utils as vutils\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom IPython.display import HTML\n\n# Set random seed for reproducibility\nmanualSeed = 999\n#manualSeed = random.randint(1, 10000) # use if you want new results\nprint(\"Random Seed: \", manualSeed)\nrandom.seed(manualSeed)\ntorch.manual_seed(manualSeed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Inputs\n------\n\nLets define some inputs for the run:\n\n- **dataroot** - the path to the root of the dataset folder. We will\n talk more about the dataset in the next section\n- **workers** - the number of worker threads for loading the data with\n the DataLoader\n- **batch_size** - the batch size used in training. The DCGAN paper\n uses a batch size of 128\n- **image_size** - the spatial size of the images used for training.\n This implementation defaults to 64x64. If another size is desired,\n the structures of D and G must be changed. See\n `here <https://github.com/pytorch/examples/issues/70>`__ for more\n details\n- **nc** - number of color channels in the input images. For color\n images this is 3\n- **nz** - length of latent vector\n- **ngf** - relates to the depth of feature maps carried through the\n generator\n- **ndf** - sets the depth of feature maps propagated through the\n discriminator\n- **num_epochs** - number of training epochs to run. Training for\n longer will probably lead to better results but will also take much\n longer\n- **lr** - learning rate for training. As described in the DCGAN paper,\n this number should be 0.0002\n- **beta1** - beta1 hyperparameter for Adam optimizers. As described in\n paper, this number should be 0.5\n- **ngpu** - number of GPUs available. If this is 0, code will run in\n CPU mode. If this number is greater than 0 it will run on that number\n of GPUs\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Root directory for dataset\n",
"dataroot = \"data/celeba\"\n",
"\n",
"# Number of workers for dataloader\n",
"workers = 2\n",
"\n",
"# Batch size during training\n",
"batch_size = 128\n",
"\n",
"# Spatial size of training images. All images will be resized to this\n",
"# size using a transformer.\n",
"image_size = 64\n",
"\n",
"# Number of channels in the training images. For color images this is 3\n",
"nc = 3\n",
"\n",
"# Size of z latent vector (i.e. size of generator input)\n",
"nz = 100\n",
"\n",
"# Size of feature maps in generator\n",
"ngf = 64\n",
"\n",
"# Size of feature maps in discriminator\n",
"ndf = 64\n",
"\n",
"# Number of training epochs\n",
"num_epochs = 1\n",
"\n",
"# Learning rate for optimizers\n",
"lr = 0.0002\n",
"\n",
"# Beta1 hyperparam for Adam optimizers\n",
"beta1 = 0.5\n",
"\n",
"# Number of GPUs available. Use 0 for CPU mode.\n",
"ngpu = 0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Data\n----\n\nIn this tutorial we will use the `Celeb-A Faces\ndataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`__ which can\nbe downloaded at the linked site, or in `Google\nDrive <https://drive.google.com/drive/folders/0B7EVK8r0v71pTUZsaXdaSnZBZzg>`__.\nThe dataset will download as a file named *img_align_celeba.zip*. Once\ndownloaded, create a directory named *celeba* and extract the zip file\ninto that directory. Then, set the *dataroot* input for this notebook to\nthe *celeba* directory you just created. The resulting directory\nstructure should be:\n\n::\n\n /path/to/celeba\n -> img_align_celeba \n -> 188242.jpg\n -> 173822.jpg\n -> 284702.jpg\n -> 537394.jpg\n ...\n\nThis is an important step because we will be using the ImageFolder\ndataset class, which requires there to be subdirectories in the\ndatasets root folder. Now, we can create the dataset, create the\ndataloader, set the device to run on, and finally visualize some of the\ntraining data.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7f189b5c1250>"
]
},
"metadata": {},
"execution_count": 4
},
{
"output_type": "display_data",
"data": {
"text/plain": "<Figure size 576x576 with 1 Axes>",
"image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<svg height=\"464.398125pt\" version=\"1.1\" viewBox=\"0 0 449.28 464.398125\" width=\"449.28pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <metadata>\n <rdf:RDF xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n <cc:Work>\n <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\n <dc:date>2021-08-26T21:41:20.809860</dc:date>\n <dc:format>image/svg+xml</dc:format>\n <dc:creator>\n <cc:Agent>\n <dc:title>Matplotlib v3.4.3, https://matplotlib.org/</dc:title>\n </cc:Agent>\n </dc:creator>\n </cc:Work>\n </rdf:RDF>\n </metadata>\n <defs>\n <style type=\"text/css\">*{stroke-linecap:butt;stroke-linejoin:round;}</style>\n </defs>\n <g id=\"figure_1\">\n <g id=\"patch_1\">\n <path d=\"M 0 464.398125 \nL 449.28 464.398125 \nL 449.28 0 \nL 0 0 \nz\n\" style=\"fill:none;\"/>\n </g>\n <g id=\"axes_1\">\n <g clip-path=\"url(#p0118bc1b75)\">\n <image height=\"435\" id=\"image42c89b3184\" transform=\"scale(1 -1)translate(0 -435)\" width=\"435\" x=\"7.2\" xlink:href=\"data:image/png;base64,\niVBORw0KGgoAAAANSUhEUgAAAbMAAAGzCAYAAACl7fmHAAEAAElEQVR4nOz9SbBlWZaeh327Pc1tX+v+vIs2m8rMygQqqwCIIAgQAElIRpkGKpppoJkmIk2mkUaayqSJhjKZSSZOpBkloySDYAIoA8EqtMXMRFVlnxmRkRHh4f1rb3e63Wmwz33unpVZgIYyqxP2wrvX3HPP3nut9f//+pcAEn9x/cX1F9dfXH9x/cX1/8eXBhDKoG0NJGIMQIKUEIg3PjWRUiKliJKKlCClBCJHQyEExMCk1CynE0IIxBRJMRFDQEqJNgahJEJKhFBAoht6ttuGrh9IQqNMgZAWITRKaWIMhGELKTIpKw5ljyQRpSEUBdEWSGXyz3/jevL5Z2gp+No7ZzgfiEKwbVu00RwfLdFaIAV0bctu17DdNiA0PklchIAgJUFR1EQSznli8KTgqArDwfKAwhYIpQkJUhI450EklFYMg8P5gBsGhr6l2a6wWqKlZBgG4v5rvCfldxeEgCRQUjKfT5nNaqq6REmBDwHnHE+fvqJte/7mt34LIQQpJWKMhOAJweNDIMaIQKC0REmFlAqtNcZalDVIKYkhIqRAG4OUGqS8fe9EyvlNDB7vBlKKeQWkRAwR7xzeOXrvGJynGzw75xgiJPL3iSmN6yj/Et/4HnnBvL0Qu7ajbTv+1r/7bcqi+JVlmhBJkEQiqMQudtwMO9ZDS588pMTUVphewjZyvDjAFia/kpC4eHHO8yfPCSHm1wIEI7FlCTEgUoQU0NqgtMENA24Y8poa15VIr1+wAKQAKQVKSISQCAFSCEiJFBNJgDaGxjlerrb8tb/yu9w5uTPejmd9fc7q6px2t6Hve3ZtoB0CSgpKLZlNJ+w6R+9iXiMxIkSiLAz3Tpcs5xOUUiil889HgJAgJAlJQiCEQihFEoKQEi4kUgy4vuPq6orNrmHX9jjncM5hlaS2iuVyxuHZA3S9oKznIBQg37h76Nod//wP/qs/93B58/P3z/HP/xzxxv/T7Z4W5MeglUQrhVaCwhqKwqK1xmqFMQopEolIjIHkPN45UopoqamrCmN0fk7j+ZaXpiSmhE/QDZ510/P45TWlLThaLIgx4kMgpUgICaklX//tb3B6egoIUgoM6wt2Tz7FBEdIifTGbSYSMeb9kFKiaXu23YAPEBLEcQ+TQJEQJIYk6WKkLBTzSUXvI7sh0AyB3sfxvMnfb/8z/sxbK37d2x2p6pp/72/9bYxP4AJDcKAEprBIBIlEEoL9ck9CkAAdE65v6PuG7foavMdISYyRsiiQUuKdRymB0jqfTUCMiRQjCZAq75UYA955BKC1IsaI84EkBdV0StsPdJ2jLEqm8wWmKBFScxts8h3fLps//uM/5sXLFzmYaTthcvgIRGQYGhARESIivv1WpDgQvMfaEhB470EmopAoKcE1PDiZ8a0vv4sbBvo+b5ShabGFZbpcoIsCbQu0Lkgy8erygo8+fcyzZ1ckVVLOT1HlIVJOqCczhr6nvfqEGFveOXnAX60uKWJgKOc0xye4w0PKeomU6q2A9l/8X/7PTA38T/67/w67psMrw4vVmiATv/u73+DooESEnt16w8effMbHHz0mxJJeVlx1jp0PDEFTzY5JStE0LQwdRgzcOZzxra99jaPFCbPDU9oocEFxc73FxQ5bFaw3W5p+4OL8gu31FR/95HuUDBzUFW3XsO16UjLcbHb46ElKIKSCoLBa8eUP3ufLH77DyemcsjSEELm+WfEP/uEf8uzZOf/z3/97KJUX0zA4XN8SfZcXis8JibEWawusNdRVxcHhAZPjEwpbEAaPKQyT2RxrKzAlSUoEAZESIklc19BsrhnaHSkGUkx0bcvN1RXXV9e8uL7m6c2Gp7uOi9bRRokLAilEDvwpIpIghEgIIW+nmH8vpSSlRAgBJSVPnz6nffqC/9X/8j/l7M7xG4suQQqAIgBRea7Z8qPVY/71+aechy1KSQ5NSXWVOHOH/O6H32RaV4gQSWvPP/wv/gH/5P/9Txi6gEySkBLtVHPnvQcUVrNZXRCGjqIoWRwec/78GbvNGqk0QmhSEogIEoEElJAYLbFGUhiDURolJFIAISdwATCTkpfblperLf+L/9l/xt/92/8BCYGMLY8//j4//KP/ms9+/ie8eP6Sz55v+fxVg7Wae7OCL733Dr98ds3VLnJ9cwPRQ/KcHM35H/z7v82
3v/Vl6rqmLCcoYREIhLYEDFEYoihIukJYixOSPgbWfcK3HVfPv+C73/0uP/j4Yz59ekHTNEQROZuVPDiq+eY3f4tv/0f/Q5YffpvD+x+CKBDCjMmtJAFPn3zGf/zvffU2xMXXYYfbDAbJ6yC4/zt4+5QVSClRSmGUwghBbTWTylIXhqLU1IVlWhcs6orZtKaqDLNpxcFyTlUUVFaiJTjf0LQbNutr3GZL1/a0g8Ng+ODRexwdzfJhK4AYSQhCUrgY2frEs5uGLy43/J/+y3/KycEBf+t3vk3btmx2O1zwbJqOxckh/+v/7f+G3/29vwJR4dsbnv/T/yc3/+r/hek29F0gInNyKMDHSNt7+j7QNC0vrje8uN6xbSONCzQx4iOkmDAEiuhZxYLnvePOScE3PnzARRP46NWOzy5bLltHNwS898SY3/V9oMw54ngGivRWwEMAceDk5JT/3f/+/0AxRHCeYRhAScppi
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcEAAAHRCAYAAAASbQJzAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAAEAAElEQVR4nOz9d4xuaX7fiX2edNIbK97YN3Sa6ZnhJHLIYaZ2IVuiZNGBlmV4DRvwwsYChrGAFmtoAa8WwtowjF3DMrwGFrItEKACJUqUVktRK44SNYkTNDPk9ExPx9s3V643nvQE//Gct6ru7dvdt8lhEFm/mepbVe+p9z3PecIvfX/fnwghcC7nci7nci7n8sdR5B/0DZzLuZzLuZzLufxBybkSPJdzOZdzOZc/tnKuBM/lXM7lXM7lj62cK8FzOZdzOZdz+WMr50rwXM7lXM7lXP7YyrkSPJdzOZdzOZc/tnKuBM/lXM6IEOLXhBD/q+/3tedyLufyh1PEeZ3gufzbLkKI+ZkfC6AGXPfz/y6E8Dd+/+/qdy5CiJ8BfjGEcPUP+FbO5Vz+yIv+g76BczmX362EEPqr74UQt4B/P4TwucevE0LoEIL9/by3czmXc/nDLefh0HP5IytCiJ8RQtwVQvwfhRAPgb8uhFgTQvy3Qog9IcRR9/3VM3/zL4UQ/373/f9aCPF5IcR/0V37lhDiT/8Or70phPgNIcRMCPE5IcR/JYT4xaccx78UQvznQogvCiHmQoh/JITYEEL8DSHEVAjxVSHEjTPX/1UhxJ3uta8LIX7yzGu5EOIXunv8rhDiPxZC3D3z+mUhxN/rns9bQoj/w5nXflgI8bXufXeEEP/3Dzon53Iuf9jkXAmeyx91uQisA9eB/y1xzf/17udrQAn8v97j738E+B6wCfzfgP+vEEL8Dq79m8BXgA3gPwP+lx9wHH+h+5srwHPAl7pxrAPfBf7ymWu/Cnyye+1vAn9XCJF1r/1l4AbwLPAngX9v9UdCCAn8I+Bb3ef8u8B/KIT473eX/FXgr4YQht09/J0POIZzOZc/dHKuBM/lj7p44C+HEOoQQhlCOAgh/L0QwjKEMAP+z8BPv8ffvx1C+GshBAf8AnAJuPBBrhVCXAM+A/ynIYQmhPB54L/5gOP46yGEN0IIE+DXgDdCCJ/rwrt/F/jU6sIQwi9247QhhP8SSIEPdS//eeD/EkI4CiHcBf6fZz7jM8BWCOGvdPf5JvDXiAoYoAWeF0JshhDmIYQvf8AxnMu5/KGTcyV4Ln/UZS+EUK1+EEIUQoj/WgjxthBiCvwGMBZCqHf5+4erb0IIy+7b/ge89jJweOZ3AHc+4Dh2znxfPuHns3nR/6gLdU6EEMfAiOid0t3L2c8++/114LIQ4nj1BfwnnCr9/w3wIvBKF4L9sx9wDOdyLn/o5BwYcy5/1OVx+PNfJHpFPxJCeCiE+CTwDeDdQpzfD3kArAshijOK8Jnfiw/q8n//MTGU+XIIwQshjjgd3wPgKvCdJ9zHHeCtEMILT3rvEMJrwP+8C5v+j4FfFkJshBAWvwdDOZdz+X2Rc0/wXP64yYDoOR0LIdZ5NJf2eyIhhLeBrwH/mRAiEUL8KPA/+D36uAFggT1ACyH+U2B45vW/A/ylDiB0Bfjfn3ntK8CsAxLlQgglhPiYEOIzAEKIf08IsRVC8MBx9zf+92gc53Iuvy9yrgTP5Y+b/D+AHNgHvgz8k9+nz/1fAD8KHAD/OfBLxHrG77f8d8QxvQq8DVQ8GvL8K8Bd4C3gc8Avr+6jy2X+WSKo5i3iM/r/EMOpAH8KeLmry/yrwF8IIZS/B2M4l3P5fZPzYvlzOZc/ABFC/BLwSgjh99wTfZ/7+A+Iyuy9wEHnci5/ZOXcEzyXc/l9ECHEZ4QQzwkhpBDiTwE/B/yDP4D7uCSE+PHuPj5EzJH+yu/3fZzLufxhkXNgzLmcy++PXAT+PrFO8C7wH4QQvvEHcB8J8F8DN4l5vb8N/L//AO7jXM7lD4Wch0PP5VzO5VzO5Y+tnIdDz+VczuVczuWPrZwrwXM5l3M5l3P5YyvvmRMUQpzHSs/lXM7lXM7l32oJIbwrGcb7AmOSVDMYZQQCoiOdONGMYfWPgBAIQhC/Dd1rAYQgEACBEAEhQAhxQl8RujeJ6lZ0fycIwRNrcsXJ78PqnQRIGd9BCIGUdO/Lye8E3YWre1vdawjs78zxLvAIDfITOJGFePRvz14bL3/0X0Eca8yziu7eJFIIQghnntvpOE6IPEJ8VvGOT+/NB/AhEHw4+Wx58py79+iucdYiQuDPXC2QEoIwMNgiuXCZpqlZHhxRL47ojVv6YxDCg3Q4HEIrpNeEqSTMAgJD20jme1MyI8n6BikCwnfz7okd+3zonlGAIMBD8IALeBdwPuBDN2YfCC6Ow7Oaz/gsAqfze7ouBF4I3sRzOzj6/QIp5cnjWs2R6p5JCOAJ+BDw3XqV3TyE1d+Ebs2czGH8NK0kqTakxpAYg1Ry9RLOW5rWUjUtZWtprTtzp2fmkLNr5bFFI0AJQZFmDAcDBoM+87Lke6++yk/+5E9y6dKld6y/34l4PME22HpJsA0iBIKzEAJtXeOcQ8j4XLVJSYs+Os0IQiGkimue1WaSiJPZWY3zZNM/yrFz8uvAcrHkV//xr/HSSy/xoQ99OP6+ezArPvEQAvv7++zt7XHt2jUGg8j6Np8vcM7hg0MpxXA4JATY29uj3+uT5zkASqk4B0IgztzI6rZWvxOPzNTjN/0kOXP1Yxs/hMCv/MqvcPlSnx//se8z4c/JeenjOYrgURfk95LQCP7xP3md+bw5/TQh4jMO/uSxncydj3uMEBCAlKCEBC0IKiC68zj4gPBxJnynEoQQpEpwcX3EhfEQQcDaFiXi2m3bBoJHKYkQksZ6WutIjEZpQ1k1NE2LIN5fkiYkWUoIAdvtTQApQATHmw+mHC+ad4z3rLwnMEYIETYu9HnhY5dOFzECH3w8WHx3ZAdwK70nBTZ4gvdIAkiJ9Q4hJVoHpAwoJc+cG/FwlEEghcbb+OCcb3CuRQpNCArvAx6PFx6pBEZppBRoDSaR8X3FSkHKTvlIQOKcP7l/5wJf/NwbVMsWqU8V6dnNufqdlDIqX3/69whxclisPmf1veiUnXNxIrTWGGPQWuO9x3t/8hkhhHd87uqAV3iMjK9XNtBYj7WxDZ5SCqXUyWet/t5ay2K+QAXHzn/4PIkJNGYN8fxPI29+irvf+i12vvYd2ukdXvhJz/ZHAipxiLSkTVooUkyTEl5Pca+DtmN278Htf/k6l9YNF2+O0Nqj20CwHtEKRO0JdSBYS3Ce0EpoJb4NUHmaylHWLU0b8B5oA7ZqCD5SmlgRcHhc8ATA4bFAVGEBLwWt0vy10PK37IIXXrhGmmRxLfiAkBIlINcJiVS4EKi8pfaexjkkkkRqpJRRZ7uoqKVUpEoiZTS2hAj084wLgyHb4zFroyFJnoCQ4B1VueRoOuPB4YT7x3OmiyXWO4I8NdBW8xHn0p8ag2cOlZ4x3Lx8m
Y9+6EM898JzvHb7Fv/X/+K/5Fd/9Vf52Z/92ffcqE+S4D0rMxE6Q8gtObj3KtN7r1EdPaRdzAh1iQyBarlES0lW5Dit0cWYwdYzrF16BtXfJOkNEVIjlEYqDSJBCo/oDrtwYhSu9ggnn3469sDbb7/Nc8+/xF/6S/8Jf/Ev/kfx3rq1v1rj3nu+8pWv8LnPfY6f//mf56WXXkJKya1bt5jNZwgBzjs+8YlPEELgC5//IhcvXuLZZ5/Fe0+e5zgcUiqEUIh42hAIyM74kYAQAR9O992T9vrJ8wyPK0D/yM/OOTY2Nvizf/pZ/v7f+Z9+4Pl6VwmiM54syBYvRTRKvED4OBbgiYb690te+Oh/xetvHJ38nKYpaZpibdMpupXxGXDOYluH8KAUpBpSkyAKjcsdMg1IEfBVQFlQKGwAh0BpxaVhwU9/9EU+++JzaGWx9ZJMB4JsmS6OaauSLFEYk7JoYb4s6eUZRW/IZFays3uAa+Ia6I+HbFzcIs0yFvMlB8cz2tZhhEfYiv/fP
},
"metadata": {
"needs_background": "light"
}
}
],
"source": [
"# We can use an image folder dataset the way we have it setup.\n# Create the dataset\ndataset = dset.ImageFolder(root=dataroot,\n transform=transforms.Compose([\n transforms.Resize(image_size),\n transforms.CenterCrop(image_size),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ]))\n# Create the dataloader\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n shuffle=True, num_workers=workers)\n\n# Decide which device we want to run on\ndevice = torch.device(\"cuda:0\" if (torch.cuda.is_available() and ngpu > 0) else \"cpu\")\n\n# Plot some training images\nreal_batch = next(iter(dataloader))\nplt.figure(figsize=(8,8))\nplt.axis(\"off\")\nplt.title(\"Training Images\")\nplt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))"
]
},
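{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you only want a quick smoke test of the pipeline (for example on CPU), one option is to train on a small slice of CelebA. The next cell is an optional sketch, not part of the original tutorial; it assumes the ``dataset``, ``batch_size``, and ``workers`` variables defined above, and the subset size of 2048 is an arbitrary choice.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Optional quick-test sketch: rebuild the dataloader from a small subset.\n# Flip use_subset to True to enable it; leaving it False keeps the full dataset.\nuse_subset = False\nif use_subset:\n    subset = torch.utils.data.Subset(dataset, range(2048))  # 2048 is arbitrary\n    dataloader = torch.utils.data.DataLoader(subset, batch_size=batch_size,\n                                             shuffle=True, num_workers=workers)\n    print(\"Training on %d of %d images\" % (len(subset), len(dataset)))"
]
},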
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Implementation\n--------------\n\nWith our input parameters set and the dataset prepared, we can now get\ninto the implementation. We will start with the weight initialization\nstrategy, then talk about the generator, discriminator, loss functions,\nand training loop in detail.\n\nWeight Initialization\n~~~~~~~~~~~~~~~~~~~~~\n\nFrom the DCGAN paper, the authors specify that all model weights shall\nbe randomly initialized from a Normal distribution with mean=0,\nstdev=0.02. The ``weights_init`` function takes an initialized model as\ninput and reinitializes all convolutional, convolutional-transpose, and\nbatch normalization layers to meet this criteria. This function is\napplied to the models immediately after initialization.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# custom weights initialization called on netG and netD\ndef weights_init(m):\n classname = m.__class__.__name__\n if classname.find('Conv') != -1:\n nn.init.normal_(m.weight.data, 0.0, 0.02)\n elif classname.find('BatchNorm') != -1:\n nn.init.normal_(m.weight.data, 1.0, 0.02)\n nn.init.constant_(m.bias.data, 0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generator\n~~~~~~~~~\n\nThe generator, $G$, is designed to map the latent space vector\n($z$) to data-space. Since our data are images, converting\n$z$ to data-space means ultimately creating a RGB image with the\nsame size as the training images (i.e. 3x64x64). In practice, this is\naccomplished through a series of strided two dimensional convolutional\ntranspose layers, each paired with a 2d batch norm layer and a relu\nactivation. The output of the generator is fed through a tanh function\nto return it to the input data range of $[-1,1]$. It is worth\nnoting the existence of the batch norm functions after the\nconv-transpose layers, as this is a critical contribution of the DCGAN\npaper. These layers help with the flow of gradients during training. An\nimage of the generator from the DCGAN paper is shown below.\n\n.. figure:: /_static/img/dcgan_generator.png\n :alt: dcgan_generator\n\nNotice, the how the inputs we set in the input section (*nz*, *ngf*, and\n*nc*) influence the generator architecture in code. *nz* is the length\nof the z input vector, *ngf* relates to the size of the feature maps\nthat are propagated through the generator, and *nc* is the number of\nchannels in the output image (set to 3 for RGB images). Below is the\ncode for the generator.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Generator Code\n\nclass Generator(nn.Module):\n def __init__(self, ngpu):\n super(Generator, self).__init__()\n self.ngpu = ngpu\n self.main = nn.Sequential(\n # input is Z, going into a convolution\n nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),\n nn.BatchNorm2d(ngf * 8),\n nn.ReLU(True),\n # state size. (ngf*8) x 4 x 4\n nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 4),\n nn.ReLU(True),\n # state size. (ngf*4) x 8 x 8\n nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 2),\n nn.ReLU(True),\n # state size. (ngf*2) x 16 x 16\n nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf),\n nn.ReLU(True),\n # state size. (ngf) x 32 x 32\n nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),\n nn.Tanh()\n # state size. (nc) x 64 x 64\n )\n\n def forward(self, input):\n return self.main(input)"
]
},
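{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick way to check the spatial sizes annotated in the comments above: with the default ``dilation=1`` and ``output_padding=0``, a ``ConvTranspose2d`` layer maps an input of height $H_{in}$ to\n\n\\begin{align}H_{out} = (H_{in} - 1) \\cdot \\text{stride} - 2 \\cdot \\text{padding} + \\text{kernel\\_size}\\end{align}\n\nso each ``kernel_size=4, stride=2, padding=1`` layer doubles the spatial size ($4 \\rightarrow 8 \\rightarrow 16 \\rightarrow 32 \\rightarrow 64$), and the first layer (``stride=1, padding=0``) turns the $1 \\times 1$ latent input into a $4 \\times 4$ map.\n"
]
},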
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can instantiate the generator and apply the ``weights_init``\nfunction. Check out the printed model to see how the generator object is\nstructured.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Generator(\n (main): Sequential(\n (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)\n (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (2): ReLU(inplace=True)\n (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (5): ReLU(inplace=True)\n (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (8): ReLU(inplace=True)\n (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (11): ReLU(inplace=True)\n (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (13): Tanh()\n )\n)\n"
]
}
],
"source": [
"# Create the generator\nnetG = Generator(ngpu).to(device)\n\n# Handle multi-gpu if desired\nif (device.type == 'cuda') and (ngpu > 1):\n netG = nn.DataParallel(netG, list(range(ngpu)))\n\n# Apply the weights_init function to randomly initialize all weights\n# to mean=0, stdev=0.2.\nnetG.apply(weights_init)\n\n# Print the model\nprint(netG)"
]
},
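{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an optional cell, not in the original tutorial), we can push a single random latent vector through the freshly created generator and confirm that it produces a 3x64x64 image.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Optional sanity check: one latent vector in, one 3 x 64 x 64 image out.\nwith torch.no_grad():\n    sample = netG(torch.randn(1, nz, 1, 1, device=device))\nprint(sample.shape)  # expected: torch.Size([1, 3, 64, 64])"
]
},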
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Discriminator\n~~~~~~~~~~~~~\n\nAs mentioned, the discriminator, $D$, is a binary classification\nnetwork that takes an image as input and outputs a scalar probability\nthat the input image is real (as opposed to fake). Here, $D$ takes\na 3x64x64 input image, processes it through a series of Conv2d,\nBatchNorm2d, and LeakyReLU layers, and outputs the final probability\nthrough a Sigmoid activation function. This architecture can be extended\nwith more layers if necessary for the problem, but there is significance\nto the use of the strided convolution, BatchNorm, and LeakyReLUs. The\nDCGAN paper mentions it is a good practice to use strided convolution\nrather than pooling to downsample because it lets the network learn its\nown pooling function. Also batch norm and leaky relu functions promote\nhealthy gradient flow which is critical for the learning process of both\n$G$ and $D$.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Discriminator Code\n\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class Discriminator(nn.Module):\n def __init__(self, ngpu):\n super(Discriminator, self).__init__()\n self.ngpu = ngpu\n self.main = nn.Sequential(\n # input is (nc) x 64 x 64\n nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf) x 32 x 32\n nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ndf * 2),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf*2) x 16 x 16\n nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ndf * 4),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf*4) x 8 x 8\n nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ndf * 8),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf*8) x 4 x 4\n nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),\n nn.Sigmoid()\n )\n\n def forward(self, input):\n return self.main(input)"
]
},
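{
"cell_type": "markdown",
"metadata": {},
"source": [
"The strided ``Conv2d`` layers mirror the generator's arithmetic in the other direction: with the default dilation, $H_{out} = \\lfloor (H_{in} + 2 \\cdot \\text{padding} - \\text{kernel\\_size}) / \\text{stride} \\rfloor + 1$, so each ``kernel_size=4, stride=2, padding=1`` layer halves the spatial size ($64 \\rightarrow 32 \\rightarrow 16 \\rightarrow 8 \\rightarrow 4$) and the final $4 \\times 4$ convolution reduces each feature map to a single value per image.\n"
]
},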
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, as with the generator, we can create the discriminator, apply the\n``weights_init`` function, and print the models structure.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Discriminator(\n (main): Sequential(\n (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): LeakyReLU(negative_slope=0.2, inplace=True)\n (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (4): LeakyReLU(negative_slope=0.2, inplace=True)\n (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (7): LeakyReLU(negative_slope=0.2, inplace=True)\n (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (10): LeakyReLU(negative_slope=0.2, inplace=True)\n (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)\n (12): Sigmoid()\n )\n)\n"
]
}
],
"source": [
"# Create the Discriminator\nnetD = Discriminator(ngpu).to(device)\n\n# Handle multi-gpu if desired\nif (device.type == 'cuda') and (ngpu > 1):\n netD = nn.DataParallel(netD, list(range(ngpu)))\n \n# Apply the weights_init function to randomly initialize all weights\n# to mean=0, stdev=0.2.\nnetD.apply(weights_init)\n\n# Print the model\nprint(netD)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Loss Functions and Optimizers\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWith $D$ and $G$ setup, we can specify how they learn\nthrough the loss functions and optimizers. We will use the Binary Cross\nEntropy loss\n(`BCELoss <https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss>`__)\nfunction which is defined in PyTorch as:\n\n\\begin{align}\\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = - \\left[ y_n \\cdot \\log x_n + (1 - y_n) \\cdot \\log (1 - x_n) \\right]\\end{align}\n\nNotice how this function provides the calculation of both log components\nin the objective function (i.e. $log(D(x))$ and\n$log(1-D(G(z)))$). We can specify what part of the BCE equation to\nuse with the $y$ input. This is accomplished in the training loop\nwhich is coming up soon, but it is important to understand how we can\nchoose which component we wish to calculate just by changing $y$\n(i.e. GT labels).\n\nNext, we define our real label as 1 and the fake label as 0. These\nlabels will be used when calculating the losses of $D$ and\n$G$, and this is also the convention used in the original GAN\npaper. Finally, we set up two separate optimizers, one for $D$ and\none for $G$. As specified in the DCGAN paper, both are Adam\noptimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track\nof the generators learning progression, we will generate a fixed batch\nof latent vectors that are drawn from a Gaussian distribution\n(i.e. fixed_noise) . In the training loop, we will periodically input\nthis fixed_noise into $G$, and over the iterations we will see\nimages form out of the noise.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Initialize BCELoss function\ncriterion = nn.BCELoss()\n\n# Create batch of latent vectors that we will use to visualize\n# the progression of the generator\nfixed_noise = torch.randn(64, nz, 1, 1, device=device)\n\n# Establish convention for real and fake labels during training\nreal_label = 1.\nfake_label = 0.\n\n# Setup Adam optimizers for both G and D\noptimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))\noptimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))"
]
},
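{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the point about selecting a term by choosing $y$ concrete, here is a tiny illustrative cell (not part of the original tutorial): with a target of 1 the loss reduces to $-log(x)$, and with a target of 0 it reduces to $-log(1-x)$.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# BCELoss with target 1 is -log(x); with target 0 it is -log(1 - x).\nx = torch.tensor([0.9])\nprint(criterion(x, torch.ones(1)))   # -log(0.9) ~ 0.1054\nprint(criterion(x, torch.zeros(1)))  # -log(0.1) ~ 2.3026"
]
},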
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training\n~~~~~~~~\n\nFinally, now that we have all of the parts of the GAN framework defined,\nwe can train it. Be mindful that training GANs is somewhat of an art\nform, as incorrect hyperparameter settings lead to mode collapse with\nlittle explanation of what went wrong. Here, we will closely follow\nAlgorithm 1 from Goodfellows paper, while abiding by some of the best\npractices shown in `ganhacks <https://github.com/soumith/ganhacks>`__.\nNamely, we will “construct different mini-batches for real and fake”\nimages, and also adjust Gs objective function to maximize\n$logD(G(z))$. Training is split up into two main parts. Part 1\nupdates the Discriminator and Part 2 updates the Generator.\n\n**Part 1 - Train the Discriminator**\n\nRecall, the goal of training the discriminator is to maximize the\nprobability of correctly classifying a given input as real or fake. In\nterms of Goodfellow, we wish to “update the discriminator by ascending\nits stochastic gradient”. Practically, we want to maximize\n$log(D(x)) + log(1-D(G(z)))$. Due to the separate mini-batch\nsuggestion from ganhacks, we will calculate this in two steps. First, we\nwill construct a batch of real samples from the training set, forward\npass through $D$, calculate the loss ($log(D(x))$), then\ncalculate the gradients in a backward pass. Secondly, we will construct\na batch of fake samples with the current generator, forward pass this\nbatch through $D$, calculate the loss ($log(1-D(G(z)))$),\nand *accumulate* the gradients with a backward pass. Now, with the\ngradients accumulated from both the all-real and all-fake batches, we\ncall a step of the Discriminators optimizer.\n\n**Part 2 - Train the Generator**\n\nAs stated in the original paper, we want to train the Generator by\nminimizing $log(1-D(G(z)))$ in an effort to generate better fakes.\nAs mentioned, this was shown by Goodfellow to not provide sufficient\ngradients, especially early in the learning process. As a fix, we\ninstead wish to maximize $log(D(G(z)))$. In the code we accomplish\nthis by: classifying the Generator output from Part 1 with the\nDiscriminator, computing Gs loss *using real labels as GT*, computing\nGs gradients in a backward pass, and finally updating Gs parameters\nwith an optimizer step. It may seem counter-intuitive to use the real\nlabels as GT labels for the loss function, but this allows us to use the\n$log(x)$ part of the BCELoss (rather than the $log(1-x)$\npart) which is exactly what we want.\n\nFinally, we will do some statistic reporting and at the end of each\nepoch we will push our fixed_noise batch through the generator to\nvisually track the progress of Gs training. The training statistics\nreported are:\n\n- **Loss_D** - discriminator loss calculated as the sum of losses for\n the all real and all fake batches ($log(D(x)) + log(1 - D(G(z)))$).\n- **Loss_G** - generator loss calculated as $log(D(G(z)))$\n- **D(x)** - the average output (across the batch) of the discriminator\n for the all real batch. This should start close to 1 then\n theoretically converge to 0.5 when G gets better. Think about why\n this is.\n- **D(G(z))** - average discriminator outputs for the all fake batch.\n The first number is before D is updated and the second number is\n after D is updated. These numbers should start near 0 and converge to\n 0.5 as G gets better. Think about why this is.\n\n**Note:** This step might take a while, depending on how many epochs you\nrun and if you removed some data from the dataset.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Starting Training Loop...\n",
"[0/1][0/3166]\tLoss_D: 1.7393\tLoss_G: 5.1018\tD(x): 0.5396\tD(G(z)): 0.5840 / 0.0092\n",
"[0/1][50/3166]\tLoss_D: 0.1638\tLoss_G: 25.1662\tD(x): 0.9127\tD(G(z)): 0.0000 / 0.0000\n",
"[0/1][100/3166]\tLoss_D: 1.2016\tLoss_G: 6.1104\tD(x): 0.5242\tD(G(z)): 0.0021 / 0.0148\n",
"[0/1][150/3166]\tLoss_D: 1.0785\tLoss_G: 4.4232\tD(x): 0.4870\tD(G(z)): 0.0036 / 0.0299\n",
"[0/1][200/3166]\tLoss_D: 0.6674\tLoss_G: 7.4544\tD(x): 0.9111\tD(G(z)): 0.3666 / 0.0014\n",
"[0/1][250/3166]\tLoss_D: 0.4954\tLoss_G: 6.3850\tD(x): 0.9296\tD(G(z)): 0.2982 / 0.0039\n"
]
},
{
"output_type": "error",
"ename": "KeyboardInterrupt",
"evalue": "",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m/tmp/ipykernel_146954/3007358521.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 32\u001b[0m \u001b[0;31m## Train with all-fake batch\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 33\u001b[0m \u001b[0;31m# Generate batch of latent vectors\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 34\u001b[0;31m \u001b[0mnoise\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mb_size\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnz\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdevice\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mdevice\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 35\u001b[0m \u001b[0;31m# Generate fake image batch with G\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 36\u001b[0m \u001b[0mfake\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnetG\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnoise\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"# Training Loop\n\n# Lists to keep track of progress\nimg_list = []\nG_losses = []\nD_losses = []\niters = 0\n\nprint(\"Starting Training Loop...\")\n# For each epoch\nfor epoch in range(num_epochs):\n # For each batch in the dataloader\n for i, data in enumerate(dataloader, 0):\n \n ############################\n # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))\n ###########################\n ## Train with all-real batch\n netD.zero_grad()\n # Format batch\n real_cpu = data[0].to(device)\n b_size = real_cpu.size(0)\n label = torch.full((b_size,), real_label, dtype=torch.float, device=device)\n # Forward pass real batch through D\n output = netD(real_cpu).view(-1)\n # Calculate loss on all-real batch\n errD_real = criterion(output, label)\n # Calculate gradients for D in backward pass\n errD_real.backward()\n D_x = output.mean().item()\n\n ## Train with all-fake batch\n # Generate batch of latent vectors\n noise = torch.randn(b_size, nz, 1, 1, device=device)\n # Generate fake image batch with G\n fake = netG(noise)\n label.fill_(fake_label)\n # Classify all fake batch with D\n output = netD(fake.detach()).view(-1)\n # Calculate D's loss on the all-fake batch\n errD_fake = criterion(output, label)\n # Calculate the gradients for this batch, accumulated (summed) with previous gradients\n errD_fake.backward()\n D_G_z1 = output.mean().item()\n # Compute error of D as sum over the fake and the real batches\n errD = errD_real + errD_fake\n # Update D\n optimizerD.step()\n\n ############################\n # (2) Update G network: maximize log(D(G(z)))\n ###########################\n netG.zero_grad()\n label.fill_(real_label) # fake labels are real for generator cost\n # Since we just updated D, perform another forward pass of all-fake batch through D\n output = netD(fake).view(-1)\n # Calculate G's loss based on this output\n errG = criterion(output, label)\n # Calculate gradients for G\n errG.backward()\n D_G_z2 = output.mean().item()\n # Update G\n optimizerG.step()\n \n # Output training stats\n if i % 50 == 0:\n print('[%d/%d][%d/%d]\\tLoss_D: %.4f\\tLoss_G: %.4f\\tD(x): %.4f\\tD(G(z)): %.4f / %.4f'\n % (epoch, num_epochs, i, len(dataloader),\n errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))\n \n # Save Losses for plotting later\n G_losses.append(errG.item())\n D_losses.append(errD.item())\n \n # Check how the generator is doing by saving G's output on fixed_noise\n if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):\n with torch.no_grad():\n fake = netG(fixed_noise).detach().cpu()\n img_list.append(vutils.make_grid(fake, padding=2, normalize=True))\n \n iters += 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Results\n-------\n\nFinally, lets check out how we did. Here, we will look at three\ndifferent results. First, we will see how D and Gs losses changed\nduring training. Second, we will visualize Gs output on the fixed_noise\nbatch for every epoch. And third, we will look at a batch of real data\nnext to a batch of fake data from G.\n\n**Loss versus training iteration**\n\nBelow is a plot of D & Gs losses versus training iterations.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": "<Figure size 720x360 with 1 Axes>",
"image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<svg height=\"331.674375pt\" version=\"1.1\" viewBox=\"0 0 605.803125 331.674375\" width=\"605.803125pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <metadata>\n <rdf:RDF xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n <cc:Work>\n <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\n <dc:date>2021-08-26T21:49:59.768960</dc:date>\n <dc:format>image/svg+xml</dc:format>\n <dc:creator>\n <cc:Agent>\n <dc:title>Matplotlib v3.4.3, https://matplotlib.org/</dc:title>\n </cc:Agent>\n </dc:creator>\n </cc:Work>\n </rdf:RDF>\n </metadata>\n <defs>\n <style type=\"text/css\">*{stroke-linecap:butt;stroke-linejoin:round;}</style>\n </defs>\n <g id=\"figure_1\">\n <g id=\"patch_1\">\n <path d=\"M 0 331.674375 \nL 605.803125 331.674375 \nL 605.803125 0 \nL 0 0 \nz\n\" style=\"fill:none;\"/>\n </g>\n <g id=\"axes_1\">\n <g id=\"patch_2\">\n <path d=\"M 40.603125 294.118125 \nL 598.603125 294.118125 \nL 598.603125 22.318125 \nL 40.603125 22.318125 \nz\n\" style=\"fill:#ffffff;\"/>\n </g>\n <g id=\"matplotlib.axis_1\">\n <g id=\"xtick_1\">\n <g id=\"line2d_1\">\n <defs>\n <path d=\"M 0 0 \nL 0 3.5 \n\" id=\"m644ba2e843\" style=\"stroke:#000000;stroke-width:0.8;\"/>\n </defs>\n <g>\n <use style=\"stroke:#000000;stroke-width:0.8;\" x=\"65.966761\" xlink:href=\"#m644ba2e843\" y=\"294.118125\"/>\n </g>\n </g>\n <g id=\"text_1\">\n <!-- 0 -->\n <g transform=\"translate(62.785511 308.716563)scale(0.1 -0.1)\">\n <defs>\n <path d=\"M 2034 4250 \nQ 1547 4250 1301 3770 \nQ 1056 3291 1056 2328 \nQ 1056 1369 1301 889 \nQ 1547 409 2034 409 \nQ 2525 409 2770 889 \nQ 3016 1369 3016 2328 \nQ 3016 3291 2770 3770 \nQ 2525 4250 2034 4250 \nz\nM 2034 4750 \nQ 2819 4750 3233 4129 \nQ 3647 3509 3647 2328 \nQ 3647 1150 3233 529 \nQ 2819 -91 2034 -91 \nQ 1250 -91 836 529 \nQ 422 1150 422 2328 \nQ 422 3509 836 4129 \nQ 1250 4750 2034 4750 \nz\n\" id=\"DejaVuSans-30\" transform=\"scale(0.015625)\"/>\n </defs>\n <use xlink:href=\"#DejaVuSans-30\"/>\n </g>\n </g>\n </g>\n <g id=\"xtick_2\">\n <g id=\"line2d_2\">\n <g>\n <use style=\"stroke:#000000;stroke-width:0.8;\" x=\"157.863995\" xlink:href=\"#m644ba2e843\" y=\"294.118125\"/>\n </g>\n </g>\n <g id=\"text_2\">\n <!-- 50 -->\n <g transform=\"translate(151.501495 308.716563)scale(0.1 -0.1)\">\n <defs>\n <path d=\"M 691 4666 \nL 3169 4666 \nL 3169 4134 \nL 1269 4134 \nL 1269 2991 \nQ 1406 3038 1543 3061 \nQ 1681 3084 1819 3084 \nQ 2600 3084 3056 2656 \nQ 3513 2228 3513 1497 \nQ 3513 744 3044 326 \nQ 2575 -91 1722 -91 \nQ 1428 -91 1123 -41 \nQ 819 9 494 109 \nL 494 744 \nQ 775 591 1075 516 \nQ 1375 441 1709 441 \nQ 2250 441 2565 725 \nQ 2881 1009 2881 1497 \nQ 2881 1984 2565 2268 \nQ 2250 2553 1709 2553 \nQ 1456 2553 1204 2497 \nQ 953 2441 691 2322 \nL 691 4666 \nz\n\" id=\"DejaVuSans-35\" transform=\"scale(0.015625)\"/>\n </defs>\n <use xlink:href=\"#DejaVuSans-35\"/>\n <use x=\"63.623047\" xlink:href=\"#DejaVuSans-30\"/>\n </g>\n </g>\n </g>\n <g id=\"xtick_3\">\n <g id=\"line2d_3\">\n <g>\n <use style=\"stroke:#000000;stroke-width:0.8;\" x=\"249.761228\" xlink:href=\"#m644ba2e843\" y=\"294.118125\"/>\n </g>\n </g>\n <g id=\"text_3\">\n <!-- 100 -->\n <g transform=\"translate(240.217478 308.716563)scale(0.1 -0.1)\">\n <defs>\n <path d=\"M 794 
531 \nL 1825 531 \nL 1825 4091 \nL 703 3866 \nL 703 4441 \nL 1819 4666 \nL 2450 4666 \nL 2450 531 \nL 3481 531 \nL 3481 0 \nL 794 0 \nL 794 531 \nz\n\" id=\"DejaVuSans-31\" transform=\"scale(0.015625)\"/>\n </defs>\n <use xlink:href=\"#D
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAl4AAAFNCAYAAADRi2EuAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAADCM0lEQVR4nOy9Z5gkV3n+fZ+uqo7Tk2c2513lhCQkJIEEiJwx2CbLGJMMtgn232BM9GubYMBgkY1IJoNABAkEQiijvFplbc47OXbu6vN+OHVOnaqu6jAz3TOz+/yua6/Z6amu2N119/0kxjkHQRAEQRAE0Xoii70DBEEQBEEQJwokvAiCIAiCINoECS+CIAiCIIg2QcKLIAiCIAiiTZDwIgiCIAiCaBMkvAiCIAiCINoECS+COEFhjH2EMfZ/81zHLGNs80Ltk7PO6xhjV8zxuV9mjH1wIfeHCIYxtt65/sZi70sYjLF/YYz970IvSxDzgVEfL2I5wRh7FYB3AzgDQAbAXgDfAvAlvsRezIyxPwL4P875kvwwZ4x9BMBWzvnrAv72dAB/AJB1HpoEcDuAT3HO727PHi4ejLGNEK8ti3NeXqB1Ph3i9bB2IdbX5LY5xLXkAAoAtgP4Kuf8h+3el3owxq4D8DTn1xjEPhed3/+Pc/62RdkxglggyPEilg2MsfcC+ByATwFYCWAFgLcBuARAtM37YrZ4/YwxttjvzyOc8w4AaQBPAfAYgFsYY5e3YmNL5JgXhFa/PubI2c71PBnANwFcyRj78FxW1Mrj45w/n3Pe4ezrdwF8Uv6ui64leo4Joi7HxYcccfzDGOsC8DEAf8s5/wnnfIYL7uecv5ZzXnCWizHG/osxdoAxNuSEnhLO357OGDvEGHsvY2yYMXaUMfZGbRuNPPefGWPHAHyDMdbDGPsVY2yEMTbh/H+ts/y/Q3xrv9IJx1zpPH4xY+xuxtiU8/Nibft/ZIz9O2PsNgh3oiqExxh7H2NsN2NshjH2CGPs5drf/ooxdqtzDBOMsb2Msedrf9/EGLvJee7vAPQ3cu6d83yIc/4hAP8L4BPaOjljbKvz/xc4+zTDGDvMGPtHbbmXMsa2M8amnf1/XtgxO4/9jXZMtzHGPssYm2SM7XHO4V8xxg461/EKbTvfZIz9fw1e7xcyxu539umg4wBKbnZ+TjrX7yLGWIQx9q+Msf3O+r7tvC7BGNvonIs3McYOQLiFDcMYO9U57knG2MOMsZdofws8r4yxfuc1N8kYG2eM3cIaEK6c81HO+XcAvB3A+xljfc769jHGnqVtV4Wig45Pe8x0lvkjY+zfnOs1wxi7njHWr63vDc65G2OMfdC/vQbPE2eMvYMxthPATuexzznXb5oxdi9j7Gna8kHHcAUT7/FRxtgH5rhsgjH2LSbeZ48yxv4fY+xQM8dCnLiQ8CKWCxdBhB2uqbPcxwGcBOAcAFsBrAHwIe3vKwF0OY+/CcAXGGM9TTy3F8AGAG+BeP98w/l9PYAcgCsBgHP+AQC3AHin8039nYyxXgC/BvB5AH0APgPg1/LG5/B6Z91pAPsDjm83hKDrAvBRAP/HGFul/f1CAI9DiKpPAvg6Y4w5f/segHudv/0bgLnkUV0N4FzGWCrgb18H8FbOeRoiFPwHAGCMXQDg2wD+CUA3gEsB7NOeV++YLwSwA+KcfQ/ADwA8GeIavQ5C3HaE7G+t650B8AZnn14I4O2MsZc5f7vU+dntXL87APyV8+8ZEKK4A8711rgMwKkAnhuyP1UwxiwAvwRwPYBBAH8H4LuMsZOdRQLPK4D3AjgEYADC/f0XiLBco1wDwARwQRPPqXd8rwHwRojjiAKQIvE0AF8E8FoAq+Bek7nwMojXxGnO73dDvGd7IV4fP2aMxWs8/6kQrt/lAD7EGDt1Dst+GMBGiNfBsyFehwTRECS8iOVCP4BRPd+GMXa7820/xxi71BEYbwHwbs75OOd8BsB/AHiVtp4SgI9xzkuc82sBzAI4ucHnVgB8mHNe4JznOOdjnPOfcs6zzvL/DnFjCuOFAHZyzr/DOS9zzr8PEb57sbbMNznnDzt/L/lXwDn/Mef8COe84uTn7IT3xrmfc/41zrkNkfu2CsAKxth6CLHyQWf/b4a42TfLEQAMQqz4KQE4jTHWyTmf4Jzf5zz+JgBXcc5/5+z3Yc75Y40eM4C9nPNvOMf0QwDrIK5hgXN+PUT+z9aQ/Q283gDAOf8j5/xBZ592APg+al+/1wL4DOd8D+d8FsD7AbyKeUNeH+GcZzjnuRrr8fMUCBH3cc55kXP+BwC/AvBq7RiCzmsJ4vpucI7vlmbyHJ1zPQohWBql3vF9g3P+hPP3H0EIIgB4JYBfcs5v5ZwXIb7QzDUn8z+d92gOADjn/+e8F8uc809DfEE7ucbzP+q8fx8A8ACAs+ew7F8A+A/nehyC+DJFEA1BwotYLowB6Ndvcpzziznn3c7fIhDf/JMA7nUE2SSA3ziPq/X4kqWzEDe9Rp47wjnPy18YY0nG2Fec8Mk0RHiqm4VXea1GtaOzH95v/gdrnAMZrtmu7eMZ8IYMj8n/cM5lYnyHs+0JznnGt+1mWQNxw5wM+NsrALwAwH4mQpoXOY+vg3Dqwqh5zACGtP/Lm63/sTDHK+x6gzF2IWPsRiZCxVMQ+YK1wq/+67cfwjFaoT1W71jC1nuQc17xrVu+LsLO66cA7AJwPRMh2Pc1s1HHaRsAMN7E0+od3zHt/+pcwzlG+QfntTnWxHZD94Ex9o9OuG/KeU90ofZ1DNvHZpb1HI9/nwiiFiS8iOXCHRDVWC+tscwoxE34dM55t/Ovi4sk3Xo08lz/N/T3QnyzvpBz3gk3PMVClj8CEZbUWQ/gcI1tKBhjGwB8DcA7AfQ5ovMhbXu1OAqgxxciXN/A8/y8HMB9PgEHAOCc3805fylEmOnnEI4HIG5KW2qsc7GqUb8H4BcA1nHOuwB8GeHXDqi+fusBlOEVhnM5liMA1vnys9TrIuy8cpHn+F7O+WYALwHwHtZc4cNLnf2/y/k9A/HlQ7Iy4DlzvVZHAahqTiZyJ/vCF6+J2gcnn+v/QThQPc57YgqNvSfmg+d4IL5cEERDkPAilgWc80mInKYvMsZeyRhLM5HsfA6AlLNMBUKYfJYxNggAjLE1jLG6+TZzfG4aQqxNOvlb/gqxIXgT5K8FcBJj7DWMMZMx9pcQeSq/qrd/DimIm86Is39vhHC86sI53w/gHgAfZYxFGWNPhTfEGQoTrGGiAu5vIHKJ/MtEGWOvZYx1OSGsaYjQLCBylN7IGLvcuWZrGGOnNLLtFpMGMM45zzt5aK/R/jYCsf/69fs+gHczUaTQARGK/iFvst0EYyyu/4MQPlkA/48xZjHRduLFAH5Q67wyxl7EGNvqhMmnANhwz3mt7fcyxl4L4AsAPsE5l87TdojQqcUYOx8iPLhQ/ATAi5kojIgC+AgWRhylIcTjCACTMfYhAJ0LsN56/AiiMKGHMbYG4ssQQTQECS9i2cA5/ySA90B8wx1y/n0FwD9D9
JiC8/9dAP7khP9+j9r5HjrNPve/ASQg3LI/QYQmdT4H4JVMVD593rnBvQjCKRtzjuNFnPPRRnaOc/4IgE9DuH9DAM4EcFtjhwZACIsLIUJLH4ZIeK/FasbYLERe1N3O9p7u5FUF8XoA+5xz9zaInChwzu+CSLj+LIRAuAnVzt9i8LcAPsYYm4HIOZIOnQyF/TuA25yw7lMAXAXgOxAh5b0A8hCJ8M2wBkKs6//WQQit50O8lr4I4A1aHlzgeQWwDeI1Ogvxmvgi5/zGGtt+wLmeuyAE9Lu5qFSVfBDCmZyA+JLzvSaPLRTO+cMQ5+oHEG7RLIBhCBd7PvwW4n33BER4No/2hP0+BlHYsBfiGvwE8z8W4gSBGqgSBEEQbcVxDCcBbOOc713k3Zk3jLG3A3gV57xWcQZBACDHiyAIgmgDjLEXOwUpKQD/BeBBeNuKLBsYY6sYY5c4ofOTIVzsny32fhHLAxJeBEEQRDt4KUQhwRGIMOmrmml/scSIQqQ5zED0VbsGIkRMEHWhU
},
"metadata": {
"needs_background": "light"
}
}
],
"source": [
"plt.figure(figsize=(10,5))\nplt.title(\"Generator and Discriminator Loss During Training\")\nplt.plot(G_losses,label=\"G\")\nplt.plot(D_losses,label=\"D\")\nplt.xlabel(\"iterations\")\nplt.ylabel(\"Loss\")\nplt.legend()\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Visualization of Gs progression**\n\nRemember how we saved the generators output on the fixed_noise batch\nafter every epoch of training. Now, we can visualize the training\nprogression of G with an animation. Press the play button to start the\nanimation.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"execution_count": 16
},
{
"output_type": "display_data",
"data": {
"text/plain": "<Figure size 576x576 with 1 Axes>",
"image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<svg height=\"449.28pt\" version=\"1.1\" viewBox=\"0 0 449.28 449.28\" width=\"449.28pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <metadata>\n <rdf:RDF xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n <cc:Work>\n <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\n <dc:date>2021-08-26T21:50:26.076113</dc:date>\n <dc:format>image/svg+xml</dc:format>\n <dc:creator>\n <cc:Agent>\n <dc:title>Matplotlib v3.4.3, https://matplotlib.org/</dc:title>\n </cc:Agent>\n </dc:creator>\n </cc:Work>\n </rdf:RDF>\n </metadata>\n <defs>\n <style type=\"text/css\">*{stroke-linecap:butt;stroke-linejoin:round;}</style>\n </defs>\n <g id=\"figure_1\">\n <g id=\"patch_1\">\n <path d=\"M 0 449.28 \nL 449.28 449.28 \nL 449.28 0 \nL 0 0 \nz\n\" style=\"fill:none;\"/>\n </g>\n <g id=\"axes_1\">\n <g clip-path=\"url(#pa263c3ed1f)\">\n <image height=\"435\" id=\"imagec6cafb8cfc\" transform=\"scale(1 -1)translate(0 -435)\" width=\"435\" x=\"7.2\" xlink:href=\"data:image/png;base64,\niVBORw0KGgoAAAANSUhEUgAAAbMAAAGzCAYAAACl7fmHAAEAAElEQVR4nOz8R7OteXbeif1e77d3x5t7zvU3fWZVZWUBBRRAgCSIJthsKkLRgw6N9BU01VzSoBU96VBoIDUpNUWIbJoGwIKpAsqlz7z+eLu936+3GtxiE/0RFIE13pMV+/8+a63nedYSgIK/i7+Lv4u/i7+Lv4v/Pw4ZoFxy2NraAimjSCQKQUSSc5JMQBFTikIgB/JCQpFEchKiTMIWU9JCJM4ETL0giyAVQRVyxLwgE2RSQUAVCvI8J0okTF0gzRNSRJQCRAQSoUAUQBYhyCWSBAxDgDghikUMCUQ1J8pyJFFAygUiWaDIBGQphzgjEkWsoiAXIMlFTo9PEQWBu3fuIMiQ5TkCAmIhkQjFm8SBQszJUtAkiVQTCIMCS8vIYkhTCUUuKESBTEwREhFJysnyjLRQ0UQJQcgJoxxDFyAtKGSBXBCQC4FAyBByCZWcVM5JQxldBbIcL8/RdIEiziEXEQTQRZlQyihiULSUNBFJUhFNBrHIyAS4vLzFcz0e3b+PIIogFAh5hiCKZEpBkUiIUo6QFviZgKmKUBQECei6BEJCnuUIBSiCiC8ViJGCJqcUWU4Yy6iajChDnCaIZCAKZIWIUICMQCFnZKGAKRb8+hfksoiSy6RFiiCBKKSQQVyAikAh5qSCgpblZHJBlgCqjJjmICQMenPG4xH3Dg7RDZVczCARkciJZIkiy1GljFwWCeMcAxmxEEmEHFnMEAuJWBQgz5CBXIQ0k1CKnELOiVMJRcgpJJEiAyEHjZwMgSyXEBQQpYQwVjCkiFyQiTMQxQI5k8mFmKJQkaSCvMjIMglVzMgF4c1bFlMKRPIMBKFABDIhYzkPubm+Zmd7i4rtkCspeSoiCxKZVJDHErISkRcSSS4gizlFoZCJGWKWoYgieZaTiiK6DHlREMcSuiiQizF+JqJKEnJRkAkpaSGjyBlikpPkBoqUkhcFKQJKliFIBXEugyhjKCJpEhJlIqaSUSAQ5TKGlFBkEomYI2UFyBKpkFNkApJQIAgpgQ8nJye0m03WWnUSCfJCQM4FMhHSVEZXfUglUgEKQUYoZAo1IU1yrDwjLxTiXETQCoQiIU8E1FygMEWyJCMVRFRBgSLETxQcLSNPJdJcRJNicgRiUUIqYsRUIBdF8lxAVWSyIiNJQDVSpFglRURWMqI8IkdCzAQERYA4IxdlZLGAIifLRF6+eEnJcdjf2iZTMrIC1FQkRaDIReQiQVAlYjEjTyRUhDeYkAuYZk6WFcSZjCQICGLy5s0VGUomkAgyaVagKiJSmhLFEqLxBjugQCwEJFJiTUBMBXQpJxYlklxEFTOyLCNKJGxdJY8islxGUguELCOVeYOJQkFOQVKALonkeUaai5wcnyEUBfcODymkgjQXUfMMBJlUyBBEEIuCDJEsl5AlgSJNCDKZip6SZwKxICIWCYKkkBcFRVRgGDGRkBNEJrosoAgxYSyhGxlCBKmoUIgpJJBIGSCjyjkU4MdgqBJkOVEhoGkFRSSBICKKKUpekIgFRaagijmpWBBnBZqSkiUyuZBzc9FntVq+wfTtnXX+6J/9IbIQgmgSuyqmHpLqCsQzJKmOILosCx1BkSD08IKUHTFl5Sv4ho2qFyBI+PmSuhAQFSayKRD4NqaeE0gpq1lCNc2wMhXfMYjEGFtPmU+n6I6OGpgILYHZysbJVCgmrGZgJzJKS8A3Q5SZhKAKpHmA7AhkQcHmIuKqsYOdjklznTiT+G//L/9XFEnij/7ZP6YAAimnkqWglFjKMSUhZx7nVOWIYCWhOyVUQ+RmodGRu8ShgJeBnCnkkopsLpDRKSIF1ZqSFGvEHtS0jKsoZENLiCUFTwgQ9RrVzOFWOyP1LZppjKHAZbxJ01yS+QUTXGpijBXVcFkQyy0sJWesDNBCi2oUgyAylet00ghXTkjzmH/5L/4958cn/JP/6g/JhTfFVMsyLLFCYPcIYw05NRBKSxZemZqcs5QjoolPWTQwlRSfCM9qURUUgmRCQoGS5NTygp7gYCsmWDHuMqQSRQRqlSKbIhkVtMwg8sfElYzK0iPHQihEFmYLA5uYLnkSoOgqpanHasNBiRRcz0NQTUrTOTgmcabiGQaSpyCLXf78p18x/quf8Pv/6PdplGvkkk8eydiShm8MCDMLM8yRygl9waHi5qQSpFFKXQtJEgMvVMnqBboB3Lq
keolKGuDqAr4sUpFVBE0l8n3EJMMWDGbTPnJ9gxyZQp7iFypmNiKQKkixilpIIMvk4giiCoaikyVjQquJupwjSQKJZVBxeyzkMqqikKUFIiqiNuWLbybc/D/+OT/47e9yuPuYpBiSCDaO7BBYHklhoU+uEVUH31GQFxmiWSYMpwhpgmEWZLOEvFrBJMH1E+S8Rl2ecmWaJElEVdMwMh2mAy5Vh46UUsg9cvExWaxQySasLBFtPiKs7yJmC6S4gqSWccUBhj+m7FqERsykXafqeiRZTqT4NEYBs2oDQQgQExvRMFHjFVdTn5P/8wlvPXrA9370HdJqiDUXEXOTRLGZSypV4RjFL1MYU3yvzqpQsSohbuLTkgWmizlF6x55nuBEc8CnFCeslDUcLWHqaQiWSZYOSecK9WrKYg5Z2UHzUiw9YRb5aGpIMmugdASSBCylRuEFzCIfqSJgLAtSS6fpjZhsdRD7fYo0R
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcEAAAHBCAYAAAARuwDoAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAAEAAElEQVR4nOz916/lWZbfiX1+3hzv7fX3hjeZEZGmfFZVd7UhG80ZUuAApCABehvwQX+E3gYQIM0A8yBAgoQB3XBINodks011ZVWnrTSR4SNuxPX3Hu/P+Xmnh0jOm5gE5mEgdHxezw8HZ5299v7+1l57rS0kScIb3vCGN7zhDX8TEf+3/gFveMMb3vCGN/xvxRsRfMMb3vCGN/yN5Y0IvuENb3jDG/7G8kYE3/CGN7zhDX9jeSOCb3jDG97whr+xvBHBN7zhDW94w99Y5P/Uh4IgvKmfeMMb3vCGN/z/NUmSCP+/PvtPiiBANpel2WohCRKxHyFqgBDhxQKCKGJEPgEysSQiRxBICVIAoiAgihBHIXEiIKoQxjJEAokeYgQSniAhihCFAVIgIAgJogJCHJLEIoEUIQoqsR8TpWMkTyJOAmRBIY4hihIUWUFQPIRIJBFE5DAhUCAMBBQRYmLCREQVIYkTYlHkYP8lgiiws7uDGMeEgCyKRIkAgghxgigHCLFImAhIgkgkJAhxjJTIRFKIEoQksQJSTCyqxJGPpIMciDhhgqBJJHGIEIOMgiBEIEISicixjy/KhHGEIoMkilhegKlqxLFPiIIC+EmCJklEYUJCBLJK7MRIRgQIRD6oqkgcRMSyyPnpKdbK4vL1axDFSIiIYkxMhCjIhGGCpAXIgcIyiRAEE5GAJAYtFonkAAAxTpCEmFhXCL0EmQQJWIoCmqCQhAGJmCAEAoniI0QKghCTSAkSKn6SoAgCQhwRhAmSroHvvXZGQUAhIhZlgvj19wqiQBSFiJKIIMTEsYQoCohhQiiLjPoDRsMRu5cvISoiciQQJ4ACkiARBiGSnCAAQSigiAoRIbGQoEQSJAGJJpIECUKSIChgBQoaIYKQECUCkqQgxwFhKJJICSI+kijjRyJikiCKIqGQoCIQywlxAIIgIhCTIEDy2m8EXSB0fFRRASEhSkIQFFAjBE8kkWTkOMQTBezFkvOzM9Y31jFTBmIsEgGSlIAoEkUCEjGJkBAmCZKgQRwCr+eKEIcIikTixyAnCIjE3455JEbEoYQkSQh+RCRJkMSIiCAk+FGCLkGUQAzIiUSsuAihjJBIIMUkJOCCIEsgJ8RhhCwqxIQEgogaQ6j4SL5KKAnIcUQoSkSey8GrAyq1KuVyESlW8AlRBZFISl77vSAT45EkCqIUk4QxCApCBKIQIUgQhyKikBCKQASiJIFkEwcKiSghyAFJJCE6EookEKoRYeIjhSqikuAnCkokkCgeYiITRDGqCJEo4noiJiKiHOALMXIsE8cRgiwRRwmJGqH6Cp6QIAsJkShCHPPi6VMy2QzttRZiIhIQoiLy+t8SXvuJFKB4MrEoAgKhmCD4EYoogAKhB4KUIIUxoSJAAokSofsSNgKSKICYEPoxMhooIYigBDGIMaGkkoQhiRajegp+EiFLGhEBcZygxhKoCUkUESUSJBGypBBEMZIcIoYSLgKanBDFr7Xh4NUrAHb3dhHjhBBQiIliSGQBAYGEABEFP4lRUEGKECIRwY+JNYEkCZAiDUF2iUWRxBeRpAhRSlgGErqiIsQeYSKj+yKBHJAIEUIiokgJiAJ+8tp2ORTwElAVnSTxiGIRNRAJ5BhRjMCHSE6QVQXPDVEkSOKIWBCRRYgjAQSZi7MTlsvlf1LjvlME19tt/ujv/zFFIjpOie2yy3jpYSUhQlahMLGxxCxSLo0vucSLMdIig1pz0dyYiZyiKrosUEhCEV9OY5XPudbLcZ5IhKpOxh4wddKUswm2ZCNEIUJWZmr51KyYhSYi6SZpK2JoQoUSsdWlS0hTrTLRluTnEV5OQ59E+KrJIjQwBItC6DHyTMp1nXFkowUC/93/9b9FlmX++H/395DEBas4z0YmZDiPEdDQQ5OlMiAnKoyEPGrRw/AdRoc2WzmNeaggyDG6XuPC77HtugxVDWFdwxi5HE+KrNUyCPEUK4qRTAk1HiGoeXRL58JLWFMG9NQ0VVSIxhyqMZfiTebSlHgwIqeV6PkBLVHBT5m4aR99PqeDyqYY4KAwHUqkaxGepyHH8D/+k3/MwatD/v7f/3s4gQO6gukkhMiYscMyZ+ClQmpLjeOZTaO0zmpxwCpKKLNOYh6gqTqqoRF4Ppofcega5M2Yoh/wKFXkB4rM/myBKBqktRkzcUlhZCKW03irBTlJoE8ROY7QQotxnGfTzHLoHJAVXSSvQLYgkeqK9NspTC9gJfg4qoTqKej0WVgipVqO7kQhKy75zV99wm9+/Rv+zt/6A6J0GkNVCbQY0fUp4TCWVRI9hkAlGCmUM036yTOSJCITqwTZkFwiECcpVrJJczXmKDGomjGuZ7MIcrSzFYbuKepEhRysxBVrqsRBaJL1JLTQYpLzKSYNxGDAIgSpkWM4cbm6XDHKFHASuGTN6EVrhI0FqZGPpSYoboyd9UlPMojZPEEAobvi6OgF//yf/jM++P6PqO9tkNYT5lGamjBmFevYho6SJARLGzkUMMotpu6IvBcgmjqu7ZFRBeaRgKL6iJZMggeU8YQhSiIiV2oMF3MK0QhXrUDgk0kUXEelVEiYhT6TRKUgSATOCDHJI8hpLGnCeuAxFA3itEBKcbCHKqWKymxh4wUymUQhVFeoEURihkSBwA6xllP+2//7f8fN27e49u49NpOYTsqgnLhYQsI8ligGEithToRAMa3Rn4Rs2wJ2pUSor8hFEh0vZlsQ6NgzRC1NRq6xSB9jeiqharAITDL06Z9XaOgiQnbJyJW4bAqMPB9PjTCoMI3PqfopelUF3UlQx0t6YZZMykaXfbRAw6qKOF3QEwlLDxFI2LJdeigkFYN4BhIS/83z/wvtVos/+i//SxR84tAkl4/x45iAGFNQWQoBphfjuCaC5BCYAbOZTKFgUU8UZkmWmi7RnUQo2Qm4BaZqzE5nyTBTxKkWUSYH2IuETLOCb89YpmV2HI3lvIdbVEBRX68rHZmJFlPI1fAHQyxJoSymcfUBhm/imCq+65G1wJFDYgWKC4duvkBejpi5MRox//3/7b9HROC/+q/+DkpssQgUyjmN2SggyYcEeoYgsGgtRUZRDs0wkIIudhiiSGVCdYwpBKRTBQ6wKU9E3HQaJViSH4u8zKdoZQ2SqMd4oVMSYaxOyUcmOhodVmwFBr18BiH2KPhLOp7CmrHJhfcEDZeKsMsgNyPvLljkm3gLi4Yd0dV9am4FX5oxWalsF2T2Q5NSYPHP/vm/4PnTZ/9JjfvOnKAgJKRCFcv18dsiyURgJejkFI94EhO5eWQlICVd4AsD1FhANCGeyXSEPLOmQDy3WAp5lkKCJIa0XyjMbAuj6BCoDrYpIeZXGKfgWlOE0oLVRYBpFZAND
8OZok9gEXWJWwosDVaxSaaSZuToJE6Ml42R5hJ1MY0pgtlwyCcCXhJTLoo41pJloiLJEYLw2i5ZEFB8jUwqIDyxSQkSEh5Weo5g5ZgkPvnUEKPXwZ8NqFRXWILN3LGZX1/hTB5QUdKgTgjMIspUIAlD6gWXcueC4MIhF75g5kbMOkV8Oca2ewSxgD60yKguziRLd6xQqpWZ8hLLDUBO4TVSCKHIIN1HC7usOhaeEpBrzcFa4DoLats2ph1jkEPQQqRvR1OQbCRRJjIEdEVAS0oIpZi00QcrwR8tkNdlovkBShKQTi+JnDkDoUg/bbNIJkyFFKtQI4VNrgf+YEK1ETEZnCEkCmockTsOyI1TzIsrBG9M7CfYmQV5ZQaywEQKkLcEBPuAJiGKlxCtVBarHNOCRVJasArnxDMNVUvIrBYskwxS0WPW88BPg+gjSa/fViUjIbdKiLUQ3Q5QBZOekCLKg2OnMUKZdEmmaR+TWWnIYZauaCC7eRapKYoRYgwtVlaCIciEIxPPsygVV
},
"metadata": {
"needs_background": "light"
}
}
],
"source": [
"#%%capture\nfig = plt.figure(figsize=(8,8))\nplt.axis(\"off\")\nims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]\nani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)\n\nHTML(ani.to_jshtml())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Real Images vs. Fake Images**\n\nFinally, lets take a look at some real images and fake images side by\nside.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Grab a batch of real images from the dataloader\nreal_batch = next(iter(dataloader))\n\n# Plot the real images\nplt.figure(figsize=(15,15))\nplt.subplot(1,2,1)\nplt.axis(\"off\")\nplt.title(\"Real Images\")\nplt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))\n\n# Plot the fake images from the last epoch\nplt.subplot(1,2,2)\nplt.axis(\"off\")\nplt.title(\"Fake Images\")\nplt.imshow(np.transpose(img_list[-1],(1,2,0)))\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Where to Go Next\n----------------\n\nWe have reached the end of our journey, but there are several places you\ncould go from here. You could:\n\n- Train for longer to see how good the results get\n- Modify this model to take a different dataset and possibly change the\n size of the images and the model architecture\n- Check out some other cool GAN projects\n `here <https://github.com/nashory/gans-awesome-applications>`__\n- Create GANs that generate\n `music <https://deepmind.com/blog/wavenet-generative-model-raw-audio/>`__\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "PyTorch",
"language": "python",
"name": "pytorch"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6-final"
}
},
"nbformat": 4,
"nbformat_minor": 0
}