Variational Autoencoder in PyTorch, commented and annotated.

I have recently become fascinated with (Variational) Autoencoders and with PyTorch.

Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar’s blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma and Max Welling’s original paper Auto-Encoding Variational Bayes (ICLR 2014), are more than worth your time.

In my case, I wanted to understand VAEs from the perspective of a PyTorch implementation. I started with the VAE example on the PyTorch github, adding explanatory comments and Python type annotations as I was working my way through it.

This post summarises my understanding, and contains my commented and annotated version of the PyTorch VAE example. I hope it helps!

What is PyTorch?

PyTorch is FAIR’s (that’s Facebook AI Research) Python dynamic deep learning / neural network library. The way that FAIR has managed to make neural network experimentation so dynamic and so natural is nothing short of miraculous. Read this post by fast.ai to find out more about their reasons for excitement, many of which I share.
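To give a tiny taste of that dynamism: the computation graph is defined simply by running ordinary Python, so even data-dependent control flow participates in backprop. An illustrative snippet (using the same 0.3-era Variable API as the code below):

import torch
from torch.autograd import Variable

x = Variable(torch.randn(3), requires_grad=True)
# ordinary Python branching; a fresh graph is built on every run
y = x * 2 if x.sum().data[0] > 0 else x * -2
y.sum().backward()  # gradients flow through whichever branch actually ran
print(x.grad)       # +/- 2 for every element, depending on the branch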

What is an autoencoder?

The general idea of the autoencoder (AE) is to squeeze information through a narrow bottleneck between the mirrored encoder (input) and decoder (output) parts of a neural network (see the diagram below).

Because the network architecture and loss function are set up so that the output tries to reproduce the input, the network has to learn how to encode the input data in the very limited space represented by the bottleneck.
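To make that concrete, here is a minimal sketch of a plain (non-variational) AE for 28x28 = 784-pixel inputs; the layer sizes mirror the VAE example further down, but the class itself is illustrative and not part of that code:

import torch.nn as nn

class PlainAE(nn.Module):
    """Illustrative vanilla autoencoder: 784 -> 400 -> 20 -> 400 -> 784."""
    def __init__(self):
        super(PlainAE, self).__init__()
        # encoder squeezes 784 pixels down to a 20-unit bottleneck
        self.encoder = nn.Sequential(
            nn.Linear(784, 400), nn.ReLU(),
            nn.Linear(400, 20))
        # decoder mirrors the encoder, expanding back to 784 pixels
        self.decoder = nn.Sequential(
            nn.Linear(20, 400), nn.ReLU(),
            nn.Linear(400, 784), nn.Sigmoid())

    def forward(self, x):
        # reconstruction; train with e.g. binary cross-entropy against x
        return self.decoder(self.encoder(x))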

What is a variational autoencoder?

Variational Autoencoders, or VAEs, are an extension of AEs that additionally constrain the encoded samples to be normally distributed over the space represented by the bottleneck.

They do this by having the encoder output two n-dimensional vectors (where n is the number of dimensions in the latent space) representing the mean and the standard deviation. A sample is drawn from each of these Gaussians and sent through the decoder. This is the reparameterization step; also see my comments in the reparameterize() function below.
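In code, the trick boils down to three lines; this is a sketch of what the reparameterize() function below does during training, with mu and logvar being the two encoder outputs:

std = logvar.mul(0.5).exp()     # log variance -> standard deviation
eps = torch.randn(std.size())   # unit Gaussian noise of the same shape
z = eps.mul(std).add(mu)        # shift and scale: z ~ N(mu, std^2)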

What a fabulous trick!

The loss function has a term for input-output similarity and, importantly, a second term that uses the Kullback–Leibler divergence to measure how close the learned Gaussians are to unit Gaussians.
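For a diagonal Gaussian with mean mu and log variance logvar, this KL divergence from the unit Gaussian has a simple closed form, the very line used in loss_function() below:

KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())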

In other words, this extension to AEs enables us to derive Gaussian-distributed latent spaces from arbitrary data. Given, for example, a large set of shapes, the latent space would be a high-dimensional space where each shape is represented by a single point, with the points normally distributed over all dimensions. With this one can represent existing shapes, but one can also synthesise completely new and plausible shapes by sampling points in latent space.
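Synthesis then amounts to a single pass through the decoder; a minimal sketch, using the model and ZDIMS names defined in the full code further down:

z = Variable(torch.randn(64, ZDIMS))  # 64 random points in latent space
new_digits = model.decode(z)          # decoded into 64 brand-new 'digits'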

Results using MNIST

Below you see 64 random samples from a two-dimensional latent space of MNIST digits, generated with the example code below and ZDIMS=2.

[Figure: pytorch-vae-sample-z2-epoch10.png]

Next is the reconstruction of 8 random, previously unseen test digits via a more reasonable 20-dimensional latent space. Keep in mind that the VAE has learned a 20-dimensional normal distribution for any input digit; samples drawn from that distribution are decoded to outputs that appear similar to the input.

[Figure: pytorch-vae-reconstruction-z10-epoch10.png]

A diagram of a simple VAE

An example VAE, incidentally also the one implemented in the PyTorch code below, looks like this:

[Figure: pytorch-vae-arch-2.png]

A simple VAE implemented using PyTorch

To explore the code below, I used PyCharm in remote interpreter mode, with the interpreter running on a machine with a CUDA-capable GPU. PyCharm parses the type annotations, which helps with code completion. I also made extensive use of the debugger to better understand logic flow and variable contents. (Debuggability is one of PyTorch’s strong points.)

Let me know in the comments to this post if you have any suggestions on how the code comments could be further improved.

# example from https://github.com/pytorch/examples/blob/master/vae/main.py
# commented and type annotated by Charl Botha <cpbotha@vxlabs.com>

import os
import torch
import torch.utils.data
from torch import nn, optim
from torch.autograd import Variable
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image

# changed configuration to this instead of argparse for easier interaction
CUDA = True
SEED = 1
BATCH_SIZE = 128
LOG_INTERVAL = 10
EPOCHS = 10

# connections through the autoencoder bottleneck
# in the pytorch VAE example, this is 20
ZDIMS = 20

# I do this so that the MNIST dataset is downloaded where I want it
os.chdir("/home/cpbotha/Downloads/pytorch-vae")

torch.manual_seed(SEED)
if CUDA:
    torch.cuda.manual_seed(SEED)

# pin_memory gives page-locked host memory, making host-to-GPU copies faster
kwargs = {'num_workers': 1, 'pin_memory': True} if CUDA else {}

# Download or load downloaded MNIST dataset
# shuffle data at every epoch
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=BATCH_SIZE, shuffle=True, **kwargs)

# Same for test data
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.ToTensor()),
    batch_size=BATCH_SIZE, shuffle=True, **kwargs)


class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()

        # ENCODER
        # 28 x 28 pixels = 784 input pixels, 400 outputs
        self.fc1 = nn.Linear(784, 400)
        # elementwise rectified linear unit applied to the 400 hidden units:
        # max(0, x)
        self.relu = nn.ReLU()
        self.fc21 = nn.Linear(400, ZDIMS)  # mu layer
        self.fc22 = nn.Linear(400, ZDIMS)  # logvariance layer
        # this last layer bottlenecks through ZDIMS connections

        # DECODER
        # from bottleneck to hidden 400
        self.fc3 = nn.Linear(ZDIMS, 400)
        # from hidden 400 to 784 outputs
        self.fc4 = nn.Linear(400, 784)
        self.sigmoid = nn.Sigmoid()

    def encode(self, x: Variable) -> (Variable, Variable):
        """Input vector x -> fully connected 1 -> ReLU -> (fully connected
        21, fully connected 22)

        Parameters
        ----------
        x : [128, 784] matrix; 128 digits of 28x28 pixels each

        Returns
        -------

        (mu, logvar) : ZDIMS mean units one for each latent dimension, ZDIMS
            variance units one for each latent dimension

        """

        # h1 is [128, 400]
        h1 = self.relu(self.fc1(x))  # type: Variable
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu: Variable, logvar: Variable) -> Variable:
        """THE REPARAMETERIZATION IDEA:

        For each training sample (we get a batch of 128 at a time)

        - take the current learned mu, stddev for each of the ZDIMS
          dimensions and draw a random sample from that distribution
        - the whole network is trained so that these randomly drawn
          samples decode to output that looks like the input
        - which will mean that the std, mu will be learned
          *distributions* that correctly encode the inputs
        - due to the additional KLD term (see loss_function() below)
          the distribution will tend to unit Gaussians

        Parameters
        ----------
        mu : [128, ZDIMS] mean matrix
        logvar : [128, ZDIMS] log variance matrix

        Returns
        -------

        During training: a random sample from the learned ZDIMS-dimensional
        normal distribution; during inference: its mean.

        """

        if self.training:
            # multiply log variance with 0.5, then in-place exponent
            # yielding the standard deviation
            std = logvar.mul(0.5).exp_()  # type: Variable
            # - std.data is the [128,ZDIMS] tensor that is wrapped by std
            # - so eps is [128,ZDIMS] with all elements drawn from a
            #   mean 0, stddev 1 normal distribution, i.e. 128 random
            #   ZDIMS-float vectors
            eps = Variable(std.data.new(std.size()).normal_())
            # - sample from a normal distribution with standard
            #   deviation = std and mean = mu by multiplying mean 0
            #   stddev 1 sample with desired std and mu, see
            #   https://stats.stackexchange.com/a/16338
            # - so we have 128 sets (the batch) of random ZDIMS-float
            #   vectors sampled from normal distribution with learned
            #   std and mu for the current input
            return eps.mul(std).add_(mu)

        else:
            # During inference, we simply spit out the mean of the
            # learned distribution for the current input.  We could
            # use a random sample from the distribution, but mu of
            # course has the highest probability.
            return mu

    def decode(self, z: Variable) -> Variable:
        h3 = self.relu(self.fc3(z))
        return self.sigmoid(self.fc4(h3))

    def forward(self, x: Variable) -> (Variable, Variable, Variable):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


model = VAE()
if CUDA:
    model.cuda()


def loss_function(recon_x, x, mu, logvar) -> Variable:
    # how well do input x and output recon_x agree?
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784))

    # KLD is Kullback–Leibler divergence -- how much does one learned
    # distribution deviate from another, in this specific case the
    # learned distribution from the unit Gaussian

    # see Appendix B from VAE paper:
    # Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
    # https://arxiv.org/abs/1312.6114
    # - appendix B gives the *negative* KLD in closed form:
    #   -D_{KL} = 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    # hence the leading minus sign in the line below
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Normalise by same number of elements as in reconstruction
    KLD /= BATCH_SIZE * 784
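    # (with the default size_average=True, BCE above is likewise an average
    # over batch * 784 elements, so the two terms are on comparable scales)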

    # BCE tries to make our reconstruction as accurate as possible
    # KLD tries to push the distributions as close as possible to unit Gaussian
    return BCE + KLD

# Dr Diederik Kingma: as if VAEs weren't enough, he also gave us Adam!
optimizer = optim.Adam(model.parameters(), lr=1e-3)


def train(epoch):
    # toggle model to train mode
    model.train()
    train_loss = 0
    # in the case of MNIST, len(train_loader.dataset) is 60000
    # each `data` is of BATCH_SIZE samples and has shape [128, 1, 28, 28]
    for batch_idx, (data, _) in enumerate(train_loader):
        data = Variable(data)
        if CUDA:
            data = data.cuda()
        optimizer.zero_grad()

        # push whole batch of data through VAE.forward() to get recon_loss
        recon_batch, mu, logvar = model(data)
        # calculate scalar loss
        loss = loss_function(recon_batch, data, mu, logvar)
        # calculate the gradient of the loss w.r.t. the graph leaves
        # i.e. input variables -- by the power of pytorch!
        loss.backward()
        train_loss += loss.data[0]
        optimizer.step()
        if batch_idx % LOG_INTERVAL == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.data[0] / len(data)))

    print('====> Epoch: {} Average loss: {:.4f}'.format(
          epoch, train_loss / len(train_loader.dataset)))


def test(epoch):
    # toggle model to test / inference mode
    model.eval()
    test_loss = 0

    # each data is of BATCH_SIZE (default 128) samples
    for i, (data, _) in enumerate(test_loader):
        if CUDA:
            # make sure this lives on the GPU
            data = data.cuda()

        # we're only going to infer, so no autograd at all required: volatile=True
        data = Variable(data, volatile=True)
        recon_batch, mu, logvar = model(data)
        test_loss += loss_function(recon_batch, data, mu, logvar).data[0]
        if i == 0:
          n = min(data.size(0), 8)
          # for the first batch of each epoch, show the first 8 input digits
          # with right below them the reconstructed output digits
          comparison = torch.cat([data[:n],
                                  recon_batch.view(BATCH_SIZE, 1, 28, 28)[:n]])
          save_image(comparison.data.cpu(),
                     'results/reconstruction_' + str(epoch) + '.png', nrow=n)

    test_loss /= len(test_loader.dataset)
    print('====> Test set loss: {:.4f}'.format(test_loss))


for epoch in range(1, EPOCHS + 1):
    train(epoch)
    test(epoch)

    # 64 sets of random ZDIMS-float vectors, i.e. 64 locations / MNIST
    # digits in latent space
    sample = Variable(torch.randn(64, ZDIMS))
    if CUDA:
        sample = sample.cuda()
    sample = model.decode(sample).cpu()

    # save out as an 8x8 matrix of MNIST digits
    # this will give you a visual idea of how well latent space can generate things
    # that look like digits
    save_image(sample.data.view(64, 1, 28, 28),
               'results/sample_' + str(epoch) + '.png')

How to debug PyInstaller DLL / PYD load failed issues on Windows

TL;DR

When debugging DLL load errors on Windows, use lucasg’s open source and more modern rewrite of the old Dependency Walker software. Very importantly, keep on drilling down through indirect dependencies until you find the missing DLLs.

The Problem

Recently I had to package up a wxPython and VTK-based app for standalone deployment on Windows. Because of great experience with PyInstaller, I opted to use this tool.

On the first try with the freshly built package on the deployment machine, the app refused to start up due to an ImportError: DLL load failed: The specified module could not be found., specifically involving the vtk.vtkCommonCorePython.pyd Python extension DLL.

What was frustrating is that the relevant file was definitely present and in the right place, namely the same folder as the exe file.

A test app that imports only wx (4.0.0rc) and vtk (8.0.1) generated the following traceback:

[3096] LOADER: Running pyiboot01_bootstrap.py
[3096] LOADER: Running t1.py
Traceback (most recent call last):
  File "site-packages\vtk\vtkCommonCore.py", line 5, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 718, in load_module
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "t1.py", line 9, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
  File "site-packages\vtk\__init__.py", line 41, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
  File "site-packages\vtk\vtkCommonCore.py", line 9, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 718, in load_module
ImportError: DLL load failed: The specified module could not be found.
[3096] Failed to execute script t1
[3096] LOADER: OK.
[3096] LOADER: Cleaning up Python interpreter.

Digging in using pdb and ctypes.WinDLL

By adding import pdb; pdb.set_trace() at a strategic point, I could use the Python debugger to try and investigate why it was not able to load the DLL which was clearly in the right place.

First I confirmed that it could load other namespaced PYDs in the same directory:

(Pdb) import ctypes
(Pdb) ctypes.WinDLL("wx.siplib.pyd")
<PyInstallerWinDLL 'C:\Users\cpbotha\Downloads\t1\wx.siplib.pyd', handle 7fff66320000 at 0x151a6781048>

However, using the same invocation on the offending PYD would still raise an error. By the way, I had to disable PyInstaller’s silly exceptions, because they masked the underlying OSError exception.

A great deal of good that did me: Windows 10 gave me no additional information other than reporting “error 126” on the top-level DLL. Why can’t such a mature system offer a little more guidance?

(Pdb) ctypes.WinDLL("vtk.vtkCommonCorePython.pyd")
>>> OSError: [WinError 126] The specified module could not be found
(Pdb) ctypes.WinDLL("C:\\Users\\cpbotha\\Downloads\\t1\\vtk.vtkCommonCorePython.pyd")
>>> OSError: [WinError 126] The specified module could not be found
(Pdb) import os
(Pdb) os.path.exists("C:\\Users\\cpbotha\\Downloads\\t1\\vtk.vtkCommonCorePython.pyd")
True
(Pdb) ctypes.WinDLL("vtkCommonCorePython.pyd")
>>> OSError: [WinError 126] The specified module could not be found
(Pdb) os.path.exists("vtkCommonCorePython.pyd")
True

I spent more time than I should have in pdb, tracing through the complicated innards of PyInstaller’s and Python’s import machinery. The fact that some PYDs loaded and some did not should have pushed me much sooner in the direction of nested dependencies.

Seeing the light with lucasg’s Dependencies

Although I had at an earlier stage checked DLL loading first with Dependency Walker (this gets very confused by the new Windows api-ms-win-* DLLs) and later with lucasg’s improved utility, I did not drill down far enough into the dependency tree.

It’s important to keep on drilling down until you see missing DLLs; the application won’t automatically traverse the whole tree for you.
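If you would rather script that drill-down, a rough sketch along the following lines should work. This assumes the third-party pefile package; direct_imports and find_missing are hypothetical helpers of my own, and system DLLs are only checked in System32, so treat its output as a starting point rather than a definitive answer:

import os
import pefile  # third-party: pip install pefile

def direct_imports(path):
    """Return the names of DLLs that the given PE file imports directly."""
    pe = pefile.PE(path, fast_load=True)
    pe.parse_data_directories(
        directories=[pefile.DIRECTORY_ENTRY['IMAGE_DIRECTORY_ENTRY_IMPORT']])
    return [entry.dll.decode('ascii')
            for entry in getattr(pe, 'DIRECTORY_ENTRY_IMPORT', [])]

def find_missing(root, bundle_dir):
    """Walk the import tree breadth-first, returning DLLs that are neither
    in the PyInstaller bundle directory nor in System32."""
    system32 = os.path.join(os.environ['WINDIR'], 'System32')
    seen, queue, missing = set(), [root], []
    while queue:
        for dll in direct_imports(queue.pop(0)):
            name = dll.lower()
            # api-ms-win-* forwarders are resolved via Windows API sets
            if name in seen or name.startswith('api-ms-win-'):
                continue
            seen.add(name)
            bundled = os.path.join(bundle_dir, dll)
            if os.path.exists(bundled):
                queue.append(bundled)  # drill down into nested dependencies
            elif not os.path.exists(os.path.join(system32, dll)):
                missing.append(dll)
    return missing

print(find_missing('vtk.vtkCommonCorePython.pyd', '.'))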

Anyway, drilling down from the offending vtk.vtkCommonCorePython.pyd soon enough led me to the culprit: the Intel Threading Building Blocks (TBB) conda package I was using had accidentally been built in debug mode, and the debug runtimes it relied on were obviously not being deployed:

[Figure: screenshot_2017-12-06_13-52-48.png]

After switching to a TBB conda package from a different channel, the app was finally able to start up and run on the deployment machine.

Run code on remote ipython kernels with Emacs and orgmode.

As is briefly documented on the ob-ipython github, one can run code on remote ipython kernels.

In this post, I give a little more detail, and show that this also works wonderfully for remote generation but local embedding of graphics in Emacs Org mode.

As I hinted previously, the jupyter notebook is a great interface for computational coding, but Emacs and Org mode offer far more flexible editing and are more robust as a documentation format.

On to the show. (This whole blog post is a single Org mode file with org-babel source code blocks, the last of which is live.)

Start by starting the ipython kernel on the remote machine:

me@server$ jupyter --runtime-dir
>>> /run/user/1000/jupyter

me@server$ ipython kernel
>>> To connect another client to this kernel, use:
>>>    --existing kernel-12818.json

We have to copy that json connection file to the client machine, and then connect to it with the jupyter console:

me@client$ jupyter --runtime-dir
>>> /Users/cpbotha/Library/Jupyter/runtime

me@client$ cd /Users/cpbotha/Library/Jupyter/runtime
me@client$ scp me@server:/run/user/1000/jupyter/kernel-12818.json .
me@client$ jupyter console --existing kernel-12818.json --ssh meepz97
>>> [ZMQTerminalIPythonApp] To connect another client via this tunnel, use:
>>> [ZMQTerminalIPythonApp] --existing kernel-12818-ssh.json

Note that we copy the json connection file into our local jupyter runtime directory. This is where jupyter console will create the ssh connection file (kernel-12818-ssh.json above), enabling us to reference it by filename only, instead of by its full path, in any ob-ipython source code blocks.

Now you can open ob-ipython org-babel source blocks which will connect to the remote kernel. They start like this:

#+BEGIN_SRC ipython :session kernel-12818-ssh.json :exports both :results raw drawer

Let’s try it out:

%matplotlib inline
# changed to png only for the blog post
%config InlineBackend.figure_format = 'png'

from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
cset = ax.contour(X, Y, Z, cmap=cm.coolwarm)
ax.clabel(cset, fontsize=9, inline=1)

plt.show()

[Figure: 428ndb.png]

Pretty amazing! The code is executed on the remote machine, and the resultant plot is piped back and displayed embedded in orgmode (as PNG here, per the figure_format setting above; normally I use SVG)!

Getting ob-ipython to show documentation during company completion.

ob-ipython is an Emacs package that enables org-babel to talk to a running ipython kernel. The upshot of this is that you can use org-mode instead of the jupyter notebook for interspersing executable code, results and documentation.

The screenshot from the ob-ipython github shows it in action:

[Figure: ob-ipython-github-screenshot.jpg]

Personally, I would like to use this for controlling ipython kernels on remote GPU- and deep learning-capable Linux machines, all via Emacs on my laptop. The jupyter notebook is really fantastic, but it’s not Emacs.

You could also use ein for this, but then you would have to give up org-mode.

As I was testing ob-ipython yesterday, I noticed that its company backend (company is a completion framework for Emacs) lacked doc-buffer support. Usually, as you’re exploring possible code completions, you can press <f1> or C-h to show help on the currently highlighted completion candidate.

Fast-forward an hour or two of Emacs Lisp surgery, and I was able to hook up the ob-ipython company-mode backend to ob-ipython’s inspection facility. Now pressing C-h gets you detailed help in a company-documentation buffer!

Here is my github pull request, and here is a screenshot of the company-mode ob-ipython documentation in action:

[Figure: ob-ipython-company-doc-buffer.png]

Hopefully this will be merged soon so it can find its way onto the Melpa package archives.

Here’s a bonus screenshot showing the ob-ipython notes from my org-mode journal where you can see embedded Python code that has been executed via the connected ipython kernel, with the resultant SVG format plot embedded and displayed inline:

[Figure: ob-ipython-notes-example-nov-2017-3.png]

P.S. I am currently disabling elpy-mode when the ob-ipython minor mode is active, until I figure out a better solution to elpy interfering with ob-ipython.

(use-package ob-ipython
  :config
  ;; for now I am disabling elpy whenever the ob-ipython minor mode is active
  ;; what we should actually do, is just to ensure that
  ;; ob-ipython's company backend comes before elpy's (TODO)
  (add-hook 'ob-ipython-mode-hook
            (lambda ()
              (elpy-mode 0)
              (company-mode 1)))
  (add-to-list 'company-backends 'company-ob-ipython)
  (add-to-list 'org-latex-minted-langs '(ipython "python")))

The RobotDyn Joystick shield has the XBee TX / RX lines switched to D0 and D1 or completely disconnected

RobotDyn offers a well-manufactured Joystick and XBee shield for the Arduino Uno, which I am currently using for some IEEE 802.15.4-related experiments.

However, as this is not mentioned in any official documentation, I want to document here that the XBee TX / RX lines are connected to the Arduino D1 and D0 lines respectively, and can only be disconnected via the “USB sketch update / Wireless” hardware switch at the top left of the board.

The D0 and D1 lines are of course also used by the Arduino’s main serial interface, i.e. its connection to the host computer. This is why the switch has to be on “USB sketch update” when you program the board.

Unfortunately, this also means that it won’t be possible to send debug output from the Arduino to the host machine’s serial monitor while the XBee is active, i.e. while the switch is in “wireless” mode. This is especially problematic when writing and debugging new programs that use the XBee radio module.

With the SparkFun XBee Shield, one can switch the mounted XBee to use either pins 0 and 1, or pins 2 and 3 (see the section named “UART/SoftwareSerial Switch”), neatly solving this problem. It would have been great if the RobotDyn unit had done something similar, but keeping cost under control probably played a role.