Use the Google Cloud Speech API to transcribe a podcast

As I was listening to the December 21 episode of CPPCast (together with TWiML&AI one of my two favourite podcasts), I couldn’t help but be a little bewildered by the number of times the guest used the word “like” during their interview.

Most of these were examples of speech disfluency, or filler words, but I have to admit that they detracted somewhat from an otherwise interesting discourse.

During another CPPCast episode I listened to recently, the hosts coincidentally discussed the idea of making transcriptions of the casts available.

These two occurrences, namely the abundance of the “like” disfluency and the mention of transcription, connected in the back of my mind, and produced the idea of finding out how one could go about using a publicly available speech API to transcribe the podcast and count the number of utterances of the word “like”.

Due to the golden age of information we find ourselves in, this was not that hard at all.

Selecting the API

A short investigation seemed to indicate that Microsoft’s offerings would not let me transcribe just under an hour of speech, so I turned to Google.

The Google Cloud Speech API has specific support for the asynchronous transcription of speech recordings of up to 3 hours.

Setting up the project and service account

Make sure that you can access the Google Cloud Dashboard with your Google account. I created a new project for this experiment called cppcast-speech-to-text.

Within that project, select APIs & Services dashboard from the menu on the left, and then enable the Speech API for that project by selecting the Enable APIs and Services link at the top.

Next, go to IAM & Admin and Service Accounts via the main menu, and create a service account for this project.

Remember to select the download JSON private key checkbox.

Transcode and upload the audio

For the Speech API, you will have to transcode the MP3 to FLAC, and you will have to upload the file to a Google Cloud Storage bucket.

I transcoded the MP3 to a 16kHz mono FLAC (preferred by the API) as follows:

ffmpeg -i cppcast-131.mp3 -y -vn -acodec flac -ar 16000 -ac 1 cppcast-131.flac

This turned my 39MB MP3 into a 61MB FLAC file.

Create a storage bucket via the cloud dashboard main menu’s Storage / Browser menus, and then upload the FLAC file to that bucket via the web interface.

Note down the BUCKETNAME and the FILENAME; you’ll need these later when starting the transcription job.
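
If you would rather script the upload than click through the web interface, a sketch along these lines, using the google-cloud-storage Python client library (an assumption on my part; I used the web interface), should also work:

# upload the transcoded FLAC with the google-cloud-storage client
# (pip install google-cloud-storage; service account key activated as below)
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('BUCKETNAME')   # your bucket name
blob = bucket.blob('FILENAME')         # target object name in the bucket
blob.upload_from_filename('cppcast-131.flac')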

Start the transcription job

I used the Asynchronous Speech Recognition API, as this is the only API supporting speech segments this long.

First start up the Google Cloud Shell by clicking on the boxed >_ icon at the top left. In this super convenient Debian Linux shell, gcloud is already installed, which is why I chose to use it.

Upload your service account JSON private key, and activate it by doing the following:

export GOOGLE_APPLICATION_CREDENTIALS=~/your-service-account-key.json

Using one of the installed editors, or just uploading, create a file called async-request.json in your home directory:

  "config": {
      "encoding": "FLAC",
      "sampleRateHertz": 16000,
      "language_code": "en-US"

You are now ready to make the request using curl, and the async-request.json file you created:

curl -X POST \
     -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
     -H "Content-Type: application/json; charset=utf-8" \
     --data @async-request.json "https://speech.googleapis.com/v1/speech:longrunningrecognize"

You should see a response looking something like this:

  "name": "LONG_JOB_NUMBER"

Soon after this, you can start seeing how your job is progressing:

curl -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
     -H "Content-Type: application/json; charset=utf-8" \

The response will look like this while your request is being processed:

  "name": "LONG_JOB_NUMBER",
  "metadata": {
    "@type": "",
    "progressPercent": 29,
    "startTime": "2018-02-14T20:17:05.885941Z",
    "lastUpdateTime": "2018-02-14T20:22:26.830868Z"

In my case, the 56 minute podcast was transcribed in just under 17 minutes.

When the job is done, the response to the above curl request will contain the transcribed text. It looks something like this:

  "name": "LONG_JOB_NUMBER",
  "metadata": {
    "@type": "",
    "progressPercent": 100,
    "startTime": "2018-02-14T20:17:05.885941Z",
    "lastUpdateTime": "2018-02-14T20:35:16.404144Z"
  "done": true,
  "response": {
    "@type": "",
    "results": [
        "alternatives": [
            "transcript": "I said 130 want to see PP cast with guest Nicole mazzuca recorded September 14th 2017",
            "confidence": 0.8874592

// and so on for the whole recording

You can download the full transcription here.

Too many likes?

I wrote the following Python to tally up the total number of words and the total number of “like” utterances.

import json

with open('/Users/cpbotha/Downloads/cppcast-131-text.json') as f:
    # results: a list of dicts, each with 'alternatives', which is a list of transcripts
    res = json.load(f)['response']['results']

num_like = 0
num_words = 0
for r in res:
    alts = r['alternatives']
    # ensure that we only have one alternative per result
    assert len(alts) == 1
    # break into lowercase words
    t = alts[0]['transcript'].strip().lower().split()
    # tally up total number of words
    num_words += len(t)
    # count the like utterances
    num_like += sum(1 for w in t if w == 'like')
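
To actually see the tallies, two print statements at the end will do (my addition; the 56 is simply the episode length in minutes):

print(f'{num_words} words, {num_like} of which were "like"')
print(f'{num_like / 56:.2f} likes per minute')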

In this 56 minute long episode of CPPCast, 7411 words were detected, 214 of which were the word “like”.

This is not quite as many as I imagined, but still comes down to 3.82 likes per minute, which is enough to be quite noticeable.

Some closing observations:
  • We should try to use “like” and other speech disfluencies far less often. Inserting a small pause makes more sense: The speaker and the listeners get a little break to process the ongoing speech, and the speech comes across as more measured.
  • All in all, it took me about 2 hours from idea to transcribed text. I find it wonderful that machine learning for speech-to-text has become so democratised.
  • After my transcription job was complete, I saw that it was possible to supply phrase hints to the API. I could have uploaded a list of words we expect to occur during this podcast, such as “CPPCast” and “C++”, and this would have been used by the API to further improve its transcription.
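
Going by the API documentation, such phrase hints go into the request config via the speechContexts field; an untested sketch of what async-request.json could then look like:

{
  "config": {
      "encoding": "FLAC",
      "sampleRateHertz": 16000,
      "languageCode": "en-US",
      "speechContexts": [{
          "phrases": ["CPPCast", "C++"]
      }]
  },
  "audio": {
      "uri": "gs://BUCKETNAME/FILENAME"
  }
}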

Creating a Django migration for a GiST / GIN index with a special index operator.

In order to add a GiST index on a Postgres database that could be used to accelerate trigram matches using the pg_trgm module and the special gist_trgm_ops operator class, I had to code up a special Django Index subclass.

Django will hopefully soon support custom index operators, but if you need the functionality right now, this example will do the trick.

The special GiST index class looks like this:

from django.contrib.postgres.indexes import GistIndex

class GistIndexTrgrmOps(GistIndex):
    def create_sql(self, model, schema_editor):
        # - this Statement is instantiated by the _create_index_sql()
        #   method of django.db.backends.base.schema.BaseDatabaseSchemaEditor,
        #   using sql_create_index template from
        #   django.db.backends.postgresql.schema.DatabaseSchemaEditor
        # - the template has original value:
        #   "CREATE INDEX %(name)s ON %(table)s%(using)s (%(columns)s)%(extra)s"
        statement = super().create_sql(model, schema_editor)
        # - however, we want to use a GIST index to accelerate trigram
        #   matching, so we want to add the gist_trgm_ops index operator
        #   class
        # - so we replace the template with:
        #   "CREATE INDEX %(name)s ON %(table)s%(using)s (%(columns)s gist_trgrm_ops)%(extra)s"
        statement.template =\
            "CREATE INDEX %(name)s ON %(table)s%(using)s (%(columns)s gist_trgm_ops)%(extra)s"

        return statement

Which you can then use in your model class like this:

class YourModel(models.Model):
    some_field = models.TextField(...)

    class Meta:
        indexes = [
            GistIndexTrgrmOps(fields=['some_field'])
        ]
The migration will then generate SQL (use sqlmigrate to inspect the SQL) which looks like this:

CREATE INDEX "app_somefield_139a28_gist" ON "app_yourmodel" USING gist ("some_field" gist_trgrm_ops);
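
As a usage sketch (my addition, with a made-up search term): a fuzzy query that such an index can accelerate, using Django’s TrigramSimilarity annotation, which also builds on pg_trgm:

from django.contrib.postgres.search import TrigramSimilarity

# find rows whose some_field is trigram-similar to the search term;
# the GiST index above lets Postgres avoid a sequential scan
matches = (YourModel.objects
           .annotate(similarity=TrigramSimilarity('some_field', 'search term'))
           .filter(similarity__gt=0.3)
           .order_by('-similarity'))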

You can easily modify this pattern for other (Postgres) indices and their operators.

Variational Autoencoder in PyTorch, commented and annotated.

I have recently become fascinated with (Variational) Autoencoders and with PyTorch.

Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar’s blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma’s original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.

In my case, I wanted to understand VAEs from the perspective of a PyTorch implementation. I started with the VAE example on the PyTorch github, adding explanatory comments and Python type annotations as I was working my way through it.

This post summarises my understanding, and contains my commented and annotated version of the PyTorch VAE example. I hope it helps!

What is PyTorch?

PyTorch is FAIR’s (that’s Facebook AI Research) Python dynamic deep learning / neural network library. The way that FAIR has managed to make neural network experimentation so dynamic and so natural is nothing short of miraculous. Read this post to find out more about the reasons for excitement, many of which I share.

What is an autoencoder?

The general idea of the autoencoder (AE) is to squeeze information through a narrow bottleneck between the mirrored encoder (input) and decoder (output) parts of a neural network (see the diagram below).

Because the network architecture and loss function are set up so that the output tries to emulate the input, the network has to learn how to encode input data on the very limited space represented by the bottleneck.
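
As a minimal sketch (with hypothetical layer sizes, much smaller than anything you would use in practice), such an AE could look like this in PyTorch:

import torch
from torch import nn

# toy AE: 784 input pixels squeezed through a 32-unit bottleneck
autoencoder = nn.Sequential(
    nn.Linear(784, 32),  # encoder: compress input into the bottleneck
    nn.ReLU(),
    nn.Linear(32, 784),  # decoder: reconstruct the input from the bottleneck
    nn.Sigmoid(),
)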

What is a variational autoencoder?

Variational Autoencoders, or VAEs, are an extension of AEs that additionally force the network to ensure that samples are normally distributed over the space represented by the bottleneck.

They do this by having the encoder output two n-dimensional (where n is the number of dimensions in the latent space) vectors representing the mean and the standard deviation. These Gaussians are sampled, and the samples are sent through the decoder. This is the reparameterization step; also see my comments in the reparameterize() function.
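
In formula form, a latent sample is z = mu + sigma * eps, with eps drawn from a unit Gaussian N(0, I); because all the randomness is confined to eps, the sampling step remains differentiable with respect to mu and sigma, which is what makes end-to-end training possible.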

What a fabulous trick!

The loss function has a term for input-output similarity, and, importantly, it has a second term that uses the Kullback–Leibler divergence to test how close the learned Gaussians are to unit Gaussians.
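
For a diagonal Gaussian with mean mu and variance sigma^2, this KLD term has the closed form -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2); this is exactly what loss_function() in the code below computes from logvar.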

In other words, this extension to AEs enables us to derive Gaussian distributed latent spaces from arbitrary data. Given for example a large set of shapes, the latent space would be a high-dimensional space where each shape is represented by a single point, and the points would be normally distributed over all dimensions. With this one can represent existing shapes, but one can also synthesise completely new and plausible shapes by sampling points in latent space.

Results using MNIST

Below you see 64 random samples of a two-dimensional latent space of MNIST digits that I made with the example below, with ZDIMS=2.


Next is the reconstruction of 8 random unseen test digits via a more reasonable 20-dimensional latent space. Keep in mind that the VAE has learned a 20-dimensional normal distribution for any input digit, from which samples are drawn that reconstruct via the decoder to outputs that appear similar to the input.


A diagram of a simple VAE

An example VAE, incidentally also the one implemented in the PyTorch code below, looks like this:


A simple VAE implemented using PyTorch

I used PyCharm in remote interpreter mode, with the interpreter running on a machine with a CUDA-capable GPU to explore the code below. PyCharm parses the type annotations, which helps with code completion. I also made extensive use of the debugger to better understand logic flow and variable contents. (Debuggability is one of PyTorch’s strong points.)

Let me know in the comments to this post if you have any suggestions on how the code comments could be further improved.

# example from https://github.com/pytorch/examples/blob/master/vae/main.py
# commented and type annotated by Charl Botha <>

import os
import torch
import torch.utils.data
from torch import nn, optim
from torch.autograd import Variable
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image

# changed configuration to this instead of argparse for easier interaction
CUDA = True
SEED = 1
BATCH_SIZE = 128
LOG_INTERVAL = 10
EPOCHS = 10

# connections through the autoencoder bottleneck
# in the pytorch VAE example, this is 20
ZDIMS = 20

# I do this so that the MNIST dataset is downloaded where I want it
os.chdir("/a/dir/of/your/choice")  # hypothetical path; adjust or remove

torch.manual_seed(SEED)
if CUDA:
    torch.cuda.manual_seed(SEED)

# DataLoader instances will load tensors directly into GPU memory
kwargs = {'num_workers': 1, 'pin_memory': True} if CUDA else {}

# Download or load downloaded MNIST dataset
# shuffle data at every epoch
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=BATCH_SIZE, shuffle=True, **kwargs)

# Same for test data
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.ToTensor()),
    batch_size=BATCH_SIZE, shuffle=True, **kwargs)

class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()

        # ENCODER
        # 28 x 28 pixels = 784 input pixels, 400 outputs
        self.fc1 = nn.Linear(784, 400)
        # rectified linear unit layer from 400 to 400
        # max(0, x)
        self.relu = nn.ReLU()
        self.fc21 = nn.Linear(400, ZDIMS)  # mu layer
        self.fc22 = nn.Linear(400, ZDIMS)  # logvariance layer
        # this last layer bottlenecks through ZDIMS connections

        # DECODER
        # from bottleneck to hidden 400
        self.fc3 = nn.Linear(ZDIMS, 400)
        # from hidden 400 to 784 outputs
        self.fc4 = nn.Linear(400, 784)
        self.sigmoid = nn.Sigmoid()

    def encode(self, x: Variable) -> (Variable, Variable):
        """Input vector x -> fully connected 1 -> ReLU -> (fully connected
        21, fully connected 22)

        Parameters
        ----------
        x : [128, 784] matrix; 128 digits of 28x28 pixels each

        Returns
        -------
        (mu, logvar) : ZDIMS mean units one for each latent dimension, ZDIMS
            variance units one for each latent dimension
        """

        # h1 is [128, 400]
        h1 = self.relu(self.fc1(x))  # type: Variable
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu: Variable, logvar: Variable) -> Variable:
        """THE REPARAMETERIZATION IDEA:

        For each training sample (we get 128 batched at a time)

        - take the current learned mu, stddev for each of the ZDIMS
          dimensions and draw a random sample from that distribution
        - the whole network is trained so that these randomly drawn
          samples decode to output that looks like the input
        - which will mean that the std, mu will be learned
          *distributions* that correctly encode the inputs
        - due to the additional KLD term (see loss_function() below)
          the distribution will tend to unit Gaussians

        Parameters
        ----------
        mu : [128, ZDIMS] mean matrix
        logvar : [128, ZDIMS] variance matrix

        Returns
        -------
        During training random sample from the learned ZDIMS-dimensional
        normal distribution; during inference its mean.
        """

        if self.training:
            # multiply log variance with 0.5, then in-place exponent
            # yielding the standard deviation
            std = logvar.mul(0.5).exp_()  # type: Variable
            # - std.data is the [128,ZDIMS] tensor that is wrapped by std
            # - so eps is [128,ZDIMS] with all elements drawn from a mean 0
            #   and stddev 1 normal distribution that is 128 samples
            #   of random ZDIMS-float vectors
            eps = Variable(std.data.new(std.size()).normal_())
            # - sample from a normal distribution with standard
            #   deviation = std and mean = mu by multiplying mean 0
            #   stddev 1 sample with desired std and mu
            # - so we have 128 sets (the batch) of random ZDIMS-float
            #   vectors sampled from normal distribution with learned
            #   std and mu for the current input
            return eps.mul(std).add_(mu)

        else:
            # During inference, we simply spit out the mean of the
            # learned distribution for the current input.  We could
            # use a random sample from the distribution, but mu of
            # course has the highest probability.
            return mu

    def decode(self, z: Variable) -> Variable:
        h3 = self.relu(self.fc3(z))
        return self.sigmoid(self.fc4(h3))

    def forward(self, x: Variable) -> (Variable, Variable, Variable):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

model = VAE()
if CUDA:
    model.cuda()

def loss_function(recon_x, x, mu, logvar) -> Variable:
    # how well do input x and output recon_x agree?
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784))

    # KLD is Kullback–Leibler divergence -- how much does one learned
    # distribution deviate from another, in this specific case the
    # learned distribution from the unit Gaussian

    # see Appendix B from VAE paper:
    # Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
    # - D_{KL} = 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    # note the negative D_{KL} in appendix B of the paper
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Normalise by same number of elements as in reconstruction
    KLD /= BATCH_SIZE * 784

    # BCE tries to make our reconstruction as accurate as possible
    # KLD tries to push the distributions as close as possible to unit Gaussian
    return BCE + KLD

# Dr Diederik Kingma: as if VAEs weren't enough, he also gave us Adam!
optimizer = optim.Adam(model.parameters(), lr=1e-3)

def train(epoch):
    # toggle model to train mode
    model.train()
    train_loss = 0
    # in the case of MNIST, len(train_loader.dataset) is 60000
    # each `data` is of BATCH_SIZE samples and has shape [128, 1, 28, 28]
    for batch_idx, (data, _) in enumerate(train_loader):
        data = Variable(data)
        if CUDA:
            data = data.cuda()
        optimizer.zero_grad()

        # push whole batch of data through VAE.forward() to get recon_loss
        recon_batch, mu, logvar = model(data)
        # calculate scalar loss
        loss = loss_function(recon_batch, data, mu, logvar)
        # calculate the gradient of the loss w.r.t. the graph leaves
        # i.e. input variables -- by the power of pytorch!
        loss.backward()
        train_loss += loss.data[0]
        optimizer.step()
        if batch_idx % LOG_INTERVAL == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.data[0] / len(data)))

    print('====> Epoch: {} Average loss: {:.4f}'.format(
          epoch, train_loss / len(train_loader.dataset)))

def test(epoch):
    # toggle model to test / inference mode
    model.eval()
    test_loss = 0

    # each data is of BATCH_SIZE (default 128) samples
    for i, (data, _) in enumerate(test_loader):
        if CUDA:
            # make sure this lives on the GPU
            data = data.cuda()

        # we're only going to infer, so no autograd at all required: volatile=True
        data = Variable(data, volatile=True)
        recon_batch, mu, logvar = model(data)
        test_loss += loss_function(recon_batch, data, mu, logvar).data[0]
        if i == 0:
          n = min(data.size(0), 8)
          # for the first 128 batch of the epoch, show the first 8 input digits
          # with right below them the reconstructed output digits
          comparison = torch.cat([data[:n],
                                  recon_batch.view(BATCH_SIZE, 1, 28, 28)[:n]])
          save_image(comparison.data.cpu(),
                     'results/reconstruction_' + str(epoch) + '.png', nrow=n)

    test_loss /= len(test_loader.dataset)
    print('====> Test set loss: {:.4f}'.format(test_loss))

for epoch in range(1, EPOCHS + 1):
    train(epoch)
    test(epoch)

    # 64 sets of random ZDIMS-float vectors, i.e. 64 locations / MNIST
    # digits in latent space
    sample = Variable(torch.randn(64, ZDIMS))
    if CUDA:
        sample = sample.cuda()
    sample = model.decode(sample).cpu()

    # save out as an 8x8 matrix of MNIST digits
    # this will give you a visual idea of how well latent space can generate things
    # that look like digits
    save_image(sample.data.view(64, 1, 28, 28),
               'results/sample_' + str(epoch) + '.png')

How to debug PyInstaller DLL / PYD load failed issues on Windows

TL;DR

When debugging DLL load errors on Windows, use lucasg’s open source and more modern rewrite of the old Dependency Walker software. Very importantly, keep on drilling down through indirect dependencies until you find the missing DLLs.

The Problem

Recently I had to package up a wxPython and VTK-based app for standalone deployment on Windows. Because of previous good experience with PyInstaller, I opted to use it.

On the first try on the deployment machine, the freshly built package refused to start up due to an ImportError: DLL load failed: The specified module could not be found., specifically with the vtk.vtkCommonCorePython.pyd Python extension DLL.

What was frustrating is that the relevant file was definitely present and in the right place, namely the same folder as the exe file.

A test app that imports only wx (4.0.0rc) and vtk (8.0.1) generated the following traceback:

[3096] LOADER: Running
[3096] LOADER: Running
Traceback (most recent call last):
  File "site-packages\vtk\__init__.py", line 5, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 718, in load_module
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "t1.py", line 9, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
  File "site-packages\vtk\__init__.py", line 41, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
  File "site-packages\vtk\vtkCommonCore.py", line 9, in <module>
  File "C:\Users\cpbotha\Miniconda3\envs\env1\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 718, in load_module
ImportError: DLL load failed: The specified module could not be found.
[3096] Failed to execute script t1
[3096] LOADER: OK.
[3096] LOADER: Cleaning up Python interpreter.

Digging in using pdb and ctypes.WinDLL

By adding import pdb; pdb.set_trace() at a strategic point, I could use the Python debugger to try and investigate why it was not able to load the DLL which was clearly in the right place.

First I confirmed that it could load other namespaced PYDs in the same directory:

(Pdb) import ctypes
(Pdb) ctypes.WinDLL("wx.siplib.pyd")
<PyInstallerWinDLL 'C:\Users\cpbotha\Downloads\t1\wx.siplib.pyd', handle 7fff66320000 at 0x151a6781048>

However, using the same invocation on the offending PYD would still raise an error. By the way, I had to disable PyInstaller’s silly exceptions, because they masked the underlying OSError exception.

Great deal of good that did me, because Windows 10 gave me no additional information other than reporting “error 126” on the top-level DLL. Why can’t such a mature system offer a little more guidance?

(Pdb) ctypes.WinDLL("vtk.vtkCommonCorePython.pyd")
>>> OSError: [WinError 126] The specified module could not be found
(Pdb) ctypes.WinDLL("C:\\Users\\cpbotha\\Downloads\\t1\\vtk.vtkCommonCorePython.pyd")
>>> OSError: [WinError 126] The specified module could not be found
(Pdb) import os
(Pdb) os.path.exists("C:\\Users\\cpbotha\\Downloads\\t1\\vtk.vtkCommonCorePython.pyd")
(Pdb) ctypes.WinDLL("vtkCommonCorePython.pyd")
>>> OSError: [WinError 126] The specified module could not be found
(Pdb) os.path.exists("vtkCommonCorePython.pyd")

I did spend more time than I should have in pdb tracing through the whole complicated PyInstaller and Python import code innards. The fact that some PYDs loaded and some did not should have more quickly pushed me in the direction of nested dependencies.

Seeing the light with lucasg’s Dependencies

Although I had at an earlier stage checked DLL loading first with Dependency Walker (this gets very confused by the new Windows api-ms-win-* DLLs) and later with lucasg’s improved utility, I did not drill down far enough into the dependency tree.

It’s important to keep on drilling down until you see missing DLLs; the application won’t automatically traverse the tree for you.

Anyways, drilling down from the offending vtk.vtkCommonCorePython.pyd soon enough led me to the culprit: the Intel Threading Building Blocks (TBB) DLL conda package I was using was accidentally built in debug mode, and the debug runtimes it relied on were obviously not being deployed:


After switching to a TBB conda package from a different channel, the app was finally able to start up and run on the deployment machine.

Run code on remote ipython kernels with Emacs and orgmode.

As is briefly documented on the ob-ipython github, one can run code on remote ipython kernels.

In this post, I give a little more detail, and show that this also works wonderfully for remote generation but local embedding of graphics in Emacs Org mode.

As I hinted previously, the jupyter notebook is a great interface for computational coding, but Emacs and Org mode offer far more flexible editing and are more robust as a documentation format.

On to the show. (This whole blog post is a single Org mode file with org-babel source code blocks, the last of which is live.)

First, start the ipython kernel on the remote machine:

me@server$ jupyter --runtime-dir
>>> /run/user/1000/jupyter

me@server$ ipython kernel
>>> To connect another client to this kernel, use:
>>>    --existing kernel-11925.json

We have to copy that json connection file to the client machine, and then connect to it with the jupyter console:

me@client$ jupyter --runtime-dir
>>> /Users/cpbotha/Library/Jupyter/runtime

me@client$ cd /Users/cpbotha/Library/Jupyter/runtime
me@client$ scp me@server:/run/user/1000/jupyter/kernel-11925.json .
me@client$ jupyter console --existing kernel-12818.json --ssh meepz97
>>> [ZMQTerminalIPythonApp] To connect another client via this tunnel, use:
>>> [ZMQTerminalIPythonApp] --existing kernel-12818-ssh.json

Note that we copy the json file into our local jupyter runtime directory; connecting will create the ssh connection file there, and enable us to reference it by its filename only (vs its full path) in any ob-ipython source code blocks.

Now you can open ob-ipython org-babel source blocks which will connect to the remote kernel. They start like this:

#+BEGIN_SRC ipython :session kernel-12818-ssh.json :exports both :results raw drawer

Let’s try it out:

%matplotlib inline
# changed to png only for the blog post
%config InlineBackend.figure_format = 'png'

from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
cset = ax.contour(X, Y, Z, cmap=cm.coolwarm)
ax.clabel(cset, fontsize=9, inline=1)


Pretty amazing! The code is executed on the remote machine, and the resultant plot is piped back and displayed embedded in orgmode, in this case as PNG (normally I use SVG; see the config comment in the code above)!