I’m helping someone process a collection of research data that has been entered by a third party using Excel. We’re using LibreOffice Calc, because research should be reproducible by anyone, not just those in possession of proprietary software licenses (this also means that we use R, JGR and DeduceR instead of SPSS and Statistica; perhaps more on that later).

After having to fix hundreds of badly entered dates with basic functions (we highly recommend using ISO 8601 dates, i.e. YYYY-MM-DD, from the very start instead of ambiguous local formats), we ended up with a stubborn subset of DD-MM-YYYY formatted dates in cells that explicitly had the ISO 8601 format configured.

What could we do to convert these DD-MM-YYYY dates to the ISO 8601 standard YYYY-MM-DD dates?

The trick here is to use LibreOffice’s RIGHT, MID and LEFT functions to pull the faulty date apart, and then to put it back together in a new column using the DATE function.

In this screenshot you can see an example:

RIGHT(A2,4) extracts the 4 rightmost characters of 13-12-1981, yielding the year 1981; MID(A2,4,2) yields the 2 characters starting at position 4, i.e. the month; and LEFT(A2,2) gives us the 2 leftmost characters, in other words the day.

Putting all of this together with the function =DATE(RIGHT(A2,4), MID(A2,4,2), LEFT(A2,2)) will return the date formatted according to the column format setting (right click the column heading, select Format Cells and then the ISO 8601 formatting). Obviously you have to replace A2 with the cell containing the broken date that you want to fix.
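If you would rather fix a whole column outside the spreadsheet, the same pull-apart-and-reassemble logic is a few lines of Python (an illustrative sketch; fix_date is a hypothetical helper, and it assumes the broken dates are strictly DD-MM-YYYY):

```python
from datetime import datetime

def fix_date(broken):
    # Pull the DD-MM-YYYY string apart, exactly like LEFT/MID/RIGHT,
    # and reassemble it as an ISO 8601 YYYY-MM-DD string.
    day, month, year = broken[0:2], broken[3:5], broken[6:10]
    # Round-trip through datetime so impossible dates raise an error
    # instead of silently producing garbage.
    return datetime(int(year), int(month), int(day)).strftime("%Y-%m-%d")

print(fix_date("13-12-1981"))  # → 1981-12-13
```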

To apply this to the whole column, you would create a new column, use the formula above in its first row to fix the first date in the existing broken date column, then replicate the formula all the way down by clicking on the cell and dragging the rectangle at its lower right corner down.

(Searching for “LibreOffice convert date formats” was of no help, whereas doing the same for Excel yielded at least two good answers (one and two), on which this post is based. I’m putting this out there so that searching for LibreOffice will also turn up something useful.)

I’ve just activated CloudFlare (the free tier) for vxlabs.com, hoping for even faster page loads. Most of my WordPress installations already use WP Super Cache to serve mostly static pages when you come here, but CloudFlare should speed this up even further via their content and network traffic optimization, and their servers dotted all over the globe.

Configuration was quite painless (you have to configure your domain’s DNS to point to CloudFlare’s servers, then a few settings using their web interface). It somehow missed my mail server’s DNS record when copying everything over, but that was quick to fix. (It also seems that the SSL-protected roundcube I have on another server in this domain now keeps on logging me out after 2 seconds, but I’m not 100% sure that that’s due to CloudFlare.)

In any case, I soon noticed that the MathJax equations in this post about level sets did not show up. Turns out you need to change the CloudFlare Performance Profile from “CDN + Full Optimizations” to “CDN + Basic Optimizations”, and then your equations should appear again.


(TL;DR See the last paragraph for how to get the Dell U2713HM working on the HDMI output of the Acer V3-571G at 2560×1440 @ 50Hz.)

The Dell Ultrasharp U2713HM is a 27″ IPS panel with a resolution of 2560×1440. I recently acquired this monitor and wanted to connect it to my Linux-only Acer V3-571G i7 laptop, which has only a VGA output (D-SUB; max resolution 2048×1536) and an HDMI 1.4 output.

The monitor has been optimised to show the Dell logo. (image from the Engadget review I linked to above.)

HDMI 1.4 does support 2560×1440, but the HDMI 1.3 input on the Dell U2713HM does not. (The HDMI 1.4 input on the more expensive Dell U2713H does.) This means that we have to use either the DVI or DisplayPort inputs.

For 2560×1440 at 60Hz refresh rate, normal single-link DVI is not sufficient. One either needs dual-link DVI, which I don’t have, or one can use a cheap HDMI to DVI connector and tweak the timings of normal single-link DVI to supply 2560×1440 at a frequency that is as close as possible to 60Hz, but still fits within the available bandwidth.

Part of this tweaking is making use of reduced blanking, an optimization that can be done on LCD panels where there’s no electron beam (as is the case in CRTs) that needs time to be repositioned. In short, we can squeeze out more resolution and refresh from the same bandwidth.

NotebookCheck has a wealth of information on tweaking these timings. Unfortunately, the configuration they supply for 2560×1440 at 55Hz only caused flickering on my setup.

Fortunately, Linus Torvalds (just some guy who seems to know quite a bit about Linux) documented on Google+ his adventures getting such a monitor going under Linux, albeit at a 30Hz refresh rate. Even better, a commenter named Tim Small posted the timings he had generated with a hacked version of cvt!

Based on his timings, I could get my monitor going stably at 2560×1440 at 50Hz. Enter the following in a terminal:

xrandr --newmode "2560x1440_50.00_rb" 200.25  2560 2608 2640 2720  1440 1443 1448 1474  +HSync -Vsync
xrandr --addmode HDMI1 "2560x1440_50.00_rb"


After entering the second line, the new mode is available on HDMI1; switch to it with xrandr --output HDMI1 --mode "2560x1440_50.00_rb". Once applied, 2560×1440 also appears as a selectable mode in the Ubuntu Displays app.
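As a sanity check, the refresh rate implied by that modeline follows from the pixel clock divided by the total horizontal and vertical timings. A quick Python sketch, using the numbers from the xrandr line above:

```python
# Numbers taken from the modeline above: 200.25 MHz pixel clock,
# 2720 total horizontal pixels, 1474 total vertical lines.
pixel_clock_hz = 200.25e6
htotal, vtotal = 2720, 1474

# Effective refresh rate = pixel clock / (htotal * vtotal).
refresh_hz = pixel_clock_hz / (htotal * vtotal)
print(round(refresh_hz, 2))  # → 49.95
```

So the “50Hz” mode actually runs at just under 50Hz, which is what fits within single-link DVI bandwidth with reduced blanking.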

Today, Qt 5.3.1 was released along with Qt Creator 3.1.2. Unfortunately, nsf’s EmacsKeys plugin, merged into the Qt Creator trunk a few months ago, was not a part of this release (it should be included in Qt Creator 3.2).

Because the Emacs keybindings are hardwired into my fingers, and I’m using QtCreator for a project at the moment, I spent some time figuring out how to get the plugin built for Qt Creator 3.1.2. This post explains how you too can build it, but, if you’re on Ubuntu 14.04 with Qt 5.3.1 x64, you can just download my binaries and the keymap file (see under requirements).

I’ve tried to set up most of the common Emacs keybindings documented in nsf’s plugin, plus a few more, and I’ve gotten rid of the conflicts. (Thanks nsf for putting me on the right path with that!)

A subset of the Emacs keybindings in the keymap file.

## Requirements

You need to have Qt 5.3.1 installed. I used the open source x64 .run files made available by the qt-project.

For both of the following approaches, download and unpack the qtcreator-emacskeys archive I’ve prepared especially for you.

## The Easy Way: Install Binaries

• Copy the nsf directory from the archive to Qt/Tools/QtCreator/lib/qtcreator/plugins.
• Copy emacskeys.kms from the archive to Qt/Tools/QtCreator/share/qtcreator/schemes.
• Start QtCreator.
• In Help | About | Plugins activate EmacsKeys under Utilities.
• Restart QtCreator.
• Under Tools | Options | Environment | Keyboard click on Import and then select emacskeys.kms.

You can scroll down to the Emacs Keys section and check that my choices work for you.

## The Hard Way: Build ‘em Yourself

Get the Qt Creator source code by typing:

git clone --recursive https://git.gitorious.org/qt-creator/qt-creator.git


Copy the src/plugins/emacskeys directory somewhere else, out of the whole qt-creator tree, because you’re going to revert to the 3.1.2 release version:

cd qt-creator
git checkout tags/v3.1.2


Copy the emacskeys.pro from my archive into the emacskeys directory that you copied out. Edit QTCREATOR_SOURCES to point to the v3.1.2 qt-creator checkout that you prepared above, and IDE_BUILD_TREE to point to your installed QtCreator directory.

In QtCreator, open the .pro file that you’ve just edited, and build the project. If all goes according to plan, this will put the resultant .so file into the correct plugins directory in a subdirectory called nsf.

Now follow the rest of the steps from the easy way above.

## Limitations

I experienced the problem that Alt-W would deselect any existing mark at the press of Alt, so nothing was copied. To get around this, I’ve mapped copy to Esc-W.

Post summary: The level set method is a powerful alternative way to represent N-dimensional surfaces evolving through space.

(This is a significantly extended blog-post version of three slides from my Medical Visualization lecture on image analysis.)

Imagine that you would like to represent a contour in 2D or a surface in 3D, for example to delineate objects in a 2D image or in a 3D volumetric dataset.

Now imagine that for some reason you would also like to have this contour or surface move through space, for example to inflate it, or to shrink it, at the same time dynamically morphing the surface to better fit around the object of interest.

Mathematically, we would have for N dimensions the following representation:

$C(p,t) = \lbrace x_0(p,t), x_1(p,t), \cdots, x_{N-1}(p,t) \rbrace\tag{1}$

where $C(p,t)$ is the collection of contours that you get when you morph or evolve the initial contour $C(p,0)$ over time $t$. At each point in time, each little bit of the contour moves orthogonally to itself (i.e. along its local normal) with a speed $F(C(p,t))$ that is calculated based on that little bit of contour at that point in time.

This morphing of the contour can thus be described as follows:

$\frac{\partial C(p,t)}{\partial t} = F(C(p,t))\vec{n}(C(p,t))\tag{2}$

In other words, the change in the contour relative to the change in time is defined by that contour- and time-dependent speed $F$ and the normal to the contour at that point in space and time.

There are at least two ways of representing such contours, both in motion and at rest.

## Contours as (moving) points in space: Explicit and Lagrangian.

Most programmers would (sooner or later) come up with the idea of using a bunch of points, and the line segments that connect them, to represent 2D contours, and, analogously, dense meshes consisting of triangles and other polygons to represent 3D surfaces, a la OpenGL.

You would be in good company if you then implemented an algorithm whereby for each iteration in time, you iterated over all points and moved each point a small distance $F\times \Delta t$, orthogonally to the contour containing it. Such an algorithm could be called active contours, or snakes, and this way of representing a contour or surface as a collection of points moving through space is often called the Lagrangian formulation.

Now imagine that your contour or surface became quite large. You would need to add new points, or a whole bunch of new triangles. This could cause minor headaches. However, your headaches would grow in severity if your contour or surface were to shrink, and you would at some point need to remove, very carefully, extraneous points, edges or triangles. However, this headache pales in comparison to the one you would get if your surface, due to the object it was trying to delineate, would have to split into multiple surfaces, or later would have to merge back into a single surface.

## Contours as (changing) measurements of the space around them: Implicit and Eulerian.

However, if you were as clever as, say, James Sethian and Stanley Osher, you would decide to sidestep all of that headache and represent your contour implicitly. This can be done by creating a higher-dimensional embedding function or level set function of which the zero level set, or the zero-valued isocontour, is the contour that you’re trying to represent. This is often called the Eulerian formulation, because we’re focusing on specific locations in space as the contour moves through them.

Huh?

(that was what I said when I first read this.)

What’s an isocontour? That’s a line, or a surface (or a hyper-surface in more than 3D) that passes through all locations containing the isovalue. For example, on a contour map you have contour lines going through all positions with the same altitude.

If I were to create a 2D grid, with, at each grid position, the floating point closest distance to a 2D contour (by convention, the distance inside the contour is negative and outside positive), then the contour line at value zero (or the zero level set) of that 2D grid would be exactly the 2D contour!

The 2D grid I’ve described above is known as an embedding function $\phi(x,t)$ of the 2D contour. There are more ways to derive such embedding functions, but the signed distance field (SDF) is a common and practical method.

Let’s summarise what we have up to now: instead of representing a contour as points, or a surface as triangles, we can represent them as, respectively, a 2D or 3D grid of signed distance values.
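To make this concrete, here is how one might build such an embedding function for a 2D circle on a regular grid, using NumPy (an illustrative toy sketch; the function name circle_sdf is mine, not from any library):

```python
import numpy as np

def circle_sdf(n, cx, cy, r):
    # Signed distance to a circle of radius r centred at (cx, cy):
    # negative inside, positive outside, zero exactly on the contour.
    y, x = np.mgrid[0:n, 0:n]
    return np.hypot(x - cx, y - cy) - r

phi = circle_sdf(64, 32, 32, 10)
print(phi[32, 32])  # centre of the circle: -10.0 (deep inside)
print(phi[32, 42])  # a point on the contour itself: 0.0
```

The zero level set of phi is exactly the circle we set out to represent.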

For the general case up above, the contour at fixed time $t_i$ would be:

$C(t_i) = \lbrace x \vert \phi(x,t_i) = 0 \rbrace$

### Moving contours are simply point-wise changes in their embedding functions

Instead of directly evolving the contour, it can be implicitly evolved by incrementally modifying its embedding function. Given the initial embedding function $\phi(x,t=0)$, the contour can be implicitly evolved as follows:

$\frac{\partial \phi(x,t)}{\partial t} = - F(x,t) \lvert \nabla \phi(x,t) \rvert$

where $F(x,t)$ is a scalar speed function, $\nabla$ is the gradient operator and $\lvert \nabla \phi(x,t) \rvert$ is the magnitude of the gradient of the level set function. Note the similarity with the general contour evolution equation 2 above.

In practice, this means that for each iteration, you simply raster scan through the embedding function (2D, 3D or ND grid), and for each grid position you calculate the speed $F(x,t)$, multiply it with the negative gradient magnitude of the embedding function (and the time step $\Delta t$), and then add the result to whatever value you found there.

If you were to extract the zero level set at each iteration, you would see the resultant contour (or surface) deform over time according to the speed function $F$ that you defined.

MAGIC!
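The iteration just described can be sketched in a few lines of NumPy: forward Euler in time, central differences for the gradient. This is a minimal illustration only; production implementations add things like upwind differencing and periodic reinitialisation of the distance field.

```python
import numpy as np

def evolve(phi, F, dt, steps):
    # Evolve the embedding function according to
    #   d(phi)/dt = -F * |grad(phi)|
    # by sweeping over the whole grid (vectorised here) each step.
    for _ in range(steps):
        gy, gx = np.gradient(phi)          # central differences
        grad_mag = np.hypot(gx, gy)
        phi = phi + dt * (-F * grad_mag)   # add the (negative) update
    return phi
```

Extracting the zero level set of the returned grid at each step would show the contour deforming according to $F$.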

### A visual confirmation

The figure below is a signed distance field representing two concentric circles in 2D, or a 2D donut. Note that the values are negative inside the donut, and positive elsewhere (in the center of the image, and towards the limits of the grid).

Let’s have a donut!

For the sake of this exposition, let’s define $F(x,t)=1$, i.e. the donut should get fatter over time. Eventually it will get so fat that the hole in the middle will be closed up.

You can now check by inspection that at each point in the embedding function, or the image, you would have to add a small negative number ($-1$ multiplied by the gradient magnitude). First, positive regions close to the contour will become negative, increasing its size; regions further away will approach zero from above and eventually also become negative, making the donut even fatter. Eventually the region in the middle will become entirely negative, in other words, the hole will close up.

## Conclusion

Representing surfaces implicitly, and specifically as signed distance fields, has numerous advantages.

• Contours always have sub-grid-point resolution.
• Topological changes, such as splitting and merging of N-dimensional objects, are automatically handled.
• Implementation of N-dimensional contour propagation becomes relatively straightforward.
• With this implicit representation, morphing between two N-dimensional contours is a simple point-wise weighted interpolation!
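The last point deserves a one-liner: given two embedding functions for the source and target contours, the in-between contour is simply the zero level set of their point-wise blend. A sketch, assuming both signed distance fields are sampled on the same grid (morph is my name for the helper, not a library function):

```python
import numpy as np

def morph(phi_a, phi_b, alpha):
    # Point-wise weighted interpolation of the two embedding functions;
    # the zero level set of the result is the intermediate contour.
    # alpha = 0 gives back the source contour, alpha = 1 the target.
    return (1.0 - alpha) * phi_a + alpha * phi_b
```

For example, morphing halfway between SDFs of two concentric circles of radii 5 and 15 yields the SDF of a circle of radius 10.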

Although iterating through the embedding function is much simpler than managing a whole bunch of triangles and points moving through space, it can be computationally quite demanding. A number of optimization techniques exist, mostly making use of the fact that one only has to maintain the distance field in a narrow region around the evolving contour. These are called narrow-band techniques.

The Insight Segmentation and Registration ToolKit, or ITK, has a very good implementation of a number of different level set method variations. You can also use the open source DeVIDE system to experiment with level set segmentation and 3D volumetric datasets (it makes use of ITK for this).

(I’m planning to make available my MedVis post-graduate lecture exercises. Let me know in the comments how much I should hurry up.)