If you want to run OpenGL 3.2+ apps in a Windows guest, AVOID Parallels 13 and buy VMWare Fusion 10 instead.

TL;DR: Parallels Desktop 13 only supports OpenGL 3.2 for an extremely limited subset of mostly games. VMWare Fusion 10 has full OpenGL 3.3 support. In my case, this made the difference between being able to work on a VTK-based client project (VMWare Fusion 👍👍) or NOT being able to work on the project (Parallels 👎👎).

I bought a Parallels Desktop Pro 13 subscription to be able to do Linux and Windows development on my MacBook Pro.

Although PD is extremely well done otherwise, they seem to have been dragging their feet with rolling out full OpenGL 3.2 support, as can be seen in a number of threads on their forums, e.g. here, here and here.

For a client project, we are currently working on a cross-platform VTK-based app which targets Windows as its main platform. I was looking forward to using my Parallels Windows 10 guest to test the prototype out. Unfortunately, when trying to run a simple VTK sample I was greeted with this error message:

Warning: In ..\Rendering\OpenGL2\vtkOpenGLRenderWindow.cxx, line 647
vtkWin32OpenGLRenderWindow (0000018AAABA44B0): VTK is designed to work
with OpenGL version 3.2 but it appears it has been given a context
that does not support 3.2. VTK will run in a compatibility mode
designed to work with earlier versions of OpenGL but some features may
not work.

The app then reproducibly crashes hard.
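
For reference, the sample in question was nothing exotic. Below is a minimal sketch along the same lines using VTK's Python bindings (my actual test differed in the details, but any VTK 7+ pipeline built on the OpenGL2 backend behaves the same way); the ReportCapabilities() call at the end is a handy way of seeing exactly which OpenGL renderer and version the guest handed to your app:

import vtk

# Trivial VTK pipeline: a cone rendered in an interactive window.
cone = vtk.vtkConeSource()
cone.SetResolution(32)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(cone.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.SetSize(640, 480)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
# Print the vendor / renderer / version strings of the OpenGL context
# that VTK was actually given by the (virtualised) graphics driver.
print(window.ReportCapabilities())

interactor.Start()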

Further investigation with the OpenGL Extension Viewer showed that there were three different OpenGL renderers: two of them were OpenGL 2.1 capable, and one was 3.2 capable. However, as a user, you can’t decide which app gets which renderer.

Further digging, also with the glewinfo app, revealed that Parallels exposes its OpenGL 3.2 renderer only to a select list of games and apps. See the list at the end of this knowledge base article.

I logged a support issue regarding the OpenGL 3.2+ requirement of VTK 7 and later. The bug was acknowledged, and Parallels confirmed that it was now on their backlog, but they could give no indication of when a fix would become available.

Client projects really can’t wait for this, so based on good reviews of the OpenGL support in VMWare Fusion 10, I made use of their Cyber Monday special to purchase a license for the Pro version.

After installation, I imported the VM from Parallels Desktop:

… after which I was greeted with the VM configuration screen where I could configure the 3D support:

After which I could boot up the same Windows VM and run the VTK sample app without any issues whatsoever:

So there you go: If you need good OpenGL support in your VMs, prefer VMWare Fusion over Parallels Desktop.

Notes on my full-time testing of 7 Dropbox alternatives

Ever since I dropped Dropbox in September of 2013 due to mounting privacy concerns, I’ve been searching for and testing various filesystem syncing solutions to take its place.

This post is a summary of the notes I made during this time.

It’s quite hard finding reviews online that go deeper than the very surface. Most sync software reviews can be summarised as:

Hey, look, here’s a new service that’s a dropbox killer / dropbox alternative / disruptive in the sync space. Wow, you get N gigabytes of free space, that’s more than dropbox. Look, it seems to be syncing these three files I put in there. That’s great. The End.

I hope the notes in this post can fill a tiny little bit of the giant sinkhole left by the hundreds of vacuous reviews following the template above. Over the past two years, I have used seven different Dropbox alternatives on my primary file collections (just under 50G in total). During each period, I actually committed to the relevant syncing system as my main and only syncing tool and kept notes detailing how this went.

TL;DR: In searching for a Dropbox-level solution to synchronise my files between my four computers, I’ve spent some quality time with btsync, syncthing, seafile, CloudStation, SpiderOak, Wuala, Dropbox and unison. This post summarises my experiences. Warning: there is no real winner, but there is a fair amount of hopefully useful information.

Requirements

I need a system that is able to keep my two main working repositories of files in sync across two laptops and two workstations. I have a Synology DS213j low-power home NAS that can also be used as part of syncing solutions that support it.

The two repositories are:

  • sync-1: 13 gigabytes spread over 100 thousand files. This repo changes quite often. (At the start of the adventures described below, this repo was 15G spread over 150 thousand files.)
  • sync-2: 35 gigabytes spread over 80 thousand files. This repo changes less often.

I should be able to close one laptop, and continue working on my desktop on the same files, without having to think too much about it. The client software should support Linux, because that’s what I mostly use. Preferably, the tool should support delta-syncing (owncloud does not do this, for example), deduplication and LAN sync, because I live in a bandwidth-starved part of the world. Importantly, it should be relatively easy to roll back inadvertent changes and deletions.

Finally, an important use case for me is syncing git repos that I’m working on. This means that I want to work on some source code in a git repo, then go home without having to commit and push just because I’m going home, and continue working at home on a different computer, perhaps committing and pushing from there when I’m good and ready with my changes.

bittorrent sync

This peer-to-peer personal syncing solution, also called btsync, is often touted as a great dropbox replacement, and is also the one I started using right after I dropped Dropbox. I only have experience with pre-2.0 btsync.

btsync is really fantastically fast. Multiple gigabytes of files can be spread really really quickly through your mesh network.

However, after upgrading to 1.4 I soon ran into the btsync-simply-refuses-to-sync-and-there’s-nothing-you-can-do-about-it problem also described in this forum topic. This was quite frustrating, to say the least.

I downgraded to 1.3.109, and all was fantastic. It managed to bring my sync-1 repository in sync between 4 nodes in no time at all.

I made sure that all nodes were time-synced using ntp because I had had some snafus with git repositories getting corrupted using btsync.

All was well in the world of syncing, until my Synology, part of my btsync mesh, was down for a few days, whilst all other nodes remained regularly active. When the Synology was switched on again, btsync happily overwrote a newer subdirectory of files with older versions across all nodes.

This last incident, together with the elephant in the room, namely the fact that btsync is closed source and hence quite hard to have audited, or just to be able to fix by myself, led to yet another breakup after some months.

Bye-bye btsync, it’s you, not me.

syncthing

Syncthing is more or less an open source version of btsync.

The author is passionate about this project, it’s run very well, the tool itself has a gorgeous web-interface and the project’s goals are laudable. This is the peer-to-peer syncing tool you really want to succeed!

However, besides the fact that the binary took up a little too much RAM on my Synology (this is forgivable; my Synology only has 512MB), there were two issues preventing me from using this as my primary sync tool:

  • syncthing does not have filesystem monitoring integrated. This means it needs to scan your whole sync repository every so often for changes. This scan was eating a significant amount of CPU cycles on my laptop.
  • My work laptop very often had issues connecting to my synology at home, even after futzing with port-forwarding on my firewall at home.

I’ll probably check in again after some time to see how my favourite peer-to-peer syncing tool is doing!

seafile

Seafile is a fantastic project. In short, it’s sort of a Dropbox clone, but both the server and the client are completely open source. As if that wasn’t enough, it optionally supports end-to-end client-side encryption.

Because I was not yet ready to invest the time and money to set up my own server, and I wanted to evaluate its performance first, I bought a three-month subscription to their commercial service, seacloud, for $30.

Uploading my sync-1 repository (15G and 150 thousand files at the time) took about 45 hours.

Soon after this, I started running into issues.

Firstly, every time I moved with my laptop to a new access point (for example between home and work), it would just sit there reporting that it was “syncing”, when in fact it had merely gotten really confused by the changed network connection. Only stopping and starting the client would get it going again.

Secondly, and this was the deal-breaker, even a minute change to a small text file would result in the seafile client (I tested up to version 4.0.4) using up a whole Core i7 core on my laptop for a few minutes. I logged this as a bug on the seafile github project.

Searching around the net, and reading the comments on the bug I reported, you’ll see that other people are experiencing the same issue.

Unfortunately, this meant that I had to leave seafile on my journey to a suitable syncing solution. I will keep an eye on it, because it does have amazing potential.

If you’re interested in seafile, you will definitely have to read the blog posts of Pat Regan. They are filled with in-depth information and experiences, very much unlike most other syncing tool reviews on the interwebs.

CloudStation

The Synology NAS is a great product. They’re basically efficient little Linux machines with a whole bunch of Synology-written and packaged software to boost your home network’s utility. I have the DS213j.

CloudStation is Synology’s answer to Dropbox.

The out-of-the-box experience is really smooth. Configuring it on the Synology server is well done, whilst the client installs easily on your Linux (or Windows or MacOS) machines.

Uploading my 15G, 150 thousand file (at that point) sync-1 repository took about 40 hours in total. This is with me sitting on the same LAN as the Synology. The poor little thing is just very under-powered.

On my main Linux laptop, I could not get the file manager sync status icon overlays working, and together with Synology support I was not able to get this problem sorted.

I could live with no icon overlays, convenient as they may be, but the deal-breaker came a day or so later when the CloudStation client started acting up.

As I was editing a Python source code file, the client kept deleting the file out from under me and making it reappear as filename_hostname_date_Conflict.ext. The first time, I simply renamed the file back, thinking it was some once-off sync issue, but CloudStation stubbornly kept on deleting my file and creating the conflict-named one.

Synology, I love you, but CloudStation is OUT!

SpiderOak

This is probably the most well-known Dropbox alternative that supports end-to-end encryption. It’s even been name-dropped by Edward Snowden, which is high praise in these circles.

Although SpiderOak as a company has a solid reputation, and has released a number of open source encryption-related software packages, the SpiderOak client itself is closed-source. This means we have nothing more than SpiderOak’s word that their client is indeed performing the end-to-end encryption in a secure way, and is not adding some backdoor key to every packet passing from your computer to their servers. I do think that their service has a higher chance of not being snoopable than, say, Dropbox.

In any case, I signed up for the new (at the start of 2015) $12 / month 1 terabyte package and again started uploading my sync-1 repo. SpiderOak was the slowest of the bunch: It took about 60 hours to upload the complete repository.

SpiderOak is significantly more flexible than Dropbox. You can configure it to back up any number of directory trees on your computers. Once a tree has been backed up, it becomes part of your SpiderOak cluster and is visible from all other nodes. To sync directories on multiple computers, you first have to set up backups on all computers for the tree in question; only once all backups are complete can you configure a sync involving any number of those backed-up directories.

SO also supports LAN sync, and has been designed from the start to support per-user de-duplication of file blocks, even though the file blocks are encrypted. Furthermore, they’ve also managed rsync-like efficient file transfers with encrypted blocks. Pretty nifty!

After having completed the backup and sync setup of sync-1 on two laptops and two workstations, I was quite happy with SpiderOak’s flexibility, and the apparent security of the software, so I decided to add sync-2 to the mix, but only for two of the computers.

After the long process of getting everything synced up, and starting to think that SO was going to be The One, I started running into issues. Firstly, SO loves your RAM. At one point, the client was taking up just under 1GB of RSS on my main Linux laptop. Ouch.

I could probably have learned to live with that, were it not for the fact that the client is just extremely slow to pick up new changes on the filesystem, and especially so when adding new computers to your sync network (that syndication process, anyone?). At the end of the work day, I would often have to wait quite a while for SO to sync all the last changes I had made so I could go home.

Perhaps more insidious than that was the behaviour where SO would have huge but silent problems getting syncing going again after a laptop resume. I would get home, resume my laptop, and after 30 minutes SO would still show no activity whatsoever: not picking up new changes from the SO servers, nor picking up changes on my laptop. I would have to kill the client and start it up again.

All in all, SO is probably the service that I wanted the most to succeed, but in the end, these opaque anomalous behaviours and the continuous waiting killed it for me. Sorry SO, I really want to see this work…

Wuala

After the SpiderOak adventure, I briefly tried the other end-to-end encrypted file syncing service, this one from Switzerland. It’s called Wuala, and it belongs to La Cie, which belongs to Seagate, a US company, so we can unfortunately not count Wuala’s Swiss origins as a privacy plus.

In any case, Wuala does not have a free tier anymore, so I ponied up a few euros for the 20G package. I was planning to start by testing just my sync-1 repository.

When it started to upload my files, it surprised me by maxing out my admittedly puny, almost 1 Mbit/s upstream. At work, where we are fortunate enough to have 20 Mbit/s symmetric optic fibre, it managed to get up to 3 to 4 Mbit/s, which is quite good considering that that fibre is shared by a number of engineers.

I was also quite impressed by the Java client application. It had similar flexibility to SpiderOak, but was much more user-friendly. Also, the real-time per-file sync feedback (only in the app itself) was significantly more useful than SpiderOak’s log-style reporting of syncing activity.

As Wuala was uploading from my laptop, I thought: Hey, let’s install Wuala on my workstation as well. The sync-1 file tree there is identical to that on my laptop. Any self-respecting file sync service should be able to handle this in its stride, and perhaps with two computers encrypting and uploading I could get slightly higher throughput.

Well, it turns out that Wuala is quite lacking in this regard. As soon as I started up the client on my workstation, on the already identical file repos, the clients on both laptop and workstation started throwing up numerous error dialogs about file conflicts.

I was quite surprised by this. I expected a mature syncing service like Wuala to handle this in its stride. It has all of the file block hashes, so at the very least it should realise that a bunch of blocks already exist on its servers, skip the already present blocks, and just continue with its merry life.

At this point, I was not in the mood to continue with Wuala, and so I said goodbye.

Dropbox remission

After Wuala, and almost two years of searching, I was ready to give up. Finding a tool or system that would satisfy all my requirements had already cost me too much time. I was ready to tell the NSA: “Oh go ahead, run your algorithms over all my data. Read my files. Do that thing that you do, I just want to SYNC!”

I signed up for the Dropbox 1 TB package and started uploading sync-1, now whittled down to about 100 thousand files and 12G.

Wow, it was just as fast and as pretty as I remembered it!

However, I soon came across files that simply refused to synchronise. Dropbox would get stuck at “Uploading N files…” for hours, and N would remain constant. Finally I had to resort to uploading these files through the web interface. This cost me time (that thing I was trying to save) but got the job done.

After one and a half weeks, my Linux workstation, which had been inactive for a few days, was switched on again. All of a sudden, there was massive file activity on my laptop.

Hey, a mass deletion of all my source code!

I know that source code was not deleted on the Linux workstation. I don’t know how this deletion could have been triggered, but it was.

No problem, right? Let’s just go to the dropbox events interface and undelete those files. Wait, what? The events interface refuses to work, instead just reporting “There was a problem completing this request”:

ARGH!

I can undelete the files via the file browsing interface, but they’re spread out and that would take ages. Fortunately I had a good backup of the whole sync-1 repo so I could restore from there.

Dropbox support has been really helpful and said they would undo the mass deletion events. However, it’s now four days later; my files have not been restored, and the dropbox events interface is still broken.

If this had been my primary file store and I didn’t have the great backups I did, this would have been a complete nightmare. As it stands, I’ve unlinked and uninstalled dropbox from everywhere, but I would still like to have my files restored, and the mystery of the events interface solved.

My confidence in dropbox (it NEVER disappointed me in the years before 2013) has taken a severe knock. Add to that its security issues, and we’ll have to see what happens to our relationship in the time to come.

Unison 2.48.3

In the meantime, I’ve returned to my very old friend unison. I used this in the years before dropbox to keep my stuff synchronised.

Furthermore, the unison boys and girls have been busy in the meantime. In version 2.48.3, there’s even a neat filesystem monitor with which you can set up lightning-fast automatic bi-directional file syncing. It’s all very nerd-DIY, but for some of us that’s a plus.

unison is terribly efficient at transferring changes to and fro. It makes use of the rsync algorithm, with added niceties like duplicate-detection shortcuts. I’ve now set it up in a star topology with my Synology at the centre, with daily incremental backups on the Synology so that I have at least some form of roll-back and deletion recovery.
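
In case you want to try the same setup: each file tree is described by a small profile in ~/.unison/, and the same kind of profile lives on every machine, always pointing at the Synology as the second root (this assumes unison is also installed on the Synology side). The sketch below is roughly what mine look like; the hostname and paths are made up, and the repeat = watch line, which hooks into the new filesystem monitor via the unison-fsmonitor helper, only applies to unattended text-mode runs:

# ~/.unison/sync-1.prf : example profile, adjust the roots to your own setup
root = /home/me/sync-1
root = ssh://synology//volume1/sync-1

# propagate non-conflicting changes without asking questions
auto = true
batch = true

# keep a log of every change unison makes
log = true

# only for unattended text-mode runs: watch the filesystem and sync continuously
repeat = watch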

At the moment, because I’m still a little shaken by the mass deletion event, I have the unison GUI running. I simply press A to have it scan and sync any specific file tree that I’ve set up. It takes about a second for sync-1, and I get a list of the changes it makes so I can check that it’s not nuking half of my files.

unison makes me work harder, but of all of the solutions discussed in this post, it gives me the most control.

Other sync systems

This is for sync systems that are still on my list to try, or that I used very briefly.

git-annex

Git-based open source syncing system. By design, unable to synchronise git repositories. The author does not feel that this is an issue.

After a healthy bout of laughing, I nuked it from all of my computers.

mega

Mega does end-to-end encryption, and its client is open source. I would like to try this at some point, but at the moment I have to focus on, you know, actually working.

Parting words

There you have it: A whole bunch of words about a whole bunch of personal syncing solutions. I have not yet found The One, although at this moment unison is doing the job quite well, albeit with some clunkiness.

I would love to hear your thoughts on and your motivation of your favourite syncing tool in the comments below, or over at Hacker News!

Syntax-highlighting markdown fenced code blocks in Emacs

The syntax-highlighted fenced code blocks in GitHub flavored markdown, or GFM, are a beautiful and useful invention. One starts a code block with three or more backticks or tildes, followed by the name of the language, and then proceeds to show one’s code, which, at least on GitHub, is then syntax highlighted.

In other words, something like this in your markdown:

```python
def computer_says(no):
    print("computer says %s" % (no,))
```

Would become this in the preview:

def computer_says(no):
    print("computer says %s" % (no,))

When I’m editing my markdown, I’d obviously like to see this language-specific highlighting interspersed with my normal markdown highlighting. SublimeText’s MarkdownEditing package does a superb job of this, but of course we’re currently rediscovering the universe that is Emacs.

DuckDuckGoing around, we run into at least two Emacs packages that do this: mmm-mode and polymode. We decided to try out both of them, finally ending up (quite happy) with the result shown in this screenshot:

Editing this post with emacs, markdown-mode and mmm-mode

polymode

After git cloning polymode into my ~/.emacs.d, I installed it according to the instructions, by adding the following to my ~/.emacs.d/init.el:

(setq load-path
      (append '("~/.emacs.d/polymode/" "~/.emacs.d/polymode/modes")
              load-path))

(require 'poly-R)
(require 'poly-markdown)

Initially I had just the poly-markdown require, but that yielded an empty variable error. With this configuration, opening any .md file should activate the poly-markdown mode.

poly-markdown creates an indirect buffer for every code block that you create. This means that if you switch buffers using, for example, C-x C-b, you’ll see, for each file you’re editing with poly-markdown, as many extra buffers as there are fenced code blocks in that file.

When you start a new fenced code block, polymode picks this up automatically. When you load a new colour theme, the code blocks don’t always pick it up immediately, but this can be lived with.

mmm-mode

This all worked in my case, but I wanted to try out mmm-mode as well. While not as hip and generic as polymode, it has been around for quite a while longer, and has seen much testing.

I installed mmm-mode from ELPA with M-x package-install RET mmm-mode (yes, it’s that easy) and then changed my .emacs.d/init.el as follows:

(require 'mmm-mode)

(mmm-add-classes
 '((markdown-python
    :submode python-mode
    :face mmm-declaration-submode-face
    :front "^```python[\n\r]+"
    :back "^```$")))

(setq mmm-global-mode 't)
(mmm-add-mode-ext-class 'markdown-mode nil 'markdown-python)

I’m only showing Python here, but I’ve defined classes and added them to markdown-mode for JavaScript and emacs-lisp as well. There’s probably a better way to define the extra classes for a whole list of languages, but my (lack of) elisp skills doesn’t know about it yet.

conclusion

I definitely liked mmm-mode better. It doesn’t create all of those indirect buffers (which do affect my workflow). When you create a new fenced code block, it doesn’t highlight it until you do e.g. M-x mmm-parse-buffer, but I like this sort of determinism. Other than that, mmm-mode felt generally more stable and responsive than polymode. This is no surprise; although polymode shows great potential, it’s still in alpha. Because I need something that works now, I’ll hold onto my mmm-mode for a while longer.

Acer V3-571G FullHD IPS: Superb price/performance Linux development laptop

I recently needed a new mobile development workstation. My main requirements were that it should have at least a Full HD (1920×1080) IPS (in-plane switching) screen and a good keyboard, and that it should be able to run Linux, preferably Ubuntu, as its primary operating system.

After experimenting with a screenshot of my 1920×1080 desktop workstation running IntelliJ Idea 12 (my IDE of choice) on an Asus UX31A with 13″ Full HD IPS screen, I realised that I would have to go with a larger screen. The Asus UX52VS with 15.6″ IPS also looked like a good bet, but there were no reviews available yet, it was not clear whether the 4GB RAM and hybrid HDD (large spindle drive, 24GB SSD cache) would be easily upgradable to full SSD, and the €1200 price tag gave me further pause.

I finally stumbled upon this review of the Acer V3 571G with Full HD IPS, which mostly expressed surprise that a laptop with such a screen could be sold at entry-level prices. I subsequently purchased model number V3-571G-73638G75Maii, with Full HD IPS (this is the LP156WF4-SPB1 LED IPS matte panel by LG Philips), an Intel i7 3632QM (a real mobile quad-core; many mobile i7s are dual core), NVIDIA GeForce 710M with 2GB VRAM (Optimus graphics switching), 8GB RAM and a 750G HDD, all for €799. I also purchased an Intel 520 240G SSD, a really fast SSD with built-in hardware encryption, to replace the main HDD, for €200.

Photo courtesy of notebookcheck. Do see their great review (linked in the post).

Upgrading HDD and RAM

My first impression of the laptop was that in reality it does not look quite as cheap as the photos might make one believe. I was pleasantly surprised when I set out to replace the HDD with the Intel SSD: after removing two screws on the underside, you can take off a panel behind which the hard drive and RAM are easily upgraded:

Upgrading the hard drive and ram has been made straight-forward, as it should be.

Configuring Linux: Ubuntu 12.04.2

After the SSD upgrade, installing Ubuntu 12.04.2 went mostly without a hitch. 12.04.2 comes with the LTSEnablementStack, backports of the Quantal kernel (3.5) and the new X stack to support more hardware. This caused some dependency problems when I installed bumblebee (Linux support for NVIDIA Optimus graphics switching), but this problem was almost immediately fixed by the ubuntu-x-swat team when I reported it on #freenode, so you should be fine. Just in case you need a reminder, bumblebee is installed and configured as follows:

sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
sudo add-apt-repository ppa:bumblebee/stable
sudo apt-get update
sudo apt-get install bumblebee bumblebee-nvidia primus

If you want to run something on the NVIDIA GPU, just do “primusrun command” or “optirun command”, where the former is preferred for performance reasons.
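
For example, a quick way to check that the discrete GPU is actually being used (glxgears and glxinfo are in the mesa-utils package):

primusrun glxgears
optirun glxinfo | grep "OpenGL renderer"

The second command should report the NVIDIA chip rather than the integrated Intel graphics.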

Other than that, make sure you have GRUB_CMDLINE_LINUX_DEFAULT="acpi_backlight=vendor acpi_osi=" in your /etc/default/grub (run update-grub and reboot after you change this) to get the screen brightness hotkeys working. Unfortunately, the brightness notifier itself does not work, but this is not a problem.

Weak point: BIOS ATA security support

After a few mails to-and-fro with Acer tech support (they do respond, mostly) and two nights of experiments, I can now confirm that the HDD password implementation on the laptop is worth less than nothing. In the spirit of full disclosure, this is the Insyde H2O BIOS implementation of HDD  passwords. This BIOS is used on many modern laptops besides Acer.

For many of the current self-encrypting drives, BIOS support of the ATA security feature set is important. It should be possible to set both master and user passwords, and, more importantly, the BIOS should ask for the user password at bootup, at which point it should pass the user-entered password, unchanged, to the hard drive via ATA commands. Setting the HDD password on the Acer does none of the above. Instead, it sets a fixed password that has nothing to do with the user password. At bootup, it asks for an HDD password; however, if you enter this incorrectly three times, you get a hash code. This hash code can be fed to a simple Python script to generate a master unlock password with which the HDD can be trivially unlocked. I confirmed experimentally that this works.

I also experimented with setting the ATA security user password to a known value using hdparm from a Linux boot USB. The Insyde H2O BIOS unfortunately does not fall back to sane behaviour.

To summarise: the Acer BIOS can’t be used to manage ATA security. Because it is important that my SSD is fully encrypted, I now boot the laptop with a USB stick, unlock the drive with the real ATA user password using hdparm, and then warm-boot back into the SSD. I perceive this as a relatively small price to pay for reasonable and super fast data security (my Intel SSD does 500 MB/s+ reads and writes, all with AES-128 encryption). Remember that software encryption has a severe performance and durability impact on all SSDs, especially those using compressing controllers such as the SandForce, but also on SSDs that employ no compression at all. AES-NI is really not the issue here; the problem lies with the performance and durability optimisations that modern SSD controllers perform.
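
To give an idea of the boot-USB dance, it comes down to a couple of hdparm invocations. The following is just a sketch: the device name and password are placeholders, and the ATA security commands deserve a great deal of care.

# one-time, from the live USB: set the ATA user password (this enables drive locking)
sudo hdparm --user-master u --security-set-pass CorrectHorseBattery /dev/sda

# at every cold boot, from the live USB: unlock the drive with the real user password
sudo hdparm --user-master u --security-unlock CorrectHorseBattery /dev/sda

# then warm-boot back into the now-unlocked SSD
sudo reboot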

Verdict

The matte Full HD IPS screen on this laptop is a pleasure to use. I find the chiclet keyboard above average for programming. It’s not as rigid as the keyboard on my Samsung NP300V3a, but it’s entirely acceptable. The combination of an Ivy Bridge i7 3632QM quad core, an Intel 520 SSD and 8GB of 1600MHz DDR3 RAM makes for a laptop that feels super responsive. Taken together with the solid Ubuntu support and the €799 + €200 price tag, and in spite of the lack of ATA security support in the BIOS, I can only highly recommend this machine to any developer looking for a powerful Linux laptop on a budget.