Notes on my full-time testing of 7 Dropbox alternatives

Ever since I dropped Dropbox in September of 2013 due to mounting privacy concerns, I’ve been searching for and testing various filesystem syncing solutions to take its place.

This post is a summary of the notes I made during this time.

It’s quite hard to find reviews online that go deeper than the very surface. Most sync software reviews can be summarised as:

Hey, look, here’s a new service that’s a dropbox killer / dropbox alternative / disruptive force in the sync space. Wow, you get N gigabytes of free space, that’s more than dropbox. Look, it seems to be syncing these three files I put in there. That’s great. The End.

I hope the notes in this post can fill a tiny little bit of the giant sinkhole left by the hundreds of vacuous reviews following the template above. Over the past two years, I have used seven different Dropbox alternatives on my primary file collections (just under 50G in total). During each period, I actually committed to the relevant syncing system as my main and only syncing tool and kept notes detailing how this went.

TL;DR In searching for a Dropbox-level solution to synchronise my files between my four computers, I’ve spent some quality time with btsync, syncthing, seafile, CloudStation, SpiderOak, Wuala, Dropbox and unison. This post summarises my experiences. Warning: there is no real winner, but there is a fair amount of hopefully useful information.

Requirements

I need a system that is able to keep my two main working repositories of files in sync across two laptops and two workstations. I have a Synology DS213j low-power home NAS that can also be used as part of syncing solutions that support it.

The two repositories are:

  • sync-1: 13 gigabytes spread over 100 thousand files. This repo changes quite often. (At the start of the adventures described below, this repo was 15G spread over 150 thousand files.)
  • sync-2: 35 gigabytes spread over 80 thousand files. This repo changes less often.

I should be able to close one laptop, and continue working on my desktop on the same files, without having to think too much about it. The client software should support Linux, because that’s what I mostly use. Preferably, the tool should support delta-syncing (owncloud does not do this, for example), deduplication and LAN sync, because I live in a bandwidth-starved part of the world. Importantly, it should be relatively easy to roll back inadvertent changes and deletions.
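Since delta-syncing comes up repeatedly below, here’s a toy sketch of the basic idea (the function names are mine, not any tool’s API): split a file into fixed-size blocks, hash each block, and transfer only the blocks whose hashes differ from what the other side already has. Real implementations such as the rsync algorithm use rolling checksums so they also cope with insertions that shift block boundaries; this fixed-block version is a deliberate simplification.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size so the example is easy to follow


def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(old: bytes, new: bytes) -> list:
    """Return indices of blocks that actually need to be transferred."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]


old = b"aaaabbbbccccdddd"   # four blocks
new = b"aaaaBBBBccccdddd"   # only the second block changed
print(changed_blocks(old, new))  # -> [1]
```

With a one-line edit to a multi-megabyte file, a delta-syncing client ships only the affected blocks instead of the whole file, which on an almost 1 Mbit/s upstream is the difference between seconds and many minutes.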

Finally, an important use case for me is syncing git repos that I’m working on. This means that I want to work on some source code in a git repo, then go home without having to commit and push just because I’m going home, and continue working at home on a different computer, perhaps committing and pushing from there when I’m good and ready with my changes.

bittorrent sync

This peer-to-peer personal syncing solution, also called btsync, is often touted as a great dropbox replacement, and is also the one I started using right after I dropped Dropbox. I only have experience with pre-2.0 btsync.

btsync is really fantastically fast. Multiple gigabytes of files can be spread really really quickly through your mesh network.

However, after upgrading to 1.4, I soon ran into the btsync-simply-refuses-to-sync-and-there’s-nothing-you-can-do-about-it problem also described in this forum topic. This was quite frustrating, to say the least.

I downgraded to 1.3.109, and all was fantastic. It managed to bring my sync-1 repository in sync between 4 nodes in no time at all.

I made sure that all nodes were time-synced using ntp because I had had some snafus with git repositories getting corrupted using btsync.

All was well in the world of syncing, until my Synology, part of my btsync mesh, was down for a few days, whilst all other nodes remained regularly active. When the Synology was switched on again, btsync happily overwrote a newer subdirectory of files with older versions across all nodes.

This last incident, together with the elephant in the room, namely the fact that btsync is closed source and hence quite hard to have audited, or to fix myself, led to yet another breakup after some months.

Bye-bye btsync, it’s you, not me.

syncthing

Syncthing is more or less an open source version of btsync.

The author is passionate about this project, it’s run very well, the tool itself has a gorgeous web-interface and the project’s goals are laudable. This is the peer-to-peer syncing tool you really want to succeed!

However, besides the fact that the binary took up a little too much RAM on my Synology (this is forgivable, my synology only has 512MB), there were two issues preventing me from using this as my primary sync tool:

  • syncthing does not have filesystem monitoring integrated. This means it needs to scan your whole sync repository every so often for changes. This scan was eating a significant amount of CPU cycles on my laptop.
  • My work laptop very often had issues connecting to my synology at home, even after futzing with port-forwarding on my firewall at home.

I’ll probably check in again after some time to see how my favourite peer-to-peer syncing tool is doing!

seafile

Seafile is a fantastic project. In short, it’s sort of a Dropbox clone, but both the server and the client are completely open source. As if that wasn’t enough, it optionally supports end-to-end client-side encryption.

Because I was not yet ready to invest time and money to set up my own server, and I wanted to evaluate its performance first, I bought a three month subscription to their commercial service, seacloud, for $30.

Uploading my sync-1 repository (15G and 150 thousand files at the time) took about 45 hours.

Soon after this, I started running into issues.

Firstly, every time I moved with my laptop to a new access point (for example between home and work), it would just sit there reporting that it was “syncing”, when in fact it had merely gotten really confused by the changed network connection. Only stopping and starting the client would get it going again.

Secondly, and this was the deal-breaker, even a minute change to a small text file would result in the seafile client (I tested up to version 4.0.4) using up a whole Core i7 core on my laptop for a few minutes. I logged this as a bug on the seafile github project.

Searching around the net, and reading the comments on the bug I reported, you’ll see that other people are experiencing this issue too.

Unfortunately, this meant that I had to leave seafile on my journey to a suitable syncing solution. I will keep an eye on it, because it does have amazing potential.

If you’re interested in seafile, you will definitely have to read the blog posts of Pat Regan. They are filled with in-depth information and experiences, very much unlike most other syncing tool reviews on the interwebs.

CloudStation

The Synology NAS is a great product. They’re basically efficient little Linux machines with a whole bunch of Synology-written and packaged software to boost your home network’s utility. I have the DS213j.

CloudStation is Synology’s answer to Dropbox.

The out-of-the-box experience is really smooth. Configuring it on the Synology server is well done, whilst the client installs easily on your Linux (or Windows or MacOS) machines.

Uploading my 15G, 150 thousand file (at that point) sync-1 repository took about 40 hours in total. This is with me sitting on the same LAN as the Synology. The poor little thing is just very under-powered.

On my main Linux laptop, I could not get the file manager sync status icon overlays working, and together with Synology support I was not able to get this problem sorted.

I could live with no icon overlays, convenient as they may be, but the deal-breaker came a day or so later, when the CloudStation client started acting up.

As I was editing a Python source code file, the client kept deleting the file as I was working on it, making it reappear as filename_hostname_date_Conflict.ext. The first time, I simply renamed this file back, thinking it was some once-off issue with the sync, but CloudStation stubbornly kept on deleting my file and creating the conflict-named one.

Synology, I love you, but CloudStation is OUT!

SpiderOak

This is probably the most well-known Dropbox alternative that supports end-to-end encryption. It’s even been name-dropped by Edward Snowden, which is high praise in these circles.

Although SpiderOak as a company has a solid reputation, and has released a number of open source encryption-related software packages, the SpiderOak client itself is closed-source. This means we have nothing more than SpiderOak’s word that their client is indeed performing the end-to-end encryption in a secure way, and is not adding some backdoor key to every packet passing from your computer to their servers. I do think that their service has a higher chance of not being snoopable than, say, Dropbox.

In any case, I signed up for the new (at the start of 2015) $12 / month 1 terabyte package and again started uploading my sync-1 repo. SpiderOak was the slowest of the bunch: It took about 60 hours to upload the complete repository.

SpiderOak is significantly more flexible than Dropbox. You can configure it to back up any number of directory trees on your computers. Once a tree has been backed up, it becomes part of your SpiderOak cluster and is visible from all other nodes. To sync directories across multiple computers, you first have to set up backup on all computers for the tree in question; only once all backups are complete can you configure a sync involving any number of those backed-up directories.

SO also supports LAN sync, and has been designed from the start to support per-user de-duplication of file blocks, even though the file blocks are encrypted. Furthermore, they’ve also managed an rsync-like efficient file transfer with encrypted blocks. Pretty nifty!
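To give an intuition for how de-duplication of encrypted blocks is even possible, here’s a toy illustration (emphatically NOT real cryptography, and not SpiderOak’s actual scheme, which is per-user and far more sophisticated) of the content-derived-key trick: if each block’s key is derived from the block’s own content, identical plaintext blocks always produce identical ciphertext blocks, so the server can de-duplicate data it cannot read.

```python
import hashlib


def convergent_encrypt(block: bytes) -> bytes:
    """Toy convergent encryption: key is derived from the block itself."""
    key = hashlib.sha256(block).digest()  # content-derived key
    # toy XOR "cipher" purely to show the dedup property; real systems
    # would use a proper cipher such as AES with this derived key
    stream = (key * (len(block) // len(key) + 1))[:len(block)]
    return bytes(p ^ s for p, s in zip(block, stream))


a = convergent_encrypt(b"same block contents")
b = convergent_encrypt(b"same block contents")
c = convergent_encrypt(b"different contents!")
print(a == b, a == c)  # -> True False
```

The key property is that encryption is deterministic in the content, so the server sees duplicate ciphertext blocks without ever learning what is in them.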

After having completed the backup and sync setup of sync-1 on two laptops and two workstations, I was quite happy with SpiderOak’s flexibility, and the apparent security of the software, so I decided to add sync-2 to the mix, but only for two of the computers.

After the long process of getting everything synced up and starting to think that SO was going be The One, I started running into issues. Firstly, SO loves your RAM. At one point, the client was taking up just under 1GB of RSS on my main Linux laptop. Ouch.

I could probably have learned to live with that, were it not for the fact that the client is just extremely slow to pick up new changes on the filesystem, and especially so when adding new computers to your sync network (that syndication process, anyone?). At the end of the work day, I would often have to wait quite a while for SO to sync all the last changes I had made so I could go home.

Perhaps more insidious was the behaviour that SO would have huge but silent problems with starting to sync again after laptop resume. I would get home, resume my laptop, and after 30 minutes SO would still show no activity whatsoever: no picking up of new changes from the SO servers, and no picking up of changes on my laptop. I would have to kill the client and start it up again.

All in all, SO is probably the service that I wanted the most to succeed, but in the end, these opaque anomalous behaviours and the continuous waiting killed it for me. Sorry SO, I really want to see this work…

Wuala

After the SpiderOak adventure, I briefly tried the other end-to-end encrypted file syncing service, this one from Switzerland. It’s called Wuala, and it belongs to LaCie, which belongs to Seagate, a US company, so we can unfortunately not count Wuala’s Swiss origins as a privacy plus.

In any case, Wuala does not have a free tier anymore, so I ponied up a few euros for the 20G package. I was planning to start by testing just my sync-1 repository.

When Wuala started uploading my files, it surprised me by maxing out my admittedly puny almost 1 Mbit/s upstream. At work, where we are fortunate enough to have 20 Mbit/s symmetric optic fibre, it managed to get up to 3 to 4 Mbit/s, which is quite good considering that that fibre is shared by a number of engineers.

I was also quite impressed by the Java client application. It had similar flexibility to SpiderOak, but was much more user-friendly. Also, the real-time per-file sync feedback (only in the app itself) was significantly more useful than SpiderOak’s log-style reporting of syncing activity.

As Wuala was uploading from my laptop, I thought: Hey, let’s install Wuala on my workstation as well. The sync-1 file tree there is identical to that on my laptop. Any self-respecting file sync service should be able to handle this in its stride, and perhaps with two computers encrypting and uploading I could get slightly higher throughput.

Well, it turns out that Wuala is quite lacking in this regard. As soon as I started up the client on my workstation, on the already identical file repos, the clients on both laptop and workstation started throwing up numerous error dialogs about file conflicts.

I was quite surprised by this. I expected a mature syncing service like Wuala to handle this in its stride. It has all of the file block hashes, so at the very least it should realise that a bunch of blocks already exist on its servers, skip the already present blocks, and just continue with its merry life.

At this point, I was not in the mood to continue with Wuala, and so I said good bye.

Dropbox remission

After Wuala, and almost two years of searching, I was ready to give up. Finding a tool or system that would satisfy all my requirements had already cost me too much time. I was ready to tell the NSA: “Oh go ahead, run your algorithms over all my data. Read my files. Do that thing that you do, I just want to SYNC!”

I signed up for the Dropbox 1 TB package and started uploading sync-1, now whittled down to about 100 thousand files and 12G.

Wow, it was just as fast and as pretty as I remembered it!

However, I soon came across files that simply refused to synchronise. Dropbox would get stuck at “Uploading N files…” for hours, and N would remain constant. Finally I had to resort to uploading these files through the web interface. This cost me time (that thing I was trying to save) but got the job done.

After one and a half weeks, my Linux workstation, which had been inactive for a few days, was switched on again for the first time. All of a sudden: massive file activity on my laptop.

Hey, a mass deletion of all my source code!

I know that source code was not deleted on the Linux workstation. I don’t know how this deletion could have been triggered, but it was.

No problem, right? Let’s just go to the dropbox events interface, and undelete those files. Wait, what? The events interface refuses to work, instead just reporting “There was a problem completing this request”.

ARGH!

I can undelete the files via the file browsing interface, but they’re spread out and that would take ages. Fortunately I had a good backup of the whole sync-1 repo so I could restore from there.

Dropbox support has been really helpful and said they would undo the mass deletion events. However, it’s now four days later; my files have not been restored, and the dropbox events interface is still broken.

If this had been my primary file store and I didn’t have the great backups I did, this would have been a complete nightmare. As it stands, I’ve unlinked and uninstalled dropbox from everywhere, but I would still like to have my files restored, and the mystery of the events interface solved.

My confidence in dropbox (it NEVER disappointed me in the years before 2013) has taken a severe knock. Added to its security issues, we’ll have to see what happens to our relationship in the coming time.

Unison 2.48.3

In the meantime, I’ve returned to my very old friend unison. I used this in the years before dropbox to keep my stuff synchronised.

Furthermore, the unison boys and girls have been busy in the meantime. In version 2.48.3, there’s even a neat filesystem monitor with which you can set up lightning-fast automatic bi-directional file syncing. It’s all very nerd-DIY, but for some of us that’s a plus.

unison is terribly efficient at transferring changes to and fro. It makes use of the rsync algorithm, with added niceties like duplicate detection shortcuts. I’ve now set it up in a star topology with my Synology. I’ve set up daily incremental backups on the Synology so that I have at least some form of roll-back and deletion recovery.
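For the curious, a unison profile for this kind of star setup looks something like the sketch below. The hostname and paths are made up for illustration; `repeat = watch` is the preference that enables the 2.48 filesystem-monitoring mode (it requires the unison-fsmonitor helper to be on your path).

```
# ~/.unison/sync-1.prf -- illustrative sketch only
root = /home/me/sync-1
root = ssh://synology//volume1/sync-1

# use the 2.48+ filesystem monitor for near-instant bi-directional sync
repeat = watch

# propagate non-conflicting changes automatically, without prompting
auto = true
batch = true

# skip files that are cheap to regenerate
ignore = Name *.pyc
```

You then run `unison sync-1` once per profile and leave it running; without `repeat = watch` the same profile works for manual one-shot syncs.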

At the moment, because I’m still a little shaken by the mass deletion event, I have the unison GUI running. I simply press A to have it scan and sync any specific file tree that I’ve setup. It takes about a second for sync-1, and I get a list of the changes it makes so I can check that it’s not nuking half of my files.

unison makes me work harder, but of all of the solutions discussed in this post, it gives me the most control.

Other sync systems

This is for sync systems that are still on my list to try, or that I used very briefly.

git-annex

Git-based open source syncing system. By design, unable to synchronise git repositories. The author does not feel that this is an issue.

After a healthy bout of laughing, I nuked it from all of my computers.

mega

Mega does end-to-end encryption, and its client is open source. I would like to try this at some point, but at the moment I have to focus on, you know, actually working.

Parting words

There you have it: A whole bunch of words about a whole bunch of personal syncing solutions. I have not yet found The One, although at this moment unison is doing the job quite well, albeit with some clunkiness.

I would love to hear your thoughts on and your motivation of your favourite syncing tool in the comments below, or over at Hacker News!

44 thoughts on “Notes on my full-time testing of 7 Dropbox alternatives”

  1. > Mega does end-to-end encryption, and its client is open source.
    > I would like to try this at some point, but at the moment I have
    > to focus on, you know, actually working.

    Mega works fantastically.

    For me, it has replaced Dropbox on both my laptop (running Ubuntu) and my Android Phone.

    It does work, and I think you should review it and add it to this review.

    1. I just tried the Mega sync client (with Nautilus icon overlays!) and the Chrome extension. It’s all extremely slick, and the uploads are really fast, BUT unfortunately Mega does not support delta syncing.

      As you can see here for example https://www.facebook.com/MEGAprivacy/posts/761108923985179 and as I’ve just experimentally confirmed, making a few byte change in a multi-megabyte file results in the whole file being uploaded all over again.

      If they ever get around to implementing this (SpiderOak does, together with block-based encryption) I will definitely test with my complete file repositories.

  2. I’m surprised not to see Jottacloud in amongst that lot. It’s pretty much an exact clone of Dropbox as regards functionality, but is based in Norway and subject to their strict privacy laws. The company also runs their infrastructure on renewable energy [if that’s a consideration for you].

    I’ve been a happy user since the Dropbox/Condoleeza Rice scandal broke. Admittedly there were a few initial hiccups with syncing when the service was new, but it’s come on in leaps and bounds since and now works for me just as flawlessly as Dropbox ever used to.

DISCLAIMER: The link above is my referral link which [a la Dropbox] earns extra storage for both new customers and myself. If you want the ‘untainted’ version, just go to jottacloud.com.

1. I was in a meeting with Jottacloud for work about a year and a half ago, and asked about a Linux client at that point. The answer was pretty much that they were “considering” it, so it was at least not firmly on the road map at that time.

FWIW, Jottacloud also powers a lot of ISP-branded cloud solutions here in Norway, and probably in other countries too. Both TeleNor Min Sky and Get Sky are Jottacloud on the backend.

        I tried running it for a while personally, but it was always getting conflicts when I was editing and saving files on multiple computers. Say what you will about Dropbox, but it’s always been stellar about that stuff for me.

  3. Would have been nice if there was another concluding paragraph mentioning your choice and the rationale of your decision…
    But thanks for the info 🙂

  4. This is an excellent post, Charl. I am in complete agreement with you about the quality of most articles about the various cloud storage and synchronization options, and I very much agree with your word choice.

    Vacuous. When I put together my last cloud storage comparison post, I immediately decided that I wanted to link to one or more blog posts talking about each of the products. I figured it couldn’t be too hard to find posts talking about these various products. I was so wrong. When I was lucky, the best I usually found was just rehashes of the installation documentation.

    So thank you very much for giving me something that I can reference in the future!

    I have one piece of advice for you. All of the Dropbox-style synchronization software that I played with tends to slow down as file count grows. It was sometimes very obvious where that performance graph started to turn into a hockey stick. For me, ownCloud started slowing down at several thousand files. Seafile hit the same sort of wall at several tens of thousands.

    When I first started using Seafile, it was painfully slow when waiting to sync a library with 70k files. I’ve since split up my data into several libraries, none with more than about 20k files. That changed my sync time from, “this is going to take days!” to, “this is maxing out my 70 megabit Internet connection!”

    1. Hi there Pat, thank you very much for stopping by and taking the time to comment!

      You make a really good point about breaking one’s sync repo up into more manageable chunks.

      I have broken up my main sync repo into two, but the smallest is still 100k files. I should spend some more time further breaking that up. (I have to say, unison is handling my 100k repo with aplomb! 🙂

5. Charl, thanks so much for such a detailed review; these are getting rare these days.
    Thank goodness this is one of the first results I got from google!

  6. Hi Charl,

    Excellent post (as ever!). One question – have you considered ownCloud? It’s a lot more than just file synchronisation, but it’s open source, under active development, has mobile clients, etc. Or was the (current) lack of delta synchronisation a showstopper?

    Cheers, MJ

    1. Hi there MJ, long time no see!

      I did briefly mention ownCloud, in the context of it having no delta sync. In these parts of the world, having a multi-megabyte file transfer just because I added one line of text is, at least for me, a bit of a deal-breaker.

      As you will see in the comments, I was hugely impressed by mega, but also had to drop it when my tests showed that it also doesn’t do any kind of delta transfer.

      1. Hi Charl,

        Indeed, it’s been a long time!

        Whoops, missed the comment about mega (that’ll teach me to not read things thoroughly!) 🙁

        I guessed ownCloud wasn’t considered for that reason, just wanted to confirm. I’m in the process of re-evaluating my use of Dropbox and am pondering a return to Unison but I think I’ll give ownCloud a try first, mainly because I want to use some of the other features it offers, like shared calendars, etc. The lack of ownCloud delta file sync is more annoying for me than a showstopper as I tend not to sync very frequently-changing objects like source code using a file sync application.

        Cheers, MJ

7. Thank you for this write-up, it’s refreshing to see one which goes into detail and doesn’t exist as a medium for affiliate links, like most comparisons I come across. I too have been on the same search; predominantly for backup with multiple versioning, but sync is a bonus. I started out trying CrashPlan due to the good feedback they get on Hacker News, and that you can configure your own encryption key, but the memory requirements (over 1GB for tracking 800Mb files) of the Windows client and the poor upload speed made it unsuitable. I gave up after 30 days without testing on OSX.

    Have you considered https://tresorit.com/? I found it today and it is really expensive compared to others, but I have reached the stage where I want something that works reliably and efficiently, and that’s more important than cost. But their website is very light on solid information (like if it does delta sync).

    Thanks!

    1. I was aware of Tresorit, had done some reading and then wanted to try it out, but then I saw that there was no Linux client yet. For me, that’s an instant deal breaker.

      1. This was Tresorit’s response:

        The Block level replication, also known as delta sync or incremental sync (so the upload of the changed part of the file only) is really on our roadmap, we just had some prerequisites we needed to have – a new file system – in order to prepare for this advanced feature.

        We hope you still like using Tresorit, we will let you know immediately once you can upload only the changed parts.

        Thanks for your cooperation, have a nice day!

  8. Hi Charl

    After several years going through a similar path with all the associated pain, and like, you know, wanting to do something related to work rather than mucking about with lsync, I thought I would comment. No other post really seems worth the time. There are many useful comments here. Mine won’t be I am afraid. I just wanted to say thank you.

    For a year or so I thought it was me being a dunderhead. Sync is hard but how others seem to have made it so hard is to me quite the art form.

    I am with you on Unison. For a while I went back to scripts and rsync. All else seemed too tricky and broken. Although SpiderOak still seems to be the most reliable if instantaneous background sync and RAM hunger aren’t issues (which they are for you and many others).

    Thank you very much.

    Michael

  9. Has anyone tried Sync.com? They look really good. Currently 2TB of secure storage for $100/year. Based in Canada not the US and I love the vault feature where you can send files you don’t want synced everywhere.

  10. Hi Charl,

    Thank you very much for a very informative post

    I am wondering if you have found anything better in the last 6 months.

    If you could, please give me some advice about this.

    Here are my requirements:

    * Must have file versioning enabled in the cloud.

    * Must encrypt locally on my computer before being sent to the cloud.

    * Does not require me to create a virtual drive/folder on my hard drive and then drag and drop files to encrypt then back up. In other words, it encrypts current folders and files in my hard drive as I work on them. No drag and drop to a virtual drive/folder required.

    * Preferably open source.

    * Preferably no Java .

    * Preferably strongest encryption algorithm used — Perhaps Truecrypt ? EncFS ? dm-crypt LUKS ?

    If you want more info, please see

    https://www.reddit.com/r/DataHoarder/comments/3hhj41/how_do_you_encrypt_then_sync_your_files_to_any/

    Thank you very much.

  11. Hi Charl,

    Excellent article, thank you.
    I have had a similar experience but never written about it so I thought I would add to yours.

* Wuala: Used it for a year or so until it began to cost. I checked on 2015-08-25 and noticed they are being shut down permanently, going read-only on 2015-09-30 until the final shutdown.

* BitTorrent Sync: I am still using version 2.0.128 for a number of things, however I occasionally experience file corruption when a laptop that has a file open wakes up after being asleep for a while. I also had an issue (in v2.0.9x) where random characters were added to the end of certain file types; it was rapidly fixed, but created an incredible mess in my svn repository and corrupted tons of xml files. I also experienced the 1.4 issue you talked about.

* syncthing: total disaster, as one of my nodes had data on a USB drive that fell off… the data was then deleted on all other nodes. I had fortunately turned on the “archive” function on one of the nodes and everything went in there. The next problem was that there is no recover function to get the files back out of the archive, and they had all been renamed. The suggested bash scripts using find/sed/awk style corrections did not fix 100% of the files. Apparently fixes for both of the above problems are planned.

* unison: my fallback; the problem is that it is manual and I am lazy 🙂 I am sticking with bittorrentsync and daily backups, but it is not ideal.

    Apparently seafile is making good progress. I will be keeping an eye on seafile and syncthing.

    thanks
    Alex

  12. Hi Charl,
    I just wanted to say thank you for putting this stuff up, it’s helpful having such detailed info of your experience available.

    I was running a Raspberry Pi Server with Seafile for a while more than a year ago and it worked well except the CPU issue, which I found manageable with a bit of fine-tuning of the settings. I stopped and went back to Dropbox because of internal issues on my home-network but I will give it a try again in a month or two.

    Thanks,
    Edgar

  13. Hi, thanks for the review.
I’ve been using syncthing for several months now. It’s quite good.
You pointed out that “syncthing does not have filesystem monitoring integrated”. Have you tried out syncthing-inotify? Or perhaps you don’t use any Linux OSes? It actually can monitor any changes to the filesystem. CMIIW

    1. Thanks bakatare!

      I’m really curious: How many files are you syncing with syncthing?

      When I wrote this blog post, third party filesystem monitoring for syncthing was still in its infancy. (I use mostly Linux and OSX, but whatever sync tool I use needs to support all three of the major desktop systems)

  14. I’m feeling your pain…

    I have come quite close to my requirements using Syncany + Dropbox.
    I’d love to hear your thoughts on this.

    Cheers,
    Kidlike

  15. Great article. I am looking for something a bit out of the mainstream. I use Photoshop Elements 14 on two Macs and cannot place the library folder in the cloud. (You can do this in Windows.) Therefore, I need a modern day version of Folder Share whereby I can select the PSE Library File (where it resides) on both computers and automatically sync any changes made to one file – to the other. Any suggestions?????

16. Strange that I can’t see PYDIO here (kinda like owncloud, syncthing and bittorrent). It has community and Pro versions.

But personally I stick with syncthing (BTW, file monitoring aka syncthing-inotify works great on all platforms – FreeBSD, Linux, M$ and another spying OS aka Mac). Due to paranoia we run our own discovery and proxy server for all syncthing instances, so it’s a completely self-contained corporate syncing network. So far syncthing is getting better and better IMO.

    1. At that point I did not try out PYDIO, because it failed the not-being-implemented-in-PHP requirement. 😛

      I’m very curious about your SyncThing implementation. Awesome that you’re using your own discovery and proxy server.

      How well does it do these days with single repos with more than 150 thousand files? How quick is it on repos at least that large when you modify a single file?

17. Every now and then I try something else, but I always end up back at Unison. It’s truly the nearly perfect sync app, missing only the “watch mode”, which is now present and the reason I’m writing: to ask your opinion about it nowadays, if you don’t mind. I would like your opinion as an experienced user, instead of writing to the developer.

    I use unison on a daily basis, but manually… well, semi-manually, with the auto and batch options set to true. But I'm not using watch mode yet.

    At my University, the IT department only allows us to connect from outside via SSH (on a custom port). Everything else stops at the proxy/firewall. So I'm really “stuck” with unison. That's not a big issue yet, since I still believe unison is the best.

    Is there any other option that connects using SSH?

    Also, I'm wondering how watch mode handles the SSH connection. What happens if the connection drops? Does it automatically try to reconnect? Or is it better to keep using a cron job?

    How’s your experience with unison nowadays?

    I've recently got a WD My Cloud 4TB, and I'm planning to set it up as the main server with backups, while the unison “clients” sync in watch mode, or something like this. So, the last question: does watch mode work if there are three computers connected (A to B, and C to B)?

    1. Hi there Gerson,

      I use the unison watcher in a different context (syncing source trees between hosts and containers) where it works quite well.

      However, in your case, with your multi-point, multi-directional and firewall-piercing requirements, I would try syncthing (open source) or Resilio Sync (commercial, but a free version is available; this is what btsync became).

      Both of these are peer-to-peer syncing solutions that are usually able to get through firewalls that allow web traffic. Resilio does filesystem change detection out of the box, whereas with syncthing you have to install an addon for your OS.

      Let us know here how it goes!
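
      For what it's worth, a unison profile for the SSH-on-a-custom-port case looks roughly like this (a sketch, not a tested config; hostname, port and paths are placeholders, and `repeat = watch` is the preference that enables watch mode):

      ```
      # ~/.unison/work.prf -- sketch; adjust roots and port to your setup
      root = /home/gerson/sync
      root = ssh://mycloud//shares/sync
      sshargs = -p 2222       # custom SSH port
      auto = true
      batch = true
      repeat = watch          # needs the unison-fsmonitor helper on both ends
      ```

      With `repeat = watch`, unison itself keeps the session alive and re-runs the sync on changes, so a dropped SSH connection shows up as a failed run rather than silent breakage; a cron fallback is still a reasonable belt-and-braces measure.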

  18. Have you given Tresorit a shot? It’s a bit pricey but it’s the only one so far that has a mobile camera upload as robust as Dropbox’s.

  19. Thanks for your effort writing this up. I am in the same boat, trying to find the right solution, although I didn’t try nearly as many programs as you did. I ended up using syncthing after trying scripts with robocopy, syncbackpro and owncloud.

    I have three folders I need to sync across my workstation and my laptop.

    1. An often-changing Project folder with my projects and source code. It is 20 GB and has over 300K files.
    2. A fairly static Data folder, 650 GB with about 50K files.
    3. A small and often-changing Scripts folder with a few hundred small text files; no problem at all.

    The Project folder is my troublemaker. It has lots of files and only some of them change often. SyncThing and SyncBackPro are the fastest programs at scanning the folder, but even so the scan takes a few minutes.

    The Data folder has some large files (14 GB) that take a long time to sync when they are created, but they don't change afterwards. This folder is scanned in seconds.

    I installed SyncThing on my workstation, laptop and a backup server. When I am at work I use a VPN to stay connected to the backup server.

    At night I stop SyncThing on the server and make an incremental backup of all three folders, to safeguard against sync disasters.

    I use SyncTrayzor on all the computers to monitor the folders for changes. This way changes are propagated within seconds.

    I had a few problems with this setup and I am still evaluating it. The first problem is that SyncThing sometimes stops on the server. When this happens, none of the changes on the laptop or workstation are synced.
    The second problem is that sometimes, when creating a new file and writing some text to it, the file is marked as in conflict by SyncThing. The SyncTrayzor filesystem monitor is too aggressive, I guess.

    Thanks again, and good luck!

    1. GetIt Remote is not really a Dropbox alternative. With GetIt Remote you share directories from any computer, and then you can manually access those files via the web, but there's no real synchronisation happening. I guess for a certain limited use case this could do the trick.

    1. Yes. I sometimes use encfs, an open source tool with which boxcryptor is compatible.

      The problem with this type of bolt-on encryption is that it's locally quite slow (you're now working via fusefs on Linux, for example) and that any delta-syncing algorithm will now have to sync the whole file every time you change a small part of it.
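
      To make the delta-sync point concrete, here's a toy Python sketch. It is not encfs's actual cipher; it just simulates a chained (CBC-style) scheme, where each encrypted block depends on everything before it. A one-byte edit leaves exactly one plaintext block changed, but changes every encrypted block from the edit onward, so a block-based delta-syncer has to retransmit half the file:

      ```python
      import hashlib

      BLOCK = 4096

      def blocks(data):
          """Split data into fixed-size blocks, as a delta-syncer would."""
          return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

      def chained_encrypt(data, key=b"k"):
          """Toy chained 'encryption': each output block depends on all
          previous blocks (illustrative only, not a real cipher)."""
          out, prev = [], b"\x00" * 32
          for b in blocks(data):
              prev = hashlib.sha256(prev + key + b).digest()
              out.append(prev)
          return out

      plain = bytes(100 * BLOCK)          # a 100-block file of zeros
      edited = bytearray(plain)
      edited[50 * BLOCK] = 1              # flip one byte in block 50
      edited = bytes(edited)

      # Plaintext: only one block differs -> a delta-syncer ships one block.
      diff_plain = sum(a != b for a, b in zip(blocks(plain), blocks(edited)))

      # Chained "ciphertext": every block from the edit onward differs.
      diff_enc = sum(a != b
                     for a, b in zip(chained_encrypt(plain),
                                     chained_encrypt(edited)))

      print(diff_plain, diff_enc)  # 1 changed block vs. 50 changed blocks
      ```

      Real tools vary (encfs has configurable modes), but any scheme that chains or randomises block contents defeats rsync-style delta transfer in roughly this way.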
