This project is read-only.

Vsync: Consistent Data Replication for Cloud Computing

Archive plan: With CodePlex shutting down, the question arises of what to do about support for Vsync. We haven't actually had a bug report in two and a half years, although people do seem to use the system, given the steady rate of downloads. Meanwhile, Derecho is working, and is 15,000x faster. Given that impressive gap in speed, and the fact that my whole research group is focused on Derecho, I'm going to mentally archive Vsync, too. But I'll still fix bugs! Just email me if you need help; if that happens, I'll probably create a GitHub account for Vsync to release the fixes. — Ken Birman

News flash: Derecho is working! It has many new features, and it will need additional work to reach Vsync-quality documentation. But we have created what seems to be the world's fastest atomic multicast / Paxos replication solution! Derecho also has a whole new architecture for smart storage (intelligent data warehousing), and a way to structure applications into complex substructures. All of this is in C++, though, so you would need to code in that language to use it.

Project description: Vsync is a new option for cloud computing that can enable reliable, secure replication of data even in the highly elastic first tier of the cloud. Vsync is a new name for a fairly mature project of Ken Birman at Cornell University, previously called Isis2. The Vsync software library helps you build applications that run on multiple computers, coordinating actions, sharing replicated data, moving files and other information at high speeds, cooperating to support key-value storage (DHT storage), etc. Vsync aims at sophisticated developers with challenging needs, and is designed to be highly secure, fault-tolerant, consistent and very scalable, even under "cloudy conditions."

The name Vsync is a reference to the formal model used by the system, namely virtual synchrony. The model is a form of state machine replication with various optimizations available (but optional) that permit greater speed without loss of correctness.
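The core idea of state machine replication can be conveyed with a short toy program. The sketch below is ordinary C#, not Vsync code; it merely illustrates why delivering the same totally ordered update stream to every replica keeps all replicas in agreement, which is the property the virtual synchrony model guarantees (and optimizes) in a real distributed setting:

```csharp
// Toy illustration (not Vsync code): if every replica applies the same
// totally ordered stream of updates, all replicas reach the same state.
using System;
using System.Collections.Generic;

class Replica
{
    public Dictionary<string, int> State = new Dictionary<string, int>();
    public void Apply(string key, int value) { State[key] = value; }
}

class Demo
{
    static void Main()
    {
        var replicas = new List<Replica> { new Replica(), new Replica(), new Replica() };
        var updates = new[] { ("x", 1), ("y", 2), ("x", 3) };

        // "Multicast": deliver every update to every replica, in the same order.
        foreach (var (key, value) in updates)
            foreach (var r in replicas)
                r.Apply(key, value);

        // Every replica now agrees: x=3, y=2.
        foreach (var r in replicas)
            Console.WriteLine($"x={r.State["x"]} y={r.State["y"]}");
    }
}
```

The hard part in a real system, of course, is enforcing that total order (and handling membership changes and failures) across machines; that is precisely what the virtual synchrony machinery provides.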

Given its long history, Vsync is quite stable now and rather mature: if we include the older Isis2 downloads, there had been about 5,250 as of December 2015. In fact, work has started on its successor, a system we originally expected to call DMC (Derecho Multicast) but that will in fact be named Derecho. The original thinking was to translate Vsync (which is coded in C#) into C++, and then perhaps remove less commonly used features. But C++ 11 is just too different from C#, so Derecho is being built from scratch in C++, while Vsync itself will live on in stable form in C# and Java. Vsync is maintained by Ken Birman at Cornell, and he plans to continue doing so indefinitely.

We are trying to broaden the set of languages that can be used with the library. Last year we confirmed that Vsync is compatible with a version of Python called IronPython, a version of C++ 11 called C++/CLI, and a version of Ruby called IronRuby, all of which are standard on .NET. (We also have a report that Vsync works well from F# on .NET).

This year the hope is to add Java and Scala, with the help of a library called IKVM (downloaded separately). IKVM allows Java-based applications to run on .NET or Mono, and since those are the underlying runtime environments where Vsync is normally used, it seems very likely that this would work. Please post on the discussions tab if you try this and want help, or want to share your experience.

The most current Vsync release is V2.2.2063. We actively support the system, so be sure to report any issues you encounter. You won't find a lot of questions on the discussion or issues page because the move to the new web site and the renaming of the system "cleared" the old content.

Background Information
Vsync was created in 2010 as a new version of an older style of group communication system that the author first began to work with in 1985 (the Isis Toolkit). Although the system started out as a data replication technology (groups of programs that can share updates), in 2013 Vsync became much more big-data oriented. In 2014 it evolved again, this time with a focus on leveraging cutting-edge remote direct memory access (RDMA) transfer capabilities such as Infiniband and fast Ethernet interfaces with RDMA over Converged Ethernet (RoCE) support. To maximize the payoff for the user, the 2014 technology is also designed to assist you in moving big memory-mapped files around, since more and more applications manage their data in files and memory-map them for speed. In this mode Vsync is more of a control mechanism, since the data is basically external (we use the term "out of band") relative to the system. The idea is that if our platform can help replicate and copy gigabyte objects without actually touching them directly, the dominant cost will be the actual RDMA copying cost, which of course is a pure hardware speed. You end up with the benefits of Vsync (strong consistency, fault-tolerance, automated self-management) and yet don't need to pay the high overhead of sending big data objects "through" the platform, which is coded in C#.

Using the system
The system was created in C# and is still easiest to use from that language; a direct translation to Java should be available soon and will also be quite clean and easy to use from native Java applications. Our documentation and videos on learning to work with it are in C#. But this said, in theory, Vsync can also be used from any language supported by .NET. For example, in Spring 2015 a Cornell student experimented successfully with IronRuby and F# (I've posted his instructions on how to do this on the downloads page). Let us know if you experiment successfully with something not explicitly described here, and we'll add it to our list.
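To give a feel for the C# programming style, here is a minimal sketch of a process that joins a group, registers an update handler, and multicasts a message. It follows the pattern used in the Vsync user manual's examples, but treat it as an illustration rather than a recipe: exact namespaces and method signatures may differ between releases, and real applications would add fault-tolerance logic, state transfer, and cleanup.

```csharp
// Sketch of a simple Vsync group member, in the style of the manual's
// examples.  Names and signatures may vary by release.
using System;
using Vsync;

class HelloGroup
{
    const int UPDATE = 0;   // application-chosen message tag

    static void Main()
    {
        VsyncSystem.Start();                      // connect to the Vsync service
        Group g = new Group("demo");              // create or look up a group
        g.Handlers[UPDATE] += (Action<string>)(msg =>
            Console.WriteLine("got update: " + msg));
        g.Join();                                 // become a member
        g.Send(UPDATE, "hello");                  // multicast to all members
        VsyncSystem.WaitForever();                // keep the process alive
    }
}
```

The handler-registration style (a delegate bound to a message tag) is what makes Vsync feel close to ordinary event-driven, GUI-builder-era C# development.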

Another way to use Vsync from C++ or Java or other languages is via a form of RPC that runs through web interactions (technically, web services). There is a "server" for this purpose on our downloads page.

The system is open source and open development, and we actually have a very detailed set of project suggestions aimed at students learning to use the system (or even professionals!). Vsync is solid enough for use in production settings, but obviously we do have a lot of students who are using the system to get hands-on experience with cloud or distributed computing. Don't be shy about posting questions or comments! The project suggestions page has details.

We are currently at release level V2.2.2063. My approach is to leave a few older releases around just in case I destabilize anything when fixing bugs, but in fact this has not happened in years, so in general, always go with the most current release.

Target user community
So who might find our work useful? The premise behind this project is that the need for high assurance has never been greater. With the trend towards data centers of all sizes and shapes (ranging from small racks of just a dozen or two machines to massive cloud computing data centers with hundreds of thousands of them), developers of modern computing systems need to target the Web, employ Web Services APIs, and yet somehow ensure that the solutions they build can scale out without loss of assurance properties such as data security (who knows what the other users of the cloud might be doing… or what might be watching?), consistency, and fault-tolerance. The Vsync library was built to help you solve this problem in an easy way, closely matched to the style of development used for standard object-oriented applications built with GUI builders.

One fairly recent push has focused on speed. I should start by saying that, despite the expectations one might have, C# is actually quite fast. But it isn't fast for dealing with large data objects, like mapped files that could have gigabytes of content. Accordingly, during 2014 we decided to port Vsync to leverage a relatively modern API called "verbs" that was originally intended to be broadly useful, but somehow got an early reputation of being specific to Infiniband (a very high speed interconnect, used mostly on HPC clusters). Our initial focus has been on porting Vsync to use verbs with Infiniband and to get the full speed feasible in that configuration, and this has been quite successful. For example, depending on your hardware, we're seeing data movement speeds of 4 Gb/s to as much as 10 or 12 Gb/s over a 20 Gb/s Infiniband network, and we think this is just the start of a process that should benefit Vsync users on every kind of network hardware. The only caveat is that you need to work with the Vsync programmer's API to derive the full benefit.

But that caveat may already be fading away. We used Vsync to build a distributed management platform for 24x7 availability of cloud-hosted applications, and are just finishing a new file system for real-time uses that employs Vsync for data replication. So more and more, you may be able to benefit from Vsync without actually writing code that directly uses it.

Getting Started
  • The project Documentation page has all sorts of materials, including video summaries, release descriptions, and standard written documentation. There are also some self-test modules to evaluate your understanding of the videos, and suggestions for projects that you could do using Vsync.
  • The project suggestions page is constantly being expanded. It has many ideas for projects you could do with the system, at various levels of difficulty.
  • What's new?
  • Vsync for people who don't know C#
  • Java support, for potential users who would work from Java or Scala.

For compilation under Linux, please see the Compile page. With proper setup, Vsync can run on Windows, Linux, Amazon EC2 or Eucalyptus-style virtualization platforms, Azure... you name it!

What's with the older Isis name?
The earlier versions of Vsync were named in reference to the Egyptian goddess Isis, who brought her brother Osiris back from death after an epic battle. But by late 2015 the use of that name for a system often employed in critical infrastructure settings became a concern, and we ultimately decided to change it.

Last edited Jun 15, 2017 at 4:48 PM by birman, version 27