High availability

The high-availability (HA) features of Sparksee have been deprecated since version 6.0. HA is still present, but it has not been updated with new Sparksee features since version 5.2.3 and it will eventually be removed. HA has never been a required feature for using Sparksee as an embedded database, and we have better plans for the future use of Sparksee as a standalone database.

Versions 4.7 to 5.2.3 of the Sparksee high-performance graph database come with high-availability features, which are best suited for applications with a heavy request load.

Sparksee high availability (SparkseeHA) enables multiple replicas to work together, providing the highest scalability for Sparksee applications.

This chapter covers the architecture of the SparkseeHA features, the configuration details for enabling them, and examples of typical usage scenarios.

SparkseeHA allows horizontal scaling of read operations, whilst writes are managed centrally. Future work on SparkseeHA will provide fault tolerance and master re-election.

SparkseeHA is a software feature that is enabled through the license; the free Sparksee evaluation does not provide it by default. More information about the licenses can be found in the ‘Introduction’ section.

Architecture

Design

SparkseeHA provides a horizontally scaling architecture that allows Sparksee-based applications to handle larger read-mostly workloads.

SparkseeHA has been designed to minimize the work required to go from a single-node installation to a multi-node HA-enabled installation. In fact, it does not require any change in the user application; it is simply a question of configuration.

To achieve this, several Sparksee slave databases work as replicas of a single Sparksee master database, as seen in the figure below. Thus, read operations can be performed locally on each node and write operations are replicated and synchronized through the master.

Figure 7.1: SparkseeHA Architecture

Figure 7.1 shows all the components in a basic SparkseeHA installation: the Sparksee master database, its Sparksee slave replicas, the coordination service and the load balancer that distributes client requests among the instances.

How it works

Now that the pieces of the architecture are clear, let’s see how SparkseeHA behaves in typical operations and scenarios using these components.

Master election

The first time a Sparksee instance goes up, it registers itself with the coordinator service. The first instance registered becomes the master; if a master already exists, the instance becomes a slave.
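
SparkseeHA performs this registration internally, so no user code is required. Purely as an illustration, the behavior resembles the classic ZooKeeper leader-election recipe sketched below in Java (the /sparksee path and node names are hypothetical, not actual SparkseeHA internals):

import org.apache.zookeeper.*;
import org.apache.zookeeper.ZooDefs.Ids;
import java.util.Collections;
import java.util.List;

public class ElectionSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the coordination service (see sparksee.ha.coordinators)
        ZooKeeper zk = new ZooKeeper("192.168.1.2:2181", 30000, event -> {});

        // Make sure the parent node exists (hypothetical path)
        try {
            zk.create("/sparksee", new byte[0], Ids.OPEN_ACL_UNSAFE,
                      CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException ignored) {}

        // Each instance registers an ephemeral, sequentially numbered node
        String me = zk.create("/sparksee/instance-", new byte[0],
                              Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        // The instance holding the lowest sequence number is the master;
        // every instance registered afterwards becomes a slave
        List<String> children = zk.getChildren("/sparksee", false);
        Collections.sort(children);
        System.out.println(me.endsWith(children.get(0)) ? "master" : "slave");
    }
}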

Reads

As all Sparksee slave databases are replicas of the Sparksee master database, slaves can answer read operations by performing the operation locally. They do not need to synchronize with the master.

Writes

In order to preserve data consistency, write operations require slaves to be synchronized with the master. A write operation is as follows:

  1. A slave that wishes to perform a write operation sends it to the master.
  2. The master serializes the operation in the history log, performs the write, and replies to the slave once it has completed successfully.
  3. The slave receives from the master the full list of write operations it has missed, extracted from the history log, and applies them together with its own write. This keeps the slave’s replica consistent with the master.

If two slaves perform a write operation on the same object at the same time, the result may be a lost update, just as may happen in a single-instance Sparksee installation when two different sessions write to the same object at the same time.
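
From the application’s point of view, a write in an HA installation is identical to a write against a single Sparksee instance; the forwarding to the master is transparent. Below is a minimal sketch using the Sparksee Java API (the database path and type name are illustrative):

import com.sparsity.sparksee.gdb.*;

public class WriteSketch {
    public static void main(String[] args) throws Exception {
        SparkseeConfig cfg = new SparkseeConfig(); // reads sparksee.cfg by default
        Sparksee sparksee = new Sparksee(cfg);
        Database db = sparksee.open("example.gdb", false);
        Session sess = db.newSession();

        // Enclose the write in a transaction; on a slave, SparkseeHA
        // forwards it to the master and synchronizes as described above
        sess.begin();
        Graph g = sess.getGraph();
        int personType = g.newNodeType("Person");
        long person = g.newNode(personType);
        sess.commit();

        sess.close();
        db.close();
        sparksee.close();
    }
}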

Slave goes down

A failure in a slave during regular operation does not affect the rest of the system. However, if a slave goes down in the middle of a write operation, the behavior of the rest of the system depends on whether the write was enclosed in a transaction (see the ‘Fault tolerance’ section below).

Slave goes up

When a Sparksee instance goes up, it registers itself with the coordinator. The instance will become a slave if there is already a master in the cluster.

If polling is enabled for the slave, it will immediately synchronize with the master to receive all pending writes. On the other hand, if polling is disabled, the slave will synchronize when a write is requested (as explained previously).
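
Polling is controlled through the Sparksee configuration; for instance, the following sparksee.cfg line (also used in the example later in this chapter) makes a slave synchronize with the master every 600 seconds:

sparksee.ha.sync=600s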

Future work

This is a first version of SparkseeHA, so although it is fully operational, some important functionality required to assure complete high availability of the system is not yet available. Subsequent versions will focus on the following features:

Master goes down

A failure in the master leaves the system non-operational. In future versions this scenario will be handled by automatically converting one of the slaves into the new master.

Fault tolerance

A failure during the synchronization of a write operation between the master and a slave leaves the system non-operational. For instance, a slave could fail while performing a write operation enclosed in a transaction, or there could be a general network error.

This scenario requires that the master be able to abort (roll back) a transaction. As Sparksee does not offer that functionality yet, these scenarios cannot currently be handled. SparkseeHA will be able to react once Sparksee implements the required functionality.

Configuration

Installation

A complete installation includes all the elements previously described in the architecture: Sparksee (with the SparkseeHA configuration), the coordination service (ZooKeeper) and the load balancer. The last one is beyond the scope of this document because, as previously stated, it is up to developers to decide which balancer best fits their specific system.

SparkseeHA is included in all distributed Sparksee packages, so it is not necessary to install any extra package to make an application HA-enabled; it is only a matter of configuration. Sparksee can be downloaded as usual from Sparsity’s website, and the Sparksee documentation site explains how to develop applications with it.

SparkseeHA requires Apache ZooKeeper as the coordination service. The latest version of ZooKeeper (v3.4.3 at the time of writing) should be downloaded from its website and installed on all the nodes of the cluster where the coordination service will run. Please note that Apache ZooKeeper requires Java; we recommend consulting the Apache ZooKeeper documentation for the detailed requirements.

ZooKeeper

The configuration of Apache ZooKeeper can be a complex task, so we refer the user to the Apache ZooKeeper documentation for more detailed instructions.

This section does, however, cover the configuration of the basic parameters to be used with SparkseeHA, as an introduction to ZooKeeper configuration.

Basic ZooKeeper configuration is performed in the $ZOOKEEPER_HOME/conf/zoo.cfg file. This configuration file must be installed on each of the nodes that form part of the coordination cluster.

Each server.x entry specifies a hostname followed by two port numbers. The first port is mandatory because it is used by the ZooKeeper servers acting as followers to connect to the leader, while the second one is only used when the leader election algorithm requires it. To test multiple servers on a single machine, different ports must be used for each server. In a replicated setup, each server also needs a myid file in its dataDir containing its server number (the x in server.x).

This is an example of a valid $ZOOKEEPER_HOME/conf/zoo.cfg configuration file:

# Basic time unit, in milliseconds
tickTime=2000
# Directory where ZooKeeper stores its data (and the myid file)
dataDir=/var/lib/zookeeper/
# Port where clients connect to the coordination service
clientPort=2181
# Ticks a follower may take to connect and sync with the leader
initLimit=5
# Ticks a follower may lag behind the leader before being dropped
syncLimit=2
# Coordination cluster members: server.x=hostname:followerPort:electionPort
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888

SparkseeHA

As previously explained, enabling HA in a Sparksee-based application does not require any update of the user’s application nor the use of any extra packages. Instead, just a few variables must be defined in the Sparksee configuration.

Please take into account that slaves should synchronize before the master’s history log expires. This happens naturally if the write rate of the user’s application is high enough; otherwise, you should set a polling value, which must be shorter than the master’s history log time.

These variables must be defined in the Sparksee configuration file (sparksee.cfg) or set using the SparkseeConfig class. More details on how to configure Sparksee can be found on the documentation site.
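
For instance, assuming the usual Sparksee Java configuration classes (SparkseeProperties and SparkseeConfig; check the documentation site for the exact calls), an application could load an HA-enabled configuration file like this:

import com.sparsity.sparksee.gdb.*;

public class HAConfigSketch {
    public static void main(String[] args) throws Exception {
        // Load the configuration file containing the sparksee.ha.* variables
        SparkseeProperties.load("./sparksee.cfg");

        // The resulting configuration is HA-enabled; the rest of the
        // application is identical to a single-instance installation
        SparkseeConfig cfg = new SparkseeConfig();
        Sparksee sparksee = new Sparksee(cfg);
        // ... open the database and work as usual ...
        sparksee.close();
    }
}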

Example

Figure 7.2 is an example of a simple SparkseeHA installation containing an HAProxy load balancer, a single ZooKeeper coordination server (192.168.1.2) and two Sparksee instances: a master (192.168.1.3) and a slave (192.168.1.5).

Figure 7.2: SparkseeHA example

HAProxy

HAProxy is a free, fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. Check their documentation site for more details about the installation and configuration of this balancer.

The configuration file for the example would look like this:

global
    daemon                      # run HAProxy in the background
    maxconn 500                 # maximum number of concurrent connections

defaults
    mode http
    timeout connect 10000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in                # entry point for client requests
    bind *:80
    default_backend sparksee

backend sparksee                # the Sparksee application servers
    server s1 192.168.1.3:8080
    server s2 192.168.1.5:8080

listen admin                    # HAProxy statistics page
    bind *:8080
    stats enable

ZooKeeper

In this example, the $ZOOKEEPER_HOME/conf/zoo.cfg configuration file for the ZooKeeper server would be:

tickTime=2000
dataDir=$ZOOKEEPER_HOME/var
clientPort=2181
initLimit=10
syncLimit=5

Please note that, as this example runs a single-node ZooKeeper cluster, the server.x variables are not necessary.

SparkseeHA

The Sparksee configuration file for the first instance (the master) would look like this:

# Enable the high-availability mode
sparksee.ha=true
# Address and port of this Sparksee instance
sparksee.ha.ip=192.168.1.3:7777
# Address of the coordination (ZooKeeper) service
sparksee.ha.coordinators=192.168.1.2:2181
# Polling interval for slave synchronization
sparksee.ha.sync=600s
# Time the master keeps write operations in its history log
sparksee.ha.master.history=24H

And this would be the content for the file in the second instance (the slave):

sparksee.ha=true
sparksee.ha.ip=192.168.1.5:7777
sparksee.ha.coordinators=192.168.1.2:2181
sparksee.ha.sync=600s
sparksee.ha.master.history=24H

The only difference between these two files is the value of the sparksee.ha.ip variable.

As seen in the ‘Architecture’ section, the role of master is given to the first instance to start, so to make sure the master is the one designated in the example, the order of operations is as follows:

  1. Start the master by starting the server with the 192.168.1.3 IP address first.
  2. Once the master has been started, start all the slave instances.
  3. Finally, start the HAProxy.

Likewise, to shut down the system it is highly recommended that the slaves are stopped first, followed by the master.
