
Cassandra Multi-AZ Data Replication

In this blog, we shall replicate data across nodes running in multiple Availability Zones (AZs) to ensure reliability and fault tolerance. Apache Cassandra is an open source non-relational/NoSQL database. It is massively scalable and is designed to handle large amounts of data across multiple servers (here, we shall use Amazon EC2 instances), providing high availability. We shall create a keyspace with a data replication strategy and replication factor, and learn how to ensure that the data remains intact even when an entire AZ goes down.

The initial setup consists of a Cassandra cluster with 6 nodes, with 2 nodes (EC2 instances) in AZ-1a, 2 in AZ-1b and 2 in AZ-1c.

Initial Setup:
Cassandra Cluster with six nodes.

AZ-1a: us-east-1a: Node 1, Node 2
AZ-1b: us-east-1b: Node 3, Node 4
AZ-1c: us-east-1c: Node 5, Node 6

Next, we have to make changes in the Cassandra configuration file. The cassandra.yaml file is the main configuration file for Cassandra. We can control how nodes are configured within a cluster, including inter-node communication, data partitioning and replica placement, in this config file.

The key value which we need to define in this context is called the snitch. A snitch indicates which region and Availability Zone each node in the cluster belongs to, and it gives information about the network topology so that requests are routed efficiently. There are different types of snitches available; in this case, we shall use EC2Snitch as all of the nodes in our cluster are within a single region.

We shall set the snitch value as shown below: snitch

We also need to define the seeds key in the configuration file (cassandra.yaml). Cassandra nodes use this list of hosts to find each other and learn the topology of the ring, and it is used during startup to discover the cluster.
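For reference, a minimal sketch of how these two settings might look in cassandra.yaml is given below. The IP addresses are placeholders chosen for illustration (one seed per Availability Zone), not the actual addresses of the cluster.

# cassandra.yaml (excerpt) -- illustrative values only
endpoint_snitch: Ec2Snitch        # Ec2Snitch maps the EC2 region to the datacenter and the AZ to the rack

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # comma-separated list of seed node addresses; placeholder IPs, one per AZ
          - seeds: "10.0.1.10,10.0.2.10,10.0.3.10"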

The nodetool utility is a command line interface for managing a cluster. We shall check the status of the cluster using the nodetool status command as shown below: cass

The Owns field indicates the percentage of data owned by each node. As we can see, the Owns field is nil as there are no keyspaces/databases created yet. So, let us go ahead and create a sample keyspace.

Since we are using multiple nodes, we need to group our nodes. The replication strategy indicates the nodes where replicas are placed, and Cassandra places the replicas based on the information provided by the snitch; the total number of replicas across the cluster is known as the replication factor. We shall use the NetworkTopologyStrategy replication strategy since we have our cluster deployed across multiple Availability Zones. NetworkTopologyStrategy places replicas on distinct racks/AZs, as nodes in the same rack/AZ often fail at the same time due to power, cooling or network issues.

Let us set the replication factor to 3 for our "first" keyspace:

CREATE KEYSPACE "first" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'us-east': 3};

The above CQL command creates a database/keyspace 'first' with class NetworkTopologyStrategy and 3 replicas in us-east (in this case, one replica in AZ/rack 1a, one in AZ/rack 1b and one in AZ/rack 1c). This strategy will also help in case of disaster recovery.
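As a quick sanity check (not part of the original walkthrough), cqlsh can describe the keyspace to confirm the replication settings that were just applied; on Cassandra 3.x and later the same information is also queryable from the system_schema tables:

-- prints the full CREATE KEYSPACE statement, including the replication class and factor
cqlsh> DESCRIBE KEYSPACE first;

-- equivalent check against the schema tables (Cassandra 3.0+)
cqlsh> SELECT keyspace_name, replication FROM system_schema.keyspaces WHERE keyspace_name = 'first';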

Cassandra provides a command prompt called the Cassandra Query Language Shell, also known as CQLSH, which acts as an interface for users to communicate with it. Using CQLSH, you can execute queries written in the Cassandra Query Language (CQL).

Next, we shall create a table user with 5 records for the tests.

CREATE TABLE user(user_id text, login text, region text, PRIMARY KEY (user_id));
Now, let us insert some records into this table:

insert into user (user_id, login, region) values ('1', 'test.1', 'IN');
insert into user (user_id, login, region) values ('2', 'test.2', 'IN');
insert into user (user_id, login, region) values ('3', 'test.3', 'IN');
insert into user (user_id, login, region) values ('4', 'test.4', 'IN');
insert into user (user_id, login, region) values ('5', 'test.5', 'IN');

cqlsh> select * from user;

query
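To see where an individual row actually lives, nodetool can list the replica endpoints for a given partition key; a small sketch (the key '1' refers to the user_id inserted above):

# lists the IP addresses of the nodes holding replicas of the row with user_id '1'
# with replication factor 3, this should print three endpoints, one per Availability Zone
nodetool getendpoints first user 1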

Now that our keyspace/database contains data, let us check the ownership and effectiveness once more.

This time, the Owns field is no longer nil, as the keyspace has been defined.
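Before running the failure tests below, it can be useful to pin the consistency level in cqlsh (an optional aside, not part of the original setup). With a replication factor of 3 and one replica per AZ, a QUORUM read or write needs only 2 of the 3 replicas, which is why losing a single Availability Zone should not block queries:

-- cqlsh session setting: subsequent queries use QUORUM (2 of 3 replicas with RF=3)
cqlsh> CONSISTENCY QUORUM;
-- this read still succeeds while one AZ's replicas are down
cqlsh> select * from user;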

Let us now perform some tests to make sure the data was replicated intact across multiple Availability Zones.

Test 1:

Node 1 was stopped.
Connection was made to the Cluster on the remaining nodes and records were read from the table user.
All records were intact.
Node 1 was started.
On Node 1, 'nodetool -h hostname_of_Node1 repair first' was run.
Connection was made to the Cluster on Node 1 and records were read from the table user.
All records were intact.
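For reference, the stop/start/repair cycle used in Test 1 might look like the commands below on a package-based install; the systemd service name and the hostname are assumptions and may differ in your environment.

# on Node 1: stop Cassandra to simulate the node going down (service name is an assumption)
sudo systemctl stop cassandra

# ...verify reads from the remaining nodes, then bring the node back...
sudo systemctl start cassandra

# run a repair of the 'first' keyspace against Node 1 once it is back up
nodetool -h hostname_of_Node1 repair first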

Test 2: (Scenario wherein an entire Availability Zone, i.e. us-east-1a, goes down)

Node 1 and Node 2 were stopped.
Connection was made to the Cluster on remaining nodes in the other AZs (us-east-1b, us-east-1c) and records were read from the table user.
All records were intact.
Node 1 and Node 2 were started.
'nodetool -h hostname_of_Node1 repair first' was run on Node 1.
'nodetool -h hostname_of_Node2 repair first' was run on Node 2.
Connection was made to the Cluster on Node 1 and Node 2 and records were read from the table user.
All records were intact.
Similar tests were done by shutting down the nodes in the us-east-1b and us-east-1c AZs to check that the records remained intact even when an entire Availability Zone goes down.

Hence, from the above tests, it is quite clear that a six node Cassandra cluster spread across three Availability Zones, with a minimum replication factor of 3 (one replica in each of the three AZs), is recommended to make Cassandra fault tolerant against an entire Availability Zone going down.

Stay tuned for more blogs!
