My first take on the CAP theorem!

This is purely my own understanding of the CAP theorem. Since it first came out, the theorem has evolved considerably: people have identified flaws and caveats, and as far as I know there is no single definitive explanation of it. Below is my attempt to explain it by walking through some distributed-system concepts that surround the theorem.

The CAP theorem by Eric Brewer _originally_ states that any distributed system with shared data is likely to see tension between the following systemic properties:

  1. Consistency – All the replicas are in sync and maintain the same state of any given object at any given point in time. This is usually formalized as linearizability (atomic consistency).
  2. Availability – A request will eventually complete successfully. A read/write request on any node of the system will never be rejected as long as that particular node is up and running.
  3. Network Partition Tolerance – When the network connecting the nodes goes down, the system will still continue to operate even though some or all nodes can no longer communicate with each other.

Put differently, the theorem states that a system can satisfy at most two of the properties mentioned above, but not all three at once.

Let’s say the system comprises two nodes interconnected by a network.

If a data item is stored only at node N1, then that node will always have a consistent view of that data, as all updates are serialized on the same node. So, we achieve consistency “C”.

Suppose the data at node N1 is replicated at node N2, and both replicas are always in sync, maintaining consistent copies of the data at any point in time. This implies that an update happening at node N1 will atomically propagate to node N2 and vice-versa. So, a read/get request for a data item on either replica (N1 or N2) will never return a stale or outdated copy of the data. Our system again achieves consistency “C”.
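As a rough sketch of this synchronous replication scheme (the `Replica` class and its methods here are purely illustrative, not any real system’s API), every write is applied locally and then propagated to the peer before it is acknowledged:

```python
# Toy model of two synchronously replicated nodes: a write on either
# replica is atomically applied to both, so reads never see stale data.

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}    # this node's local copy of the data
        self.peer = None   # the other replica

    def write(self, key, value):
        # Apply locally, then atomically propagate to the peer.
        self.store[key] = value
        if self.peer is not None:
            self.peer.store[key] = value

    def read(self, key):
        return self.store.get(key)

n1, n2 = Replica("N1"), Replica("N2")
n1.peer, n2.peer = n2, n1

n1.write("x", 1)           # update arrives at N1...
assert n2.read("x") == 1   # ...and is immediately visible at N2: "C" holds
```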

Now that we briefly understand the “C” of the CAP theorem, let’s assume that our distributed system of two nodes N1 and N2 is designed and implemented so that data is replicated on both nodes, and the system maintains a consistent copy of the data.

Having said this, let’s say that the network connecting both the nodes goes down (network gets partitioned), but both nodes N1 and N2 are up and running fine.

How will this impact the overall operations of the system? The updates happening at node N1 can no longer reach node N2 and vice-versa. Now, if:

1. Both replicas continue to service update requests for a data item, and because the network is partitioned, these updates can no longer be propagated to the other replica. Thus both replicas diverge and end up maintaining conflicting or inconsistent copies of the data item, so the system no longer possesses the consistency (“C”) property. Here we chose availability (“A”) and continued to service operations in a disconnected environment. In other words, we preferred availability “A” over consistency “C”, and tolerated network partitions “P”. Hence, the system provides the “A” and “P” properties from the CAP theorem.


2. One of the replicas (let’s say N2) stops servicing read/write requests on detecting that the network connecting the system got partitioned. This means that the system treats node N2 as no longer available, and any request on that node will be rejected. In this case, we preserve the consistency property because all updates are serialized only at node N1, and there is no possibility of replica divergence since the other replica is not servicing any requests. Note that N2 should not service even a read/get request, as it could end up returning an old/stale copy of the data. Hence, this kind of system provides consistency “C” at the cost of availability “A”, and as before we tolerate network partitions “P”.


3. Both the replicas N1 and N2 stop servicing update requests, but continue to provide read/get operations. In this case, the system is available for read operations, but not for updates. Here again we preserve the consistency “C” at the cost of availability “A”, and tolerate network partitions “P”.
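The choices above can be sketched in a toy model. Everything here (the `Replica` class, the `prefer` flag, the `Unavailable` exception) is hypothetical and only illustrates the trade-off, not any real system’s behaviour:

```python
# Toy model of the partition trade-off: each replica is configured to
# prefer availability ("A", keep serving and risk divergence) or
# consistency ("C", reject requests while cut off from the peer).

class Unavailable(Exception):
    pass

class Replica:
    def __init__(self, name, prefer="A"):
        self.name = name
        self.prefer = prefer       # "A" or "C"
        self.store = {}
        self.peer = None
        self.partitioned = False   # True once the network is down

    def write(self, key, value):
        if self.partitioned and self.prefer == "C":
            raise Unavailable(self.name)       # choice 2: sacrifice "A"
        self.store[key] = value
        if self.peer is not None and not self.partitioned:
            self.peer.store[key] = value       # normal synchronous replication

    def read(self, key):
        if self.partitioned and self.prefer == "C":
            raise Unavailable(self.name)       # even stale reads are refused
        return self.store.get(key)

# Choice 1: both replicas prefer "A" -> they stay up but diverge.
n1, n2 = Replica("N1", prefer="A"), Replica("N2", prefer="A")
n1.peer, n2.peer = n2, n1
n1.partitioned = n2.partitioned = True

n1.write("x", 1)
n2.write("x", 2)
assert n1.read("x") != n2.read("x")   # conflicting copies: "A" and "P", no "C"

# Choice 2: N2 prefers "C" -> it rejects requests, N1 stays authoritative.
n2.prefer = "C"
try:
    n2.read("x")
except Unavailable:
    pass                              # consistency kept at the cost of "A"
```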

In all the points above, it was a choice between “C” and “A”. From the literature I have read and understood, I feel that the trade-off is almost always between “C” and “A”. Partition tolerance “P” is not really a choice: I consider it a baseline property under which we trade off between consistency and availability, and design the system accordingly.

If the network gets partitioned, I don’t know of a way where the system can be both consistent “C” and available “A”.

The replicas in the system are unable to communicate with each other, yet strong consistency requires them to propagate changes to each other over the network, and this cannot happen while the network is partitioned.

The above discussion also implies that “C”, “A”, and “P” cannot all be achieved together in a distributed system with shared data, which is in accordance with the theorem.

The notion of consistency has also evolved, leading to systems with weak or eventually consistent models. For example, in some NoSQL databases and key-value stores, updates are allowed to happen on the replicas even in the event of network failure. Background replica synchronization is then done to restore consistency between the data maintained at the replicas. This means that updates eventually propagate to all the replicas, and hence the system is “eventually consistent”. It effectively means that all three properties “C”, “A”, and “P” are provided by the data store; however, the notion and definition of “C” is very different.
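A minimal sketch of this background synchronization, assuming a last-writer-wins rule keyed on timestamps (the class, the `sync` helper, and the timestamps are all illustrative assumptions, not how any particular store works):

```python
# Toy model of eventual consistency: each replica accepts writes
# independently during a partition, tagging them with a timestamp; a
# background anti-entropy pass later reconciles the replicas by keeping
# the newest version of each key (last-writer-wins).

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}   # key -> (timestamp, value)

    def write(self, key, value, ts):
        self.store[key] = (ts, value)

    def read(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

def sync(a, b):
    # Background reconciliation: for every key, keep the newest version.
    for key in set(a.store) | set(b.store):
        newest = max(a.store.get(key, (0, None)),
                     b.store.get(key, (0, None)))
        a.store[key] = b.store[key] = newest

n1, n2 = Replica("N1"), Replica("N2")

# During the partition, both replicas accept writes independently.
n1.write("x", "from-N1", ts=1)
n2.write("x", "from-N2", ts=2)
assert n1.read("x") != n2.read("x")   # temporarily inconsistent

sync(n1, n2)                          # partition heals, replicas converge
assert n1.read("x") == n2.read("x") == "from-N2"
```

The point of the sketch is that availability is never sacrificed and the replicas do converge, but only *eventually*, which is a much weaker “C” than the linearizable one the theorem originally assumed.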

