In this post, I will briefly discuss the optimistic locking technique, its advantages, and potential use cases.
Pessimistic locking protocol
Let’s first discuss the opposite of optimistic locking to set up the context. Pessimistic locking is the main locking paradigm used to guarantee mutual exclusion for a given piece of code that reader and writer threads execute concurrently.
It is called pessimistic because each thread acquires an explicit lock before executing the critical section and releases the lock once it finishes executing it:
- Thread acquires the lock
- If another thread holds the lock, the calling thread is blocked. Once the thread currently owning the lock releases it, thread(s) waiting on the lock are woken up and become eligible to acquire it. Which thread actually acquires the lock depends on OS scheduling and/or any scheduling enforced by the lock library.
- Thread executes the critical section
- Thread releases the lock
Taking the example of a readers-writer lock, pessimistic locking guarantees the following:
- If a writer thread owns an exclusive lock, no other reader/writer threads will be able to acquire the lock for the resource and read/update data.
- If reader thread(s) own a shared lock, no other writer thread will be able to acquire the lock for the resource and update data.
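The pessimistic pattern above can be sketched with Java's standard ReentrantReadWriteLock (the class name and the pair of long fields here are illustrative, not from the original post):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Pessimistic readers-writer locking around a pair of values.
class PessimisticPair {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private long x, y;

    // Writers take the exclusive lock: no readers or other writers may enter.
    void write(long newX, long newY) {
        lock.writeLock().lock();
        try {
            x = newX;
            y = newY;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Readers take the shared lock: many readers may enter concurrently,
    // but no writer can enter while any reader holds the lock.
    long[] read() {
        lock.readLock().lock();
        try {
            return new long[] { x, y };
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Note that every reader pays the lock/unlock cost even when no writer is active; this is exactly the overhead optimistic locking tries to avoid.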
Optimistic locking protocol
The most important aspect of the optimistic locking technique is that readers don’t take any locks. The high-level idea is for the reader to go ahead and read the data, followed by some verification logic to check whether the read was consistent. In other words,
- If the data was read while it was being mutated, then it was not a consistent read.
- If the data was read while it was not being mutated, then the reader got a consistent read.
A common way to implement this is using a monotonically increasing version number. The shared resource is associated with a version number (typically 8 bytes).
- The value starts at 0
- An even value of the version number indicates no one is currently updating the resource.
- An odd value of the version number indicates a writer is currently in the process of updating the data.
Protocol for the reader
- Read the version number — store it as V1.
- Read the data.
- Read the version number — store it as V2.
If V1 and V2 are equal and even, the read was consistent. Otherwise, the reader got an inconsistent read because some writer was concurrently making changes.
The writer could have started making changes even before the reader executed step (1), or after step (1). In either case, the read happened while the data was potentially being changed, so the reader should discard the data read in step (2) and retry the operation.
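The reader protocol above can be sketched in Java; the class name and the pair of long fields are illustrative assumptions. The fields are declared volatile so the data reads are not reordered around the two version reads (AtomicLong reads also have volatile semantics):

```java
import java.util.concurrent.atomic.AtomicLong;

// Reader side of the optimistic protocol over a pair of long fields.
class OptimisticReadPair {
    private final AtomicLong version = new AtomicLong(0); // even = no writer active
    private volatile long x, y; // volatile: keep data reads ordered between the version reads

    OptimisticReadPair(long x, long y) {
        this.x = x;
        this.y = y;
    }

    long[] read() {
        while (true) {
            long v1 = version.get();         // step 1: read version as V1
            long a = x, b = y;               // step 2: read the data
            long v2 = version.get();         // step 3: read version as V2
            if (v1 == v2 && (v1 & 1) == 0) { // consistent: equal and even
                return new long[] { a, b };
            }
            // a writer was active: discard what we read and retry
        }
    }
}
```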
Protocol for the writer
- Read the version number.
- If it is odd (implying a concurrent writer is already in progress), bail out or retry.
- If even, atomically increment the version number by 1 to make it odd:
- If you are using Java, AtomicLong can be used for the version number.
- In any case, the atomic increment is done using CAS (compare-and-swap).
- If the atomic increment fails (implying a concurrent writer succeeded in bumping up the version), bail out or retry.
- Make changes.
- Increment the version number by 1 to make it even.
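The writer protocol can be sketched in Java as follows (again, the class name and the pair of long fields are illustrative; the CAS on the AtomicLong version implements the atomic increment from even to odd):

```java
import java.util.concurrent.atomic.AtomicLong;

// Writer side of the optimistic protocol, using CAS on the version number.
class OptimisticWritePair {
    private final AtomicLong version = new AtomicLong(0); // even = quiescent, odd = write in progress
    private volatile long x, y;

    boolean tryWrite(long newX, long newY) {
        long v = version.get();
        if ((v & 1) != 0) {
            return false;                       // odd: a concurrent writer is in progress, bail out
        }
        if (!version.compareAndSet(v, v + 1)) {
            return false;                       // another writer bumped the version first, bail out
        }
        x = newX;                               // make the changes while the version is odd
        y = newY;
        version.set(v + 2);                     // increment again: back to even, one higher than before
        return true;
    }

    long[] snapshot() { // plain accessor, for illustration/testing only
        return new long[] { x, y };
    }
}
```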
How does the reader read data?
Since readers don’t use any locks, a writer might be actively changing the data while a reader is trying to read it. The three-step protocol mentioned above lets the reader detect whether the read was consistent, but there is still a possibility that the reader crashes (a core dump due to a segmentation fault) while reading the data in step 2, for example by following an invalid pointer.
There are two potential ways to solve this:
- Reader uses memcpy() – the data region that optimistic locking covers is simply copied by the reader in step 2 and not interpreted. Later, if the version matches, the reader knows it was a consistent read and can go ahead and interpret the contents. If the version mismatches, the reader can just throw away the copied data and reattempt.
- Reader is aware of the write pattern – this avoids the memcpy but is very implementation-specific and really depends on the data structure (and its format) being protected by optimistic locking. For example, if the writer changes the shared data in a strict sequence of steps, then maybe the writer can use store barriers to indicate its progress and/or a strong indicator of when it is not safe for the reader to proceed further.
There could be more ways to solve this and I encourage readers to share their thoughts in comments.
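The copy-then-verify approach can be sketched in Java, with System.arraycopy standing in for memcpy (in Java the copy guards against acting on torn or partially written data rather than invalid pointers, and finer memory-ordering details such as fences are elided in this sketch):

```java
import java.util.concurrent.atomic.AtomicLong;

// Copy-then-verify read: the shared region is blindly copied and only
// interpreted after the version check confirms the copy was consistent.
class CopyingReader {
    private final AtomicLong version = new AtomicLong(0); // even = no writer active
    private final long[] shared;

    CopyingReader(long[] initial) {
        this.shared = initial.clone();
    }

    long[] read() {
        long[] copy = new long[shared.length];
        while (true) {
            long v1 = version.get();
            System.arraycopy(shared, 0, copy, 0, shared.length); // blind copy, no interpretation
            long v2 = version.get();
            if (v1 == v2 && (v1 & 1) == 0) {
                return copy; // consistent: safe to interpret the contents now
            }
            // inconsistent copy: throw it away and retry
        }
    }
}
```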
Suitable scenarios and Retry mechanism
Before we discuss the retry mechanism, let’s first try to understand the cases where optimistic locking is suitable. There are two scenarios where we should consider using optimistic locking:
- Writers hold locks for a short period of time. In other words, the critical section is small and can be executed fairly quickly.
- There is less contention/conflict.
High contention will increase the rate of read and write operations that fail (due to version mismatch or atomic increment failures) and have to be reattempted (if the implementation chooses to).
A reader’s reattempt is simply to invoke the read operation again, which executes the three steps of the reader protocol described above.
The writer can typically use a spin lock. In this case, the write protocol is executed in an infinite loop. The specific implementation can choose to sleep briefly between attempts.
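The spin-style retry can be sketched generically in Java; the tryWrite operation is passed in as a hypothetical BooleanSupplier, since its body depends on the data structure being protected:

```java
import java.util.function.BooleanSupplier;

// Spin-style retry for a writer: keep reattempting the write protocol
// until it succeeds. Thread.onSpinWait() (Java 9+) hints to the CPU that
// we are busy-waiting; an implementation could sleep briefly instead.
class SpinRetry {
    static void writeWithRetry(BooleanSupplier tryWrite) {
        while (!tryWrite.getAsBoolean()) {
            Thread.onSpinWait();
        }
    }
}
```

For example, a caller could pass `() -> pair.tryWrite(newX, newY)` where tryWrite implements the version-based write protocol shown earlier.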
Using a spin lock works because of the two assumptions above: (1) the critical section is small and (2) contention is generally low. Also, all modern CPU architectures are multi-core; using a spin lock on a single-processor system does not make sense.
If the critical section is long and takes significant time to execute, then writers reattempting within spin locks will only waste CPU cycles by spinning for a long period. Similarly, if contention is high, the writers will again end up spinning a lot, racing against each other at step 2 or step 3 of their protocol and making reattempts.
It is not necessary to use a spin lock for writers. It is perfectly fine to fail the operation and let the caller of the API decide whether the operation needs to be reattempted.
Advantages and Use Cases
Optimistic locking avoids the overhead of managing explicit locks. If conflicts are rare (and especially if the workload is read-heavy), optimistic locking helps improve performance on multi-core architectures, since reads execute without any lock/unlock and writes use a very lightweight version-based lock.
On the other hand, if there are frequent conflicts then optimistic locking is unlikely to provide any benefits since the cost of constantly retrying the operation (and this actually depends on the context and the nature of write operation) might negate the benefit of avoiding the overhead of lock/unlock operations.
We can leverage optimistic locking on pretty much all data structures that we would use pessimistic locking on: in-memory caches, shared memory regions, and lightweight database transactions are a few of the common use cases of optimistic locking.