Lock-Free Data Structures

While locks help you guard the critical sections of your code against concurrent modification, these blocking data structures come with some inherent problems, such as:

  1. Context switching overhead
  2. Potential deadlocks

Lock-free data structures guarantee that no thread is ever blocked waiting on a lock, because no locks are involved at all. Lock-free structures therefore fall under the category of non-blocking data structures. Let’s first understand the basic building block of such data structures – compare-and-swap.

Compare-and-Swap (CAS)

Compare-and-Swap (CAS), also called compare-and-set, is an atomic operation that updates a memory location’s value only if it matches an expected value, ensuring thread-safe concurrent updates without locks. The operation is typically supported by hardware; for example, the x86 architecture provides the CMPXCHG (compare-and-exchange) instruction for it.

Below is Java-style pseudo-code to show that this is a simple, unremarkable operation. It is not the actual implementation of CAS, just illustrative code that explains how the operation behaves.

// Illustrative only: `value` is the field being guarded. A real CAS is a
// single atomic hardware instruction, not a synchronized block.
public boolean compareAndSwap(int expectedValue, int newValue) {
    synchronized(this) {
        if (value == expectedValue) {
            value = newValue;   // update succeeds only if no other thread changed it
            return true;
        } else {
            return false;
        }
    }
}
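In real Java code you do not write this yourself: hardware-backed CAS is exposed through the java.util.concurrent.atomic classes. The short sketch below (the CasCounter class and its increment method are just illustrative names, not part of any library) retries an increment with AtomicInteger.compareAndSet until it succeeds, which is the standard CAS retry loop.

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int expected = value.get();   // read the current value
            int updated = expected + 1;   // compute the new value
            // CAS succeeds only if no other thread changed value in between;
            // otherwise loop and retry with a freshly read value.
            if (value.compareAndSet(expected, updated)) {
                return updated;
            }
        }
    }
}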

Blocking Thread-Safe Stack

Before looking at a lock-free example, let’s look at the classic way of creating a thread-safe stack – using locks – for a multithreaded environment.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ThreadSafeBlockingStack<T> {
    static class StackNode<V> {
        StackNode<V> next;
        V data;

        StackNode(V data) {
            this.data = data;
        }
    }

    StackNode<T> top;
    ReadWriteLock lock;

    ThreadSafeBlockingStack() {
        top = null;
        lock = new ReentrantReadWriteLock();
    }

    T pop() {
        lock.writeLock().lock();
        try {
            if (top == null) {
                return null;
            }

            StackNode<T> curr = top;
            top = top.next;

            return curr.data;
        } finally {
            lock.writeLock().unlock();
        }
    }

    T peek() {
        lock.readLock().lock();
        try {
            if (top == null) {
                return null;
            }

            return top.data;
        } finally {
            lock.readLock().unlock();
        }
    }

    void push(T data) {
        StackNode<T> curr = new StackNode<T>(data);

        lock.writeLock().lock();
        try {
            curr.next = top;
            top = curr;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Other methods can be similarly implemented
}

Although the above code includes one optimization, using a read-write lock instead of a plain exclusive lock, it is still a blocking stack. For example, if a push operation is in progress, a thread calling peek is blocked until the push completes.
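To see that blocking behavior in isolation, here is a small hypothetical sketch (the ReadWriteBlockingDemo class is not part of the stack above) that holds the write lock in one thread, mimicking an in-progress push, and shows that another thread cannot acquire the read lock until the write lock is released.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteBlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        lock.writeLock().lock();   // simulate a push in progress
        Thread reader = new Thread(() -> {
            // tryLock fails immediately because the write lock is held by
            // another thread -- exactly why peek() would block here.
            boolean acquired = lock.readLock().tryLock();
            System.out.println("read lock acquired while write held? " + acquired); // false
            if (acquired) {
                lock.readLock().unlock();
            }
        });
        reader.start();
        reader.join();
        lock.writeLock().unlock(); // the "push" completes, readers may now proceed
    }
}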

Lock-Free Stack

Now, let’s create a lock-free version of the above code. Below is the non-blocking, thread-safe stack.

import java.util.concurrent.atomic.AtomicReference;

public class LockFreeStack<T> {
    static class StackNode<V> {
        StackNode<V> next;
        V data;

        StackNode(V data) {
            this.data = data;
        }
    }

    AtomicReference<StackNode<T>> top;

    LockFreeStack() {
        top = new AtomicReference<>();
    }

    T pop() {
        while (true) {
            StackNode<T> curr = top.get();

            if (curr == null) {
                return null;
            }

            // Try to swing top from curr to curr.next. If another thread
            // changed top in the meantime, the CAS fails and we retry.
            if (top.compareAndSet(curr, curr.next)) {
                return curr.data;
            }
        }
    }

    T peek() {
        // Read top once so that a concurrent pop cannot change it between
        // the null check and the dereference.
        StackNode<T> curr = top.get();

        if (curr == null) {
            return null;
        }

        return curr.data;
    }

    void push(T data) {
        StackNode<T> newTop = new StackNode<T>(data);

        while (true) {
            StackNode<T> oldTop = top.get();
            newTop.next = oldTop;

            // Publish the new node only if top has not moved; otherwise retry.
            if (top.compareAndSet(oldTop, newTop)) {
                return;
            }
        }
    }

    // Other methods can be similarly implemented
}

You can observe that no locks are used, so no client thread of this stack is ever blocked. This is different from the blocking, lock-based stack above, which relies on a read-write lock.
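As a quick sanity check, here is a minimal usage sketch, assuming the LockFreeStack class above is compiled in the same package (the LockFreeStackDemo class name and the thread and element counts are just illustrative). Several threads push concurrently, and the main thread then pops everything back.

import java.util.concurrent.CountDownLatch;

public class LockFreeStackDemo {
    public static void main(String[] args) throws InterruptedException {
        LockFreeStack<Integer> stack = new LockFreeStack<>();
        int threads = 4, perThread = 1_000;
        CountDownLatch done = new CountDownLatch(threads);

        // Several producers push concurrently; CAS retries resolve the contention.
        for (int t = 0; t < threads; t++) {
            new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    stack.push(i);
                }
                done.countDown();
            }).start();
        }
        done.await();

        // All pushed elements should be present, regardless of interleaving.
        int count = 0;
        while (stack.pop() != null) {
            count++;
        }
        System.out.println(count); // expected: 4000
    }
}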

Further Study

If you want to look at some examples in Java, you can start with:

  1. LinkedBlockingQueue is a blocking queue which uses a ReentrantLock.
  2. ConcurrentLinkedQueue is a lock-free version of a queue (see the short sketch below).
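For a quick feel of the non-blocking behavior, the short sketch below (the NonBlockingQueueDemo class is just an illustrative name) uses ConcurrentLinkedQueue, whose offer and poll never block; poll simply returns null when the queue is empty, whereas LinkedBlockingQueue’s take would wait for an element.

import java.util.concurrent.ConcurrentLinkedQueue;

public class NonBlockingQueueDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

        queue.offer("first");             // lock-free enqueue
        queue.offer("second");

        System.out.println(queue.poll()); // "first"
        System.out.println(queue.poll()); // "second"
        System.out.println(queue.poll()); // null -- queue is empty, but the call never blocks
    }
}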