Linked Lists and the Bugs That Lurk in Pointer Logic

Lesson 4 · 72 min · 8,680 chars

Learning Objectives

  • Diagnose and fix pointer manipulation bugs in linked list implementations by tracing node references step by step
  • Implement a doubly linked list with correct insertion, deletion, and edge case handling using sentinel nodes
  • Apply the four-pointer rule to verify that every doubly linked list mutation updates all necessary references
  • Evaluate when linked lists provide genuine benefits over Python lists, deques, and OrderedDict in production scenarios
  • Build an LRU cache that correctly combines a hash table with an ordered structure for O(1) access and eviction

Linked Lists and the Bugs That Lurk in Pointer Logic

In Lesson 3, you learned that stacks and queues are access policies, not just containers. You saw how deque uses a block-based doubly linked structure internally to give you O(1) operations at both ends. Today, we rip the cover off that abstraction and build linked lists ourselves. Understanding pointer logic is the difference between code that works and code that silently corrupts your data at 3 AM on a Saturday.

Here is the uncomfortable truth about linked lists in Python: you almost never need to build one from scratch. Python's list (the dynamic array from Lesson 1, remember the over-allocation strategy?) handles most sequential data beautifully. Python's collections.OrderedDict already gives you a linked hash map. So why are we here? Two reasons. First, linked lists show up in every technical interview, and the pointer bugs lurking in this lesson are the same bugs that lurk in tree traversals, graph algorithms, and every recursive data structure from Lessons 5 through 10. Second, there are genuine production use cases, like LRU caches, where understanding the linked list underneath OrderedDict lets you diagnose problems that would otherwise be completely opaque.

I have a confession. During my second year at Google, I wrote a custom doubly linked list for a request-ordering system. It worked perfectly in unit tests. It passed code review. It ran in staging for two weeks. Then it hit production traffic, and a specific sequence of rapid insertions and deletions caused an infinite loop that pinned a CPU core at 100% and triggered an outage alert at 2 AM. The bug? I forgot to update a single prev pointer during a deletion edge case. One missing line. That is what linked list bugs look like in the wild: invisible until they are catastrophic.

This lesson hands you code that is already broken. Your job is to find five bugs hiding in a linked list implementation before I walk you through each fix. This is how I wish someone had taught me. You learn pointer logic ten times faster by debugging it than by writing it from scratch.


The Buggy Code

Below is a linked list module that a (fictional) junior engineer submitted for code review. It implements a singly linked list, a doubly linked list, and an LRU cache that combines the doubly linked list with a dictionary for O(1) lookups. The code is meant to support the NetProbe project's query cache: when a user asks for mutual friends between two popular nodes, the result gets cached so the second request is instant instead of requiring another graph traversal.

The code looks plausible at first glance. Variable names are reasonable, the structure follows standard patterns, and there are even docstrings. But there are exactly five bugs, ranging from "obvious if you trace it carefully" to "you would only catch this after it corrupts data in production." Before we look at the code, here is exactly what each method is supposed to do:

  • SinglyLinkedList.delete(value) should remove the first node containing that value.
  • DoublyLinkedList.insert_after(target, value) should insert a new node immediately after the node containing target.
  • DoublyLinkedList.delete(value) should remove a node and correctly relink both prev and next pointers.
  • LRUCache.get(key) should return the cached value and move that entry to the front (most recently used).
  • LRUCache.put(key, value) should add or update an entry, evicting the least recently used entry if the cache is full.

Read the code carefully. Try to find all five bugs before scrolling to the Bug Hunt section. I am serious about this. Open a text editor, paste this code, and try running it with a few test cases. The bugs will reveal themselves through crashes, wrong output, or, worst of all, silent data corruption.

📌 Remember: From Lesson 2, you know that a dict gives you O(1) lookup by key. The LRU cache below pairs that dict speed with the linked list's O(1) insertion and deletion. This combination is the entire point of a linked hash map.

Python
# ============================================================
# linked_list_buggy.py: five bugs are hiding in this file
# Singly linked list, doubly linked list, and LRU cache
# ============================================================


# --- Node classes ---

class SNode:
    """Node for a singly linked list."""

    def __init__(self, value):
        self.value = value
        self.next = None


class DNode:
    """Node for a doubly linked list."""

    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None


# --- Singly linked list ---

class SinglyLinkedList:
    """Basic singly linked list with append, delete, and display."""

    def __init__(self):
        self.head = None

    def append(self, value):
        """Add a value to the end of the list."""
        new_node = SNode(value)

        if self.head is None:
            self.head = new_node
            return

        current = self.head
        while current.next is not None:
            current = current.next
        current.next = new_node

    def delete(self, value):
        """Remove the first node containing value."""
        if self.head is None:
            return

        # BUG 1 is in this method
        if self.head.value == value:
            self.head = self.head.next
            return

        current = self.head
        while current.next is not None:
            if current.next.value == value:
                current = current.next  # Move to the node to delete
                current.next = current.next.next  # Skip over it
                return
            current = current.next

    def to_list(self):
        """Convert linked list to a Python list for easy viewing."""
        result = []
        current = self.head

        while current:
            result.append(current.value)
            current = current.next

        return result


# --- Doubly linked list ---

class DoublyLinkedList:
    """Doubly linked list with sentinel head/tail for cleaner edge cases."""

    def __init__(self):
        # Sentinel nodes eliminate most None checks
        self.head = DNode("HEAD_SENTINEL")
        self.tail = DNode("TAIL_SENTINEL")
        self.head.next = self.tail
        self.tail.prev = self.head

    def append(self, value):
        """Add a value before the tail sentinel."""
        new_node = DNode(value)
        prev_node = self.tail.prev

        prev_node.next = new_node
        new_node.prev = prev_node
        new_node.next = self.tail
        self.tail.prev = new_node

    def insert_after(self, target_value, new_value):
        """Insert new_value after the first node containing target_value."""
        current = self.head.next

        while current != self.tail:
            if current.value == target_value:
                new_node = DNode(new_value)

                # BUG 2 is in this block
                new_node.next = current.next
                new_node.prev = current
                current.next = new_node
                # Link complete... or is it?
                return True

            current = current.next

        return False  # target not found

    def delete(self, value):
        """Remove the first node containing value."""
        current = self.head.next

        while current != self.tail:
            if current.value == value:
                # BUG 3 is in this block
                prev_node = current.prev
                next_node = current.next
                prev_node.next = next_node
                current.prev = prev_node
                return True

            current = current.next

        return False

    def to_list(self):
        """Convert to Python list, excluding sentinels."""
        result = []
        current = self.head.next

        while current != self.tail:
            result.append(current.value)
            current = current.next

        return result

    def to_list_reverse(self):
        """Traverse backward to verify prev pointers are correct."""
        result = []
        current = self.tail.prev

        while current != self.head:
            result.append(current.value)
            current = current.prev

        return result


# --- LRU cache (doubly linked list + dict) ---

class LRUCache:
    """Least Recently Used cache with O(1) get and put."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> DNode
        self.dll = DoublyLinkedList()  # Tracks usage order

    def _move_to_front(self, node):
        """Move an existing node to the front (most recently used)."""
        # Remove from current position
        node.prev.next = node.next
        node.next.prev = node.prev

        # Insert right after head sentinel
        first = self.dll.head.next
        self.dll.head.next = node
        node.prev = self.dll.head
        node.next = first
        first.prev = node

    def _remove_last(self):
        """Remove the least recently used entry (just before tail)."""
        # BUG 5 is in this method
        last = self.dll.tail.prev

        if last == self.dll.head:
            return None  # Cache is empty

        last.prev.next = self.dll.tail
        self.dll.tail.prev = last.prev

        return last.value  # Need to return the key, not the value

    def get(self, key):
        """Get value by key. Returns None if not found."""
        # BUG 4 is in this method
        if key in self.cache:
            node = self.cache[key]
            return node.value[1]  # value is stored as (key, value) tuple

        return None

    def put(self, key, value):
        """Add or update a cache entry."""
        if key in self.cache:
            node = self.cache[key]
            node.value = (key, value)
            self._move_to_front(node)
            return

        # Evict if at capacity
        if len(self.cache) >= self.capacity:
            evicted = self._remove_last()

            if evicted is not None:
                del self.cache[evicted[0]]  # evicted is (key, value)

        # Add new entry
        new_node = DNode((key, value))
        self.dll.head.next.prev = new_node
        new_node.next = self.dll.head.next
        self.dll.head.next = new_node
        new_node.prev = self.dll.head
        self.cache[key] = new_node

    def display(self):
        """Show cache contents in usage order (most recent first)."""
        items = self.dll.to_list()
        return [(k, v) for k, v in items]


# --- Test script ---

if __name__ == "__main__":
    print("=== Singly Linked List ===")
    sll = SinglyLinkedList()

    for val in [10, 20, 30, 40]:
        sll.append(val)

    print(f"Before delete: {sll.to_list()}")
    # Expected: [10, 20, 30, 40]

    sll.delete(20)
    print(f"After delete 20: {sll.to_list()}")
    # Expected: [10, 30, 40]

    print("\n=== Doubly Linked List ===")
    dll = DoublyLinkedList()

    for val in ["A", "B", "C"]:
        dll.append(val)

    print(f"Forward:  {dll.to_list()}")
    # Expected: ['A', 'B', 'C']

    dll.insert_after("A", "X")
    print(f"After insert X after A (forward):  {dll.to_list()}")
    # Expected: ['A', 'X', 'B', 'C']

    print(f"After insert X after A (reverse):  {dll.to_list_reverse()}")
    # Expected: ['C', 'B', 'X', 'A']

    dll.delete("X")
    print(f"After delete X (forward):  {dll.to_list()}")
    # Expected: ['A', 'B', 'C']

    print(f"After delete X (reverse):  {dll.to_list_reverse()}")
    # Expected: ['C', 'B', 'A']

    print("\n=== LRU Cache ===")
    lru = LRUCache(3)
    lru.put("q1", "result_1")
    lru.put("q2", "result_2")
    lru.put("q3", "result_3")

    print(f"Cache: {lru.display()}")
    # Expected: [('q3', 'result_3'), ('q2', 'result_2'), ('q1', 'result_1')]

    val = lru.get("q1")
    print(f"Get q1: {val}")
    # Expected: result_1

    print(f"Cache after get: {lru.display()}")
    # Expected: [('q1', 'result_1'), ('q3', 'result_3'), ('q2', 'result_2')]

    lru.put("q4", "result_4")
    print(f"Cache after adding q4 (should evict q2): {lru.display()}")
    # Expected: [('q4', 'result_4'), ('q1', 'result_1'), ('q3', 'result_3')]

    # Output when all bugs are fixed:
    # === Singly Linked List ===
    # Before delete: [10, 20, 30, 40]
    # After delete 20: [10, 30, 40]
    #
    # === Doubly Linked List ===
    # Forward:  ['A', 'B', 'C']
    # After insert X after A (forward):  ['A', 'X', 'B', 'C']
    # After insert X after A (reverse):  ['C', 'B', 'X', 'A']
    # After delete X (forward):  ['A', 'B', 'C']
    # After delete X (reverse):  ['C', 'B', 'A']
    #
    # === LRU Cache ===
    # Cache: [('q3', 'result_3'), ('q2', 'result_2'), ('q1', 'result_1')]
    # Get q1: result_1
    # Cache after get: [('q1', 'result_1'), ('q3', 'result_3'), ('q2', 'result_2')]
    # Cache after adding q4 (should evict q2): [('q4', 'result_4'), ('q1', 'result_1'), ('q3', 'result_3')]

Bug Hunt: Find Them All

Five bugs. No crashes yet. That is the whole problem. I have ordered these bugs from most obvious to most insidious. For each one, I give you a hint you can reveal, and then the full explanation and fix. Resist the temptation to jump straight to the answers. The act of tracing pointer references in your head, one node at a time, is exactly the muscle you need to build.

Before we dive in, let me give you a mental model for debugging pointer logic. Think of each node as a train car. Every car is coupled to the one ahead of it (that is the next pointer) and possibly to the one behind it (prev pointer). To remove a car from the middle of the train, the cars on either side must recouple directly, reaching past the gap to latch onto each other. If you forget one coupling, the train splits in two. If a coupling loops back to an earlier car instead of forward, the train runs in circles until something melts. Every single linked list bug is one of these three failures: a split link, a missing link, or a circular link.
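The circular link is the failure mode that produces the "runs in circles until something melts" behavior from the opening story. Here is a minimal sketch (the Node class and the three values are illustrative, not from the buggy module) showing how Floyd's tortoise-and-hare check detects a cycle without traversing forever:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

# Couple three train cars, then make the buggy circular link:
# the last car latches back onto the first.
a, b, c = Node("A"), Node("B"), Node("C")
a.next, b.next = b, c
c.next = a  # BUG: circular link

def has_cycle(head):
    """Floyd's tortoise-and-hare: the fast pointer laps the slow one
    if and only if the chain loops back on itself."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

print(has_cycle(a))  # True: the train runs in circles
c.next = None        # Recouple correctly
print(has_cycle(a))  # False: the chain now terminates
```

A naive `while current:` loop over the cyclic version never exits, which is exactly how a single wrong pointer pins a CPU core at 100%.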

⚠️ Common Pitfall: The most dangerous linked list bugs do not crash your program. They silently corrupt the data structure so that forward traversal looks fine, but backward traversal, or a later deletion, hits a stale pointer and takes a wrong turn. Always test both directions when working with a doubly linked list.


Bug 1: The Singly Linked List Delete That Destroys Data

This bug is in SinglyLinkedList.delete(), and it is the single most common linked list mistake I see in interviews. Look inside the while loop of the delete method. When we find the node to delete, we do current = current.next to move to it, then current.next = current.next.next to skip over it. Do you see the problem?

Hint: What does current point to after the line current = current.next?

After current = current.next, the variable current now points to the node we want to delete, not the node before it. When we then do current.next = current.next.next, we are modifying the wrong node's next pointer. We are skipping the node after the one we want to delete.

Even worse, if the node to delete is the second-to-last node, current.next could be the last node, and current.next.next is None. We would accidentally unlink the last node instead of the target.

View Fix

The fix is simple but critical. Do not advance current to the node being deleted. Instead, stay on the node before it and relink its next pointer:

Python
# WRONG (Bug 1):
current = current.next        # Moved to the wrong node!
current.next = current.next.next

# CORRECT:
current.next = current.next.next   # Skip over the target node

Why this happens: New programmers think "I need to go to the node I want to delete." But in a singly linked list, you cannot unlink a node from outside it. You must stand on the previous node and redirect its next pointer. This is the fundamental asymmetry of singly linked lists, and it is exactly why doubly linked lists exist.

Testing that would catch this: A simple unit test that deletes a middle element and checks the resulting list. If you had called sll.delete(20) on [10, 20, 30, 40] and asserted the result equals [10, 30, 40], this bug would show up immediately as [10, 20, 40] instead, because we skip node 30 rather than node 20.
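That regression test takes only a few lines. Below is a self-contained sketch with the corrected delete logic plus the assertion that would have caught Bug 1; it mirrors the lesson's SinglyLinkedList but is a stand-in, not the buggy module itself:

```python
class SNode:
    def __init__(self, value):
        self.value = value
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def append(self, value):
        node = SNode(value)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next:
            cur = cur.next
        cur.next = node

    def delete(self, value):
        if self.head is None:
            return
        if self.head.value == value:
            self.head = self.head.next
            return
        cur = self.head
        while cur.next:
            if cur.next.value == value:
                cur.next = cur.next.next  # Stay on the PREVIOUS node
                return
            cur = cur.next

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out

sll = SinglyLinkedList()
for v in [10, 20, 30, 40]:
    sll.append(v)
sll.delete(20)
assert sll.to_list() == [10, 30, 40]  # The buggy version yields [10, 20, 40]
```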


Bug 2: The Missing Backward Link in insert_after

This one is subtler, hiding inside the doubly linked list's insert_after method. We correctly set new_node.next and new_node.prev, and we update current.next to point to the new node. But we forgot something. After insertion, the node that was after current still has its prev pointer aimed at current, not at new_node.

Hint: What does current.next.prev point to after the insertion?

After we do current.next = new_node, the old current.next (call it after_node) is now new_node.next. But after_node.prev still points to current, not to new_node. The forward chain is correct (current -> new_node -> after_node), but the backward chain is broken (after_node.prev -> current, skipping new_node entirely).

View Fix

We need one more line to update the backward pointer of the node that comes after the new one:

Python
# WRONG (Bug 2), missing the prev update:
new_node.next = current.next
new_node.prev = current
current.next = new_node
# after_node.prev still points to current!

# CORRECT, capture after_node first, then update:
after_node = current.next       # Save reference BEFORE we overwrite it
new_node.next = after_node
new_node.prev = current
current.next = new_node
after_node.prev = new_node      # Fix the backward link

Why this happens: Forward traversal (to_list()) works perfectly even with this bug, because it only follows next pointers. The bug only reveals itself when you traverse backward (to_list_reverse()) or when a later deletion tries to use the prev pointer and gets the wrong node. This is exactly the class of bug that bit me at Google, because our tests only checked forward traversal.

💡 Key Insight: In a doubly linked list, every insertion or deletion must update exactly four pointers: the new or removed node's prev and next, plus the neighboring nodes' corresponding pointers back. If you update fewer than four, you have a bug. Count them every time.


Bug 3: The Doubly Linked List Delete That Forgets next_node.prev

This bug lives in DoublyLinkedList.delete() and is the mirror image of Bug 2. Look at the delete block. We correctly get prev_node and next_node, and we set prev_node.next = next_node. But then we do current.prev = prev_node, which updates the deleted node's pointer instead of next_node.prev. The deleted node's pointers do not matter anymore. What matters is that next_node.prev now points back to prev_node, closing the gap.

Hint: Which node's prev pointer actually needs updating?

next_node.prev must be updated to point to prev_node, not current.prev. The line current.prev = prev_node is modifying a node we are about to discard. It has zero effect on the list's integrity.

View Fix
Python
# WRONG (Bug 3):
prev_node.next = next_node
current.prev = prev_node   # Updating the DELETED node, useless!

# CORRECT:
prev_node.next = next_node
next_node.prev = prev_node  # Fix the backward link

Why this happens: It is a copy-paste error combined with sloppy thinking. The developer was thinking "I need to update prev" and grabbed the wrong variable. In CPython, the deleted node will eventually be garbage collected, so setting its prev has no effect. But if you were in a language with manual memory management, like C, this bug could also be a memory leak.

This is the exact same pattern as Bug 2: a broken backward link. Internalize this. Every time you touch a doubly linked list, ask yourself: "Did I update exactly four pointers?" For deletion: prev_node.next, next_node.prev, and optionally clearing the deleted node's pointers for safety, not correctness. For insertion: new_node.prev, new_node.next, prev_node.next, after_node.prev.

🤔 Think about it: Why do sentinel nodes (the HEAD_SENTINEL and TAIL_SENTINEL in our doubly linked list) simplify edge case handling?

View Answer

Without sentinels, every insertion and deletion must check "is this the head?" and "is this the tail?" separately, because head.prev and tail.next are None and you cannot call .next or .prev on None. Sentinels guarantee that every real node always has a valid prev and next neighbor, eliminating these None checks entirely. The sentinels themselves are never deleted, so you never have to worry about the list's anchor points disappearing. The Linux kernel's intrusive linked list (list.h), used throughout one of the most scrutinized codebases in the world, is a circular list whose head element doubles as a sentinel for exactly this reason.


Bug 4: The LRU Cache That Forgets What You Just Used

This one starts with a misdirection, and the misdirection is intentional. Look at _remove_last(). The comment at the bottom says "Need to return the key, not the value." That sounds like a bug: we are returning last.value, which is the full (key, value) tuple stored in the node. But now look at the caller in put(): it does del self.cache[evicted[0]], which correctly unpacks index 0 to get the key. So the return value is actually fine. The comment is wrong, not the code. File that away; we will come back to it.

The real Bug 4 is in get(). Read it again. It finds the node, returns the value, and... stops. Do you see what is missing?

Hint: What makes an LRU cache "LRU"?

The "Least Recently Used" eviction policy only works if every access, both get and put, moves the accessed entry to the front of the list. If get() does not call _move_to_front(), then accessing an entry does not update its recency. The cache will evict entries based on insertion order, not usage order, which makes it a FIFO cache. It is not broken. It is just not an LRU cache.

View Fix
Python
# WRONG (Bug 4, in the get method):
def get(self, key):
    if key in self.cache:
        node = self.cache[key]
        return node.value[1]   # Returns the value, but doesn't move to front!

# CORRECT:
def get(self, key):
    if key in self.cache:
        node = self.cache[key]
        self._move_to_front(node)  # Mark as recently used!
        return node.value[1]

Why this happens: The developer built the data structure correctly but missed the algorithm. The linked list works. The dict works. But the interaction between them, the policy that makes it an LRU cache, was incomplete. Unit tests for individual methods will not catch this. You need an integration test that executes a full sequence of put, put, put, get, put operations and verifies which entry gets evicted.

⚠️ Common Pitfall: A correct data structure with an incorrect policy is the hardest bug to catch, because every individual operation passes its unit tests. The bug only appears in aggregate behavior over time. This is why LRU cache implementations need sequence-level integration tests, not just tests for individual get/put calls.
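Here is what a sequence-level test looks like in practice. The TinyLRU class below is an illustrative reference implementation built on OrderedDict, not the buggy cache above; the point is the operation sequence, which is exactly what separates LRU behavior from FIFO behavior:

```python
from collections import OrderedDict

class TinyLRU:
    """Reference LRU on OrderedDict, used only to demonstrate the test."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # The recency update Bug 4 forgot
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # Evict least recently used

# Sequence-level test: the get("q1") must rescue q1 from eviction.
cache = TinyLRU(3)
cache.put("q1", 1)
cache.put("q2", 2)
cache.put("q3", 3)
assert cache.get("q1") == 1
cache.put("q4", 4)              # Must evict q2, the least recently USED
assert cache.get("q2") is None  # A FIFO cache would evict q1 instead
assert cache.get("q1") == 1
```

No single-operation unit test can fail here; only the five-step sequence exposes a cache that ignores access recency.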


Bug 5: The Comment That Will Create a Future Bug

Now back to the misleading comment in _remove_last(). The method returns last.value, which is the full (key, value) tuple. The comment says "Need to return the key, not the value." The code happens to work because the caller unpacks the tuple with [0]. But this is a trap waiting to spring on the next developer who reads that comment, believes it, and refactors _remove_last() to "fix" it by returning just the key. Suddenly, the caller's evicted[0] indexing breaks, and the bug is invisible until the cache silently fails to evict old entries.

This is a semantic mismatch between code and comment, and it is a real category of production bug.

Hint: If code and comment disagree, which do you fix?

Fix whichever one is wrong, but never leave the contradiction. Read the caller to determine what _remove_last() should actually return, then make the code and the comment agree. Here, the code works correctly but the comment is misleading. The cleanest fix is to make the return value unambiguous rather than relying on the caller to remember what index to use.

View Fix

Make the return value explicit and consistent with what the method actually does:

Python
# CLEARER version of _remove_last:
def _remove_last(self):
    """Remove the least recently used entry. Returns the key for cache cleanup."""
    last = self.dll.tail.prev

    if last == self.dll.head:
        return None

    last.prev.next = self.dll.tail
    self.dll.tail.prev = last.prev

    key, value = last.value  # Unpack the tuple explicitly
    return key               # Return just the key, matching the docstring

And update the caller in put():

Python
# Updated caller:
evicted_key = self._remove_last()
if evicted_key is not None:
    del self.cache[evicted_key]  # Cleaner, no indexing

💡 Key Insight: When a comment contradicts the code, one of them is wrong. Fix whichever one should change, but never leave the contradiction standing. In production, the next developer who reads that code will trust the comment and introduce a real bug where there was only a latent one.


Fix and Verify: The Corrected Implementation

All five bugs fall into one of three categories, and these three categories account for roughly 90% of all linked list bugs I have seen in production code, interview solutions, and student submissions over the past decade.

| Bug Category | What Happens | Bugs from Our Code | How to Detect |
| --- | --- | --- | --- |
| Missing pointer update | One direction of traversal breaks silently | Bug 2 (insert_after missing prev), Bug 3 (delete updating wrong node) | Test both forward and backward traversal after every mutation |
| Wrong node reference | Operation affects the wrong node | Bug 1 (advancing past target before relinking) | Trace pointer assignments on paper with a 3-node example |
| Semantic mismatch | Code works but means something different than intended | Bug 4 (get without move_to_front), Bug 5 (return value vs. comment) | Integration tests that verify aggregate behavior, not just single operations |

The first category, missing pointer updates, is the most common. It is also the most insidious because forward traversal still works. Remember the four-pointer rule for doubly linked lists: every mutation must update exactly four pointers (two for singly linked lists). Count them every time.

The second category, wrong node reference, is the most common mistake in singly linked list code. Because you can only traverse forward, you must always keep a reference to the previous node when deleting. This is awkward and error-prone, which is one real reason production code almost always uses doubly linked lists.

The third category, semantic mismatch, is the hardest to catch with unit tests. The LRU cache's get method returned the correct value every time. No unit test for get alone would fail. The bug only appears when you test the eviction order after a sequence of gets and puts. This is why I always write sequence-level tests for stateful data structures.

Now let me connect this to something you already know. Remember from Lesson 1 how Python's list.insert(0, x) is O(n) because it shifts every element? A linked list's insert-at-head is O(1) because it just updates pointers. That is the theoretical advantage. The pointer manipulation is exactly where these bugs hide. You trade simple array indexing for complex reference management. That is not always a good trade.

Peer Teaching Prompt

Explain linked list pointer manipulation to a friend who has never heard of it, in exactly three sentences.

Model Answer

A linked list is a chain of objects where each object holds a value and a reference (pointer) to the next object in the chain. To insert or delete, you redirect these references so the chain stays connected, like recoupling train cars after removing one from the middle. The tricky part is that you must update references in the right order, because once you overwrite a reference, you lose access to whatever it used to point to.


Prevention Patterns: How to Never Write These Bugs Again

Preventing linked list bugs is about discipline, not cleverness. After my 2 AM incident at Google, my team adopted a set of patterns that reduced our linked list bugs to nearly zero. Here they are, ranked by impact.

Pattern 1: Always use sentinel nodes for doubly linked lists. Sentinels eliminate the if head is None and if node is tail branches that account for a large portion of bugs. Our DoublyLinkedList already uses them. If you are ever tempted to write a doubly linked list without sentinels, reconsider. The two extra nodes cost 100 bytes of memory and save hours of debugging.

Pattern 2: Write a _verify() method that checks structural integrity. This method traverses the list, confirming that every node.next.prev == node and every node.prev.next == node. Call it after every mutation during development, and gate it behind a debug flag in production.
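A minimal sketch of such a check, assuming the sentinel-based design from this lesson (the DNode and DoublyLinkedList below are stripped-down stand-ins, not the full implementation):

```python
class DNode:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = DNode("HEAD_SENTINEL")
        self.tail = DNode("TAIL_SENTINEL")
        self.head.next = self.tail
        self.tail.prev = self.head

    def append(self, value):
        node = DNode(value)
        node.prev, node.next = self.tail.prev, self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def _verify(self):
        """Walk forward from head to tail; confirm every coupling is
        mirrored. Raises AssertionError at the first stale pointer."""
        current = self.head
        while current is not self.tail:
            assert current.next is not None, "forward chain broken"
            assert current.next.prev is current, (
                f"prev pointer mismatch after {current.value!r}"
            )
            current = current.next

dll = DoublyLinkedList()
for v in ["A", "B", "C"]:
    dll.append(v)
    dll._verify()  # Call after EVERY mutation during development
```

Because the walk checks current.next.prev at each step, a single forward pass verifies both directions: a Bug 2 or Bug 3 style broken backward link trips the assertion immediately instead of surfacing weeks later.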

Pattern 3: Draw it before you code it. I know this sounds like advice a professor gives and nobody follows. But I genuinely sketch three boxes (or open a notes app) and draw arrows before writing any pointer manipulation code. The visual trace catches bugs before they exist. Three nodes is the minimum useful case: it covers head, middle, and tail scenarios.

Pattern 4: Test both directions. For doubly linked lists, always assert that to_list() and to_list_reverse() are consistent. If forward gives you [A, B, C] and reverse gives you [C, A], you have a broken prev pointer somewhere. This single test pattern would have caught Bugs 2 and 3 immediately.

| Bug Type | Prevention Technique | Tool or Library |
| --- | --- | --- |
| Missing pointer update | Four-pointer checklist (count updates per mutation) | Code review checklist |
| Wrong node reference | Three-node paper trace before coding | Pen and paper (seriously) |
| Broken backward links | Assert forward and reverse traversals match | _verify() method in debug mode |
| Semantic policy bugs | Sequence-level integration tests | pytest with parametrized operation sequences |
| Stale comments | Delete comments that describe "what," keep only "why" | pylint fixme checks, pre-commit hooks |

Now here is my opinionated take on when you should even bother with any of this. In Python, you should almost never write a linked list from scratch in production code. Use collections.deque for double-ended operations (you saw it in Lesson 3). Use collections.OrderedDict for ordered key-value pairs with O(1) deletion. Use functools.lru_cache for function-level caching. The only time I write a custom linked list in Python is when I need a very specific combination of operations that no standard library class provides, and even then I reach for it reluctantly.
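For function-level caching, the built-in decorator is usually all you need. A quick sketch; the mutual_friends function here is a made-up stand-in for NetProbe's expensive graph query, not real project code:

```python
from functools import lru_cache

@lru_cache(maxsize=3)
def mutual_friends(user_a, user_b):
    # Stand-in for an expensive graph traversal.
    return f"result_for_{user_a}_{user_b}"

mutual_friends("alice", "bob")      # Miss: computed and cached
mutual_friends("alice", "bob")      # Hit: served from the cache
info = mutual_friends.cache_info()  # CacheInfo(hits=1, misses=1, ...)
print(info.hits, info.misses)
```

You get the entire dict-plus-linked-list machinery from this lesson, plus eviction, in one decorator line, and cache_info() gives you hit-rate observability for free.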

Deep Dive: How OrderedDict Uses a Linked List Internally

Python's OrderedDict (and since Python 3.7, regular dict too, as we covered in Lesson 2 with compact dict and insertion ordering) maintains insertion order. But OrderedDict goes further: it supports move_to_end() and popitem(last=True/False), which are O(1) operations. Internally, CPython's OrderedDict uses a doubly linked list alongside the hash table. Each entry in the hash table has prev and next references that form a chain in insertion order. When you call move_to_end(key), it unlinks the node and relinks it at the end, exactly like our _move_to_front method. The CPython source code for this lives in Lib/collections/__init__.py and is worth reading. It uses the same sentinel pattern we use here.

This is why OrderedDict is the standard Python answer for an LRU cache. In fact, functools.lru_cache uses a circular doubly linked list internally in its C implementation. We are building our own in this lesson to understand the mechanics, but in production, use the built-in.
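You can watch lru_cache's internal recency tracking at work through cache_info(). A small sketch with a deliberately tiny cache:

```python
from functools import lru_cache


@lru_cache(maxsize=2)
def square(n):
    return n * n


square(2)   # miss: fills slot 1
square(3)   # miss: fills slot 2
square(2)   # hit: 2 becomes most recently used
square(4)   # miss: evicts 3, the least recently used entry

info = square.cache_info()
assert (info.hits, info.misses) == (1, 3)
assert info.currsize == 2
```

Because square(2) was touched most recently, the eviction triggered by square(4) removes 3, not 2. That is exactly the move-to-front policy, handled by the circular doubly linked list in the C implementation.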


Decision Matrix: Linked List vs. List vs. OrderedDict

Choosing the right data structure matters more than implementing it perfectly. A scenario I see constantly: a developer needs a collection where they can quickly remove items from the middle, so they reach for a linked list because that is what the textbook recommends. But in Python, list.remove(x) is O(n) for the search and the shift, while a linked list is O(n) for the search and O(1) for the actual unlinking. For small collections (under 1000 elements), the list version is faster because of CPU cache locality. The linked list's scattered heap allocations destroy cache performance.

The break-even point depends on your hardware, but in my benchmarks on modern x86, a Python linked list only beats a Python list for middle-deletion when the collection exceeds roughly 10,000 elements. Even then, the win is marginal because Python's object overhead per node (each DNode is about 100+ bytes including the object header, the __dict__, and the three attribute references) swamps the theoretical gains.
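You can measure the per-node overhead yourself. This sketch defines a DNode like the one discussed above; the exact byte counts vary by CPython version and platform, so the assertion is deliberately loose.

```python
import sys


class DNode:
    """A typical doubly linked list node with three attributes."""

    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None


node = DNode(42)

# The node object itself plus its per-instance __dict__. Exact numbers
# depend on the CPython version, but the total dwarfs the single
# 8-byte pointer a Python list spends per element.
overhead = sys.getsizeof(node) + sys.getsizeof(node.__dict__)
assert overhead > 8
```

Adding `__slots__ = ("value", "prev", "next")` to the class eliminates the `__dict__` and roughly halves the overhead, which is a standard mitigation when you genuinely need many nodes.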

| Operation | Python list | Singly Linked List | Doubly Linked List | OrderedDict |
| --- | --- | --- | --- | --- |
| Access by index | O(1) | O(n) | O(n) | O(n) |
| Append to end | O(1) amortized | O(n), or O(1) with tail | O(1) with sentinel | O(1) |
| Insert at front | O(n) | O(1) | O(1) | O(1) via move_to_end |
| Delete by value | O(n) | O(n) | O(n) search, O(1) unlink | O(1) by key |
| Delete by reference | N/A | O(n), needs prev | O(1) | O(1) by key |
| Memory per element | ~8 bytes (pointer) | ~100+ bytes (node object) | ~120+ bytes (node object) | ~70 bytes (dict entry) |
| Cache locality | Excellent (contiguous) | Poor (scattered) | Poor (scattered) | Good (compact dict) |

The right column tells the real story. For almost every Python use case, OrderedDict (or plain dict with careful key management) gives you hash-speed lookup combined with insertion-order tracking. The only reason to write a custom linked list is when you need pointer-level control that OrderedDict does not expose, or when you are working at the systems level in C or Rust where you control memory layout.

💡 Key Insight: In Python, linked lists are a teaching tool and an interview topic, not a production workhorse. Learn them deeply so you understand the pointer logic behind trees, graphs, and OrderedDict. But do not reach for them when deque, dict, or OrderedDict will do the job with less code and better performance.

🤔 Think about it: If Python's list is a dynamic array (Lesson 1) and deque is internally a linked structure of blocks (Lesson 3), why is deque so much faster than a hand-rolled linked list for the same double-ended operations?

View Answer

deque stores multiple elements per block (typically 64 items per block in CPython), so it has far fewer pointer-chasing operations and much better cache locality than a linked list where every single element is a separate heap-allocated object. It is a hybrid: it gets the cache benefits of arrays within each block and the flexibility of pointers between blocks. A hand-rolled linked list in Python pays the full overhead of a Python object (around 56 bytes for even a minimal class instance, before counting its attribute dictionary) per element, plus the pointer traversal cost. This is why deque.appendleft() benchmarked orders of magnitude faster than the naive list.insert(0, x) in Lesson 3, and it would also beat a custom linked list by a wide margin.
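The appendleft versus insert(0, ...) gap is easy to reproduce. A hedged benchmark sketch; absolute timings depend on your hardware, so only the relative ordering is asserted.

```python
from collections import deque
from timeit import timeit

N = 10_000


def fill_list():
    xs = []
    for i in range(N):
        xs.insert(0, i)   # O(n) shift on every call: quadratic total
    return xs


def fill_deque():
    d = deque()
    for i in range(N):
        d.appendleft(i)   # O(1) per call: linear total
    return d


# Same final contents, radically different cost profiles.
assert list(fill_deque()) == fill_list()

t_list = timeit(fill_list, number=3)
t_deque = timeit(fill_deque, number=3)
# On any typical machine the deque version wins by a wide margin.
```

At N = 10,000 the list version performs roughly 50 million element shifts per run; the deque version performs 10,000 pointer-and-block updates.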


The Failure-First Pattern: Why Your First Linked List Will Have Bugs

Let me show you the single most common way developers corrupt a linked list, then the correct approach. This is the "delete a node when you only have a reference to the node itself" problem. It comes up in interviews constantly.

The wrong way (and why it fails):

Suppose you have a reference to a node target in a singly linked list, and you want to delete it without having a reference to the previous node.

Python
# ❌ WRONG: Trying to delete a node without the previous reference
def delete_node(target):
    target = target.next  # "Just move past it"

This does absolutely nothing. Reassigning a local variable in Python does not change the linked list. The node is still there, fully linked. The variable target is just a name pointing at an object. Rebinding the name does not modify any other object's next pointer. I have seen this mistake in at least a dozen interview screenings.
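You can prove the no-op to yourself in a few lines. This sketch assumes a minimal SNode class (value plus next), matching the singly linked node used throughout the lesson:

```python
class SNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def broken_delete(target):
    target = target.next  # rebinds the LOCAL name only; mutates nothing


head = SNode(1, SNode(2, SNode(3)))
broken_delete(head.next)  # attempt to "delete" the node holding 2

values, node = [], head
while node:
    values.append(node.value)
    node = node.next

assert values == [1, 2, 3]  # the list is completely unchanged
```

The traversal still yields all three values because no object's next attribute was ever assigned; only a local variable inside the function was rebound.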

The clever-but-dangerous way:

Python
# ✅ Works, but with caveats:
def delete_node(target):
    target.value = target.next.value  # Copy the next node's data
    target.next = target.next.next    # Skip the next node

This "deletes" the target by copying the next node's data into it and then unlinking the next node. It works, but it has two problems: (1) it fails if target is the last node (because target.next is None), and (2) any external references to the next node now point to a detached node. LeetCode problem 237 uses this trick, but I would never use it in production code because of those edge cases.
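Both the success case and caveat (1) are easy to demonstrate. A self-contained sketch using the same minimal SNode shape:

```python
class SNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def delete_node(target):
    target.value = target.next.value  # copy the next node's data
    target.next = target.next.next    # unlink the next node


head = SNode(1, SNode(2, SNode(3)))
delete_node(head.next)               # works: the node holding 2 is not the tail

values, node = [], head
while node:
    values.append(node.value)
    node = node.next
assert values == [1, 3]

tail = head.next                      # the node now holding 3, with next=None
try:
    delete_node(tail)                 # caveat (1): target.next is None
except AttributeError:
    pass                              # blows up exactly as predicted
else:
    raise AssertionError("expected AttributeError on tail deletion")
```

Note also caveat (2) in action: any external reference to the original node holding 3 now points at a detached node, while its value lives on inside a different object.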

The correct way in production: use a doubly linked list or keep a reference to the previous node.
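Here is a sketch of the previous-reference approach for a singly linked list. The helper name delete_after is mine, not from the lesson; the pattern is the standard one.

```python
class SNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def delete_after(prev):
    """Unlink prev.next; the caller supplies the previous node."""
    if prev.next is not None:
        prev.next = prev.next.next  # one pointer update, no value copying


head = SNode(1, SNode(2, SNode(3)))
delete_after(head)   # unlinks 2
delete_after(head)   # unlinks 3, the tail: no special case needed
delete_after(head)   # nothing after head: safe no-op

values, node = [], head
while node:
    values.append(node.value)
    node = node.next
assert values == [1]
```

No data copying, no broken external references, and the tail case falls out for free. The remaining wrinkle, deleting the head itself, is exactly what a dummy/sentinel node in front of the list solves.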

| Approach | Correctness | Edge Cases | External References | Production Safe |
| --- | --- | --- | --- | --- |
| Reassign local variable | Broken | N/A | N/A | No |
| Copy-and-skip | Works for non-tail nodes | Fails on tail | Breaks them | No |
| Previous reference + relink | Fully correct | Handles all cases | Preserved | Yes |
| Doubly linked list (prev pointer) | Fully correct | Sentinels handle edges | Preserved | Yes |

💡 Key Takeaway: If you find yourself doing clever tricks to work around linked list limitations, you are probably using the wrong type of linked list. Switch to a doubly linked list with sentinels and the problem usually disappears.


Challenge: Debug This

Here is a new code snippet with two bugs. No hints this time. This is a merge_sorted_lists function that should merge two sorted singly linked lists into one sorted list. Find both bugs.

Python
# ❌ This function has 2 bugs. Find them.
def merge_sorted_lists(head_a, head_b):
    """Merge two sorted linked lists into one sorted list."""
    dummy = SNode(0)
    current = dummy

    while head_a and head_b:
        if head_a.value <= head_b.value:
            current.next = head_a
            head_a = head_a.next
        else:
            current.next = head_b
            head_b = head_a
        current = current.next

    # Attach remaining nodes
    if head_a:
        current.next = head_a
    if head_b:
        current.next = head_b

    return dummy.next
View Solution

Bug 1 (in the else branch): head_b = head_a should be head_b = head_b.next. The code advances head_a in the if-branch, but in the else-branch it aliases head_b to head_a instead of advancing head_b. The rest of list B is silently dropped, and because head_b now points into list A, the splice can link a node to itself, so any later traversal of the "merged" list spins forever.

Bug 2 (the trailing if statements): the two if statements should be if/elif (or just else). In practice the code happens to work, because the while loop only exits when at least one list is exhausted, so the two conditions are mutually exclusive. But the structure claims both branches can fire, which is semantically wrong and fragile: any future change to the loop's exit condition would silently let the second if overwrite current.next. The clean fix is:

Python
current.next = head_a if head_a else head_b

One line, no ambiguity. This is the pattern I use in production.
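Putting both fixes together, here is a corrected, runnable version with a couple of tiny conversion helpers (from_iter and to_pylist are mine, added so the sketch is self-contained):

```python
class SNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def merge_sorted_lists(head_a, head_b):
    """Merge two sorted singly linked lists into one sorted list."""
    dummy = SNode(0)
    current = dummy

    while head_a and head_b:
        if head_a.value <= head_b.value:
            current.next = head_a
            head_a = head_a.next
        else:
            current.next = head_b
            head_b = head_b.next          # fix 1: advance the consumed list
        current = current.next

    current.next = head_a if head_a else head_b  # fix 2: one unambiguous attach
    return dummy.next


def from_iter(values):
    head = None
    for v in reversed(values):
        head = SNode(v, head)
    return head


def to_pylist(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out


merged = merge_sorted_lists(from_iter([1, 3, 5]), from_iter([2, 4, 6]))
assert to_pylist(merged) == [1, 2, 3, 4, 5, 6]
assert to_pylist(merge_sorted_lists(None, from_iter([7]))) == [7]
```

The dummy (sentinel) node at the front is what lets the loop body treat the very first attachment like every other one, with no "is the result list empty yet?" special case.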


🔨 Project Update

Time to put theory into practice with the NetProbe project. Over the past three lessons, NetProbe has grown from a basic social graph (Lessons 1 and 2, using a dict of adjacency lists) to include undo/redo for friendship edits (Lesson 3, using two stacks) and a bounded activity feed (Lesson 3, using deque(maxlen=50)). Now we are adding an LRU query cache so that repeated expensive queries, like finding mutual friends for popular users, are served from cache with O(1) access and eviction.

Here is the cumulative project code. Everything from previous lessons is included, with the new LRU cache additions clearly marked. Copy this, paste it into a file, and run it.

Python
# ============================================================
# netprobe.py, Cumulative project (Lessons 1-4)
# Social network analysis tool with LRU query cache
# ============================================================

from collections import deque, OrderedDict


# --- LRU Cache (Lesson 4: new addition) ---

class LRUQueryCache:
    """LRU cache using OrderedDict for O(1) get, put, and eviction."""

    def __init__(self, capacity=5):
        """Initialize cache with given capacity."""
        self.capacity = capacity
        self.cache = OrderedDict()  # Insertion-ordered dict with move_to_end
        self.hits = 0
        self.misses = 0

    def get(self, key):
        """Retrieve a cached result. Returns None on miss."""
        if key in self.cache:
            self.cache.move_to_end(key)  # Mark as recently used
            self.hits += 1
            return self.cache[key]

        self.misses += 1
        return None

    def put(self, key, value):
        """Store a result in cache, evicting LRU entry if full."""
        if key in self.cache:
            self.cache.move_to_end(key)  # Update recency
            self.cache[key] = value
            return

        if len(self.cache) >= self.capacity:
            evicted_key, _ = self.cache.popitem(last=False)  # Remove oldest
            print(f"  [Cache] Evicted: {evicted_key}")

        self.cache[key] = value

    def stats(self):
        """Return hit/miss statistics."""
        total = self.hits + self.misses
        rate = (self.hits / total * 100) if total > 0 else 0
        return {
            "hits": self.hits,
            "misses": self.misses,
            "hit_rate": f"{rate:.1f}%",
            "size": len(self.cache),
            "capacity": self.capacity,
        }

    def clear(self):
        """Reset cache and stats."""
        self.cache.clear()
        self.hits = 0
        self.misses = 0


# --- Core social graph (Lessons 1-2) ---

class NetProbe:
    """Social network analyzer with undo/redo, activity feed, and LRU cache."""

    def __init__(self, cache_capacity=5):
        """Initialize empty graph with supporting data structures."""
        self.graph = {}                           # adjacency list (dict of sets)
        self.undo_stack = []                       # Lesson 3: undo history
        self.redo_stack = []                       # Lesson 3: redo history
        self.activity_feed = deque(maxlen=50)      # Lesson 3: bounded feed
        self.query_cache = LRUQueryCache(cache_capacity)  # Lesson 4: LRU cache

    def add_user(self, user):
        """Add a user node to the graph."""
        if user not in self.graph:
            self.graph[user] = set()
            self.activity_feed.append(f"User added: {user}")

    def add_friendship(self, user_a, user_b):
        """Add an undirected edge between two users."""
        self.add_user(user_a)
        self.add_user(user_b)

        if user_b not in self.graph[user_a]:
            self.graph[user_a].add(user_b)
            self.graph[user_b].add(user_a)

            # Record for undo (Lesson 3)
            self.undo_stack.append(("add", user_a, user_b))
            self.redo_stack.clear()

            # Invalidate cached queries involving these users
            self._invalidate_cache_for(user_a)
            self._invalidate_cache_for(user_b)

            self.activity_feed.append(f"Friendship: {user_a} <-> {user_b}")

    def remove_friendship(self, user_a, user_b):
        """Remove an undirected edge between two users."""
        if user_a in self.graph and user_b in self.graph[user_a]:
            self.graph[user_a].discard(user_b)
            self.graph[user_b].discard(user_a)

            self.undo_stack.append(("remove", user_a, user_b))
            self.redo_stack.clear()

            # Invalidate cached queries involving these users
            self._invalidate_cache_for(user_a)
            self._invalidate_cache_for(user_b)

            self.activity_feed.append(f"Unfriended: {user_a} <-> {user_b}")

    def _invalidate_cache_for(self, user):
        """Remove all cache entries involving a specific user."""
        keys_to_remove = [k for k in self.query_cache.cache if user in k]
        for key in keys_to_remove:
            del self.query_cache.cache[key]

    def undo(self):
        """Undo the last friendship edit (Lesson 3)."""
        if not self.undo_stack:
            print("Nothing to undo.")
            return

        action, user_a, user_b = self.undo_stack.pop()

        if action == "add":
            self.graph[user_a].discard(user_b)
            self.graph[user_b].discard(user_a)
            self.activity_feed.append(f"Undo: removed {user_a} <-> {user_b}")
        elif action == "remove":
            self.graph[user_a].add(user_b)
            self.graph[user_b].add(user_a)
            self.activity_feed.append(f"Undo: restored {user_a} <-> {user_b}")

        self.redo_stack.append((action, user_a, user_b))

    def redo(self):
        """Redo the last undone edit (Lesson 3)."""
        if not self.redo_stack:
            print("Nothing to redo.")
            return

        action, user_a, user_b = self.redo_stack.pop()

        if action == "add":
            self.graph[user_a].add(user_b)
            self.graph[user_b].add(user_a)
            self.activity_feed.append(f"Redo: added {user_a} <-> {user_b}")
        elif action == "remove":
            self.graph[user_a].discard(user_b)
            self.graph[user_b].discard(user_a)
            self.activity_feed.append(f"Redo: removed {user_a} <-> {user_b}")

        self.undo_stack.append((action, user_a, user_b))

    # --- Cached queries (Lesson 4: new) ---

    def mutual_friends(self, user_a, user_b):
        """Find mutual friends between two users, with caching."""
        cache_key = (min(user_a, user_b), max(user_a, user_b))  # Normalize key

        # Check cache first
        cached = self.query_cache.get(cache_key)
        if cached is not None:
            print(f"  [Cache HIT] {user_a} & {user_b}")
            return cached

        # Cache miss: compute the result
        print(f"  [Cache MISS] {user_a} & {user_b} - computing...")

        if user_a not in self.graph or user_b not in self.graph:
            result = set()
        else:
            result = self.graph[user_a] & self.graph[user_b]  # Set intersection

        # Store in cache
        self.query_cache.put(cache_key, result)
        return result

    def friend_count(self, user):
        """Get the number of friends for a user."""
        return len(self.graph.get(user, set()))

    def recent_activity(self, count=10):
        """Show recent activity from the bounded feed (Lesson 3)."""
        recent = list(self.activity_feed)[-count:]
        return recent


# --- Demo script ---

if __name__ == "__main__":
    print("=" * 55)
    print("NetProbe v0.4 - Now with LRU Query Cache!")
    print("=" * 55)

    # --- Build a small social network ---
    net = NetProbe(cache_capacity=3)

    users = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]
    for user in users:
        net.add_user(user)

    friendships = [
        ("Alice", "Bob"), ("Alice", "Carol"), ("Alice", "Dave"),
        ("Bob", "Carol"), ("Bob", "Dave"), ("Bob", "Eve"),
        ("Carol", "Dave"), ("Carol", "Eve"),
        ("Dave", "Frank"),
    ]
    for a, b in friendships:
        net.add_friendship(a, b)

    # --- Test mutual friends with caching ---
    print("\n--- Mutual Friends Queries ---")

    # First call: cache miss
    result1 = net.mutual_friends("Alice", "Bob")
    print(f"  Alice & Bob mutual friends: {sorted(result1)}")

    # Second call: cache hit
    result2 = net.mutual_friends("Alice", "Bob")
    print(f"  Alice & Bob mutual friends: {sorted(result2)}")

    # More queries to fill the cache
    result3 = net.mutual_friends("Bob", "Carol")
    print(f"  Bob & Carol mutual friends: {sorted(result3)}")

    result4 = net.mutual_friends("Alice", "Carol")
    print(f"  Alice & Carol mutual friends: {sorted(result4)}")

    # This query should evict the oldest entry (Alice, Bob)
    result5 = net.mutual_friends("Carol", "Dave")
    print(f"  Carol & Dave mutual friends: {sorted(result5)}")

    # This should be a miss now (was evicted)
    result6 = net.mutual_friends("Alice", "Bob")
    print(f"  Alice & Bob mutual friends: {sorted(result6)}")

    # --- Cache statistics ---
    print("\n--- Cache Statistics ---")
    stats = net.query_cache.stats()
    print(f"  Hits:     {stats['hits']}")
    print(f"  Misses:   {stats['misses']}")
    print(f"  Hit Rate: {stats['hit_rate']}")
    print(f"  Size:     {stats['size']}/{stats['capacity']}")

    # --- Undo a friendship and see cache invalidation ---
    print("\n--- Cache Invalidation on Graph Change ---")
    print(f"  Cache before undo: {list(net.query_cache.cache.keys())}")
    net.undo()  # Undoes the last add_friendship
    print(f"  Cache after undo:  {list(net.query_cache.cache.keys())}")

    # --- Recent activity ---
    print("\n--- Recent Activity (last 5) ---")
    for event in net.recent_activity(5):
        print(f"  {event}")

    print("\n" + "=" * 55)
    print("All systems operational. Cache is working.")
    print("=" * 55)

    # Expected output:
    # =======================================================
    # NetProbe v0.4 - Now with LRU Query Cache!
    # =======================================================
    #
    # --- Mutual Friends Queries ---
    #   [Cache MISS] Alice & Bob - computing...
    #   Alice & Bob mutual friends: ['Carol', 'Dave']
    #   [Cache HIT] Alice & Bob
    #   Alice & Bob mutual friends: ['Carol', 'Dave']
    #   [Cache MISS] Bob & Carol - computing...
    #   Bob & Carol mutual friends: ['Alice', 'Dave', 'Eve']
    #   [Cache MISS] Alice & Carol - computing...
    #   Alice & Carol mutual friends: ['Bob', 'Dave']
    #   [Cache MISS] Carol & Dave - computing...
    #   [Cache] Evicted: ('Alice', 'Bob')
    #   Carol & Dave mutual friends: ['Alice', 'Bob']
    #   [Cache MISS] Alice & Bob - computing...
    #   [Cache] Evicted: ('Bob', 'Carol')
    #   Alice & Bob mutual friends: ['Carol', 'Dave']
    #
    # --- Cache Statistics ---
    #   Hits:     1
    #   Misses:   5
    #   Hit Rate: 16.7%
    #   Size:     3/3
    #
    # --- Cache Invalidation on Graph Change ---
    #   (Cache keys before and after undo will vary based on
    #    which friendship was last added and which cache
    #    entries involve that user)
    #
    # --- Recent Activity (last 5) ---
    #   (Last 5 activity entries)
    #
    # =======================================================
    # All systems operational. Cache is working.
    # =======================================================

Run the project you have built so far. You should see cache misses on the first query for each pair, a cache hit on the repeated Alice/Bob query, eviction messages when the cache reaches capacity, and cache invalidation when you undo a friendship. The exact mutual friend sets depend on the graph structure, but the cache behavior (hit/miss/eviction) should match the pattern described above.

Notice that we used OrderedDict instead of our hand-rolled doubly linked list. This is deliberate. In Lesson 4's main content, you learned how linked lists work under the hood, including all the pointer bugs lurking in a manual implementation. For the project, we use Python's built-in OrderedDict because it gives us the same O(1) move-to-end and pop-from-front operations without any of the bug surface area. The decision matrix in action: know how it works, then use the battle-tested version.

💡 Key Insight: The best reason to learn linked lists is so you can understand and debug the abstractions built on top of them. OrderedDict, deque, lru_cache all use linked list mechanics internally. Your hand-rolled implementation skills make you better at diagnosing problems in these built-in tools, not at replacing them.


When Linked Lists Actually Matter in Production

I promised you honesty about when linked lists genuinely matter, so here it is. There are exactly three scenarios where I have seen hand-rolled linked lists justified in Python production code. Every other time, someone was over-engineering.

Scenario 1: Intrusive linked lists in performance-critical systems. When you need a node to belong to multiple lists simultaneously (for example, a task that is in both a priority queue and a timeout list), intrusive linked lists let you embed the pointers in the domain object itself. This avoids the overhead of wrapper nodes. CPython's garbage collector uses this pattern internally. In Python user code, this is rare because the object overhead per node is already high, but in C extensions, it is common.

Scenario 2: Real-time systems where worst-case latency matters more than average-case throughput. A Python list.append() is O(1) amortized (remember Lesson 1's over-allocation strategy), but occasionally it triggers a reallocation that copies the entire array. For real-time audio or robotics control loops, that occasional O(n) spike is unacceptable. A linked list gives you true O(1) worst-case for insertions. In practice, most Python real-time systems use deque or pre-allocated arrays instead, but the principle holds.
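The reallocation spikes behind that "O(1) amortized" are directly visible through sys.getsizeof. A hedged sketch; the exact growth schedule varies between CPython versions, so the assertion only checks the shape of the behavior:

```python
import sys

sizes = []
xs = []
for _ in range(64):
    xs.append(None)
    sizes.append(sys.getsizeof(xs))

# Size stays flat while appends land in over-allocated slack, then
# jumps when an append triggers a reallocation that copies the whole
# array: the occasional O(n) spike the text describes.
growth_events = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)

# A handful of jumps, not one per append and not zero.
assert 0 < growth_events < len(sizes) - 1
```

Each jump in the size sequence marks one of those worst-case appends. For 64 appends you should see only a handful, which is the amortization at work, and exactly what a hard real-time loop cannot tolerate.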

Scenario 3: Lock-free concurrent data structures. In concurrent programming, linked lists form the backbone of lock-free queues and skip lists because you can atomically update a single pointer with a compare-and-swap instruction. Python's GIL makes this less relevant for CPython, but PyPy and other implementations explore this space. Java's ConcurrentLinkedQueue is the canonical example, and understanding its pointer logic requires exactly the skills you built today.

Deep Dive: Why Redis Chose Linked Lists (and Then Replaced Them)

Redis, the in-memory data store, originally used plain doubly linked lists for its List data type. Every LPUSH and RPUSH was an O(1) pointer update. But Redis developers noticed that for small lists (under 128 elements or 64 bytes per entry), the per-node memory overhead was catastrophic. A list of 10 small integers used over 10x more memory than the actual data. So Redis introduced "ziplist," a compact array-based encoding for small lists. When the list grows past the threshold, Redis converts it to a "quicklist," which is a doubly linked list of ziplist blocks, similar to CPython's deque block structure. The lesson: even in C where you control every byte, linked lists lose to arrays for small data. Know when to use what.

🤔 Think about it: From Lesson 2, you know that Python's dict uses open addressing with a compact layout. How does this relate to the LRU cache pattern we built today?

View Answer

The LRU cache combines a dict (for O(1) key lookup) with a linked list (for O(1) recency tracking). The dict's compact layout from Python 3.7+ already maintains insertion order, which is why OrderedDict.move_to_end() can serve as the "move to front" operation. Without the dict, finding a cache entry would require O(n) linked list traversal. Without the linked list (or ordered dict), evicting the oldest entry would require O(n) scanning. Together, they give O(1) for both operations. This is the same principle behind Redis's LRU eviction mode and Guava's Cache in Java.
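For completeness, here is a sketch of that dict-plus-linked-list combination written from scratch. The class and method names are mine (not the lesson's project code); the structure is the classic textbook LRU design.

```python
class _Node:
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    """dict for O(1) lookup + sentinel doubly linked list for O(1) recency."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}               # key -> node
        self.head = _Node()         # sentinel: most-recent end
        self.tail = _Node()         # sentinel: least-recent end
        self.head.next = self.tail
        self.tail.prev = self.head

    def _unlink(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _link_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)          # recency update on EVERY access
        self._link_front(node)
        return node.value

    def put(self, key, value):
        node = self.map.get(key)
        if node is not None:
            node.value = value
            self._unlink(node)
            self._link_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev    # least recently used lives at the tail
            self._unlink(lru)
            del self.map[lru.key]
        node = _Node(key, value)
        self.map[key] = node
        self._link_front(node)


cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
assert cache.get("a") == 1      # "a" is now most recent
cache.put("c", 3)               # evicts "b", the least recent
assert cache.get("b") is None
assert cache.get("a") == 1 and cache.get("c") == 3
```

Note the four-pointer rule in _unlink and _link_front: each helper touches exactly the pointers it must, and every mutation goes through them, which is what keeps this implementation honest.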


Summary

| Concept | What It Does | When to Use | Watch Out For |
| --- | --- | --- | --- |
| Singly linked list | Chain of nodes with next pointers only | Teaching, interviews, simple forward-only chains | Cannot delete a node without the previous reference |
| Doubly linked list | Nodes with prev and next pointers | LRU caches, ordered collections, bidirectional traversal | Must update 4 pointers per mutation; easy to miss backward links |
| Sentinel nodes | Dummy head/tail that are never deleted | Any doubly linked list, to eliminate None-check edge cases | Add ~100 bytes overhead; must be excluded from traversal output |
| LRU cache (linked list + dict) | O(1) get, put, and eviction by recency | Caching expensive computations, database query results | Must call move-to-front on every access, not just insertion |
| OrderedDict | Python's built-in linked hash map | Production LRU caches, ordered key-value stores | Regular dict also preserves insertion order since Python 3.7 |
| Four-pointer rule | Every doubly linked list mutation updates exactly 4 pointers | Self-check during implementation | Missing even one pointer update causes silent backward-traversal corruption |
| Cache invalidation | Remove stale cache entries when underlying data changes | Any cache over mutable data | "There are only two hard things in computer science: cache invalidation and naming things" |

In Lesson 5, we move from linear structures to recursive ones. You will learn recursion patterns that actually work in production, including memoization, which is, under the hood, just an LRU cache for function calls. The pointer-tracing skills you built today, following a chain of references node by node, are exactly the skills you will need to trace recursive call stacks. The mental model is the same: each call frame points to the next, and losing track of a reference means losing your way entirely.


🟢 Too Easy? Here is your takeaway.

You have mastered the five core linked list bugs: missing pointer updates, wrong node references, broken backward links, missing policy enforcement (move-to-front), and semantic mismatches between code and comments. For next time, focus on implementing a skip list from scratch, which layers multiple linked lists for O(log n) search. Lesson 5 will introduce recursion and memoization, essentially the functional programming version of the LRU cache you built today.

🟡 Just Right? Try this reinforcement exercise.

Implement a reverse() method for the DoublyLinkedList class that reverses the list in place by swapping every node's prev and next pointers. You should also swap the head sentinel's next and the tail sentinel's prev. Test it by calling to_list() and to_list_reverse() before and after reversal. The key insight: reversing a doubly linked list is easier than reversing a singly linked list because you already have backward pointers. You only need to swap prev and next on every node, including sentinels.

🔴 Challenge? Here is a production-grade problem.

Implement a thread-safe LRU cache using threading.Lock. The tricky part: you need to hold the lock during the entire get-then-move-to-front sequence, not just during individual operations. If you lock get and move_to_front separately, another thread could evict the node between your get and your move. Write a test that spawns 10 threads, each doing 1000 random get/put operations, and verify that the cache never exceeds its capacity and never returns corrupted data. Bonus: measure the lock contention overhead and compare it to a lock-free approach using queue.Queue as a command buffer.
