Stacks, Queues, and Deques: Controlled Access as a Design Tool

Lesson 3 · 67 min · 8,092 chars

Learning Objectives

  • Implement stack-based undo/redo functionality for a stateful application
  • Apply collections.deque to build bounded activity feeds and sliding window computations
  • Solve bracket matching and expression evaluation problems using stack discipline
  • Compare time complexities of list vs deque for queue operations and choose the correct tool based on access patterns
  • Analyze amortized complexity of monotonic deque algorithms for sliding window problems

Stacks, Queues, and Deques: Controlled Access as a Design Tool

Last lesson, you gave your data fast access. This lesson, you take some of that access away on purpose.

That sounds backwards. In Lesson 2, we built hash tables that let you grab any value in O(1) time. We celebrated dict and set for their unrestricted, instant lookups. Now I am going to argue that sometimes the best thing you can do for your code is to deliberately restrict how data goes in and comes out. That is the core idea behind stacks, queues, and deques, and once you internalize it, you will see these patterns everywhere.

Here is your 60-second warm-up. Open a Python shell and type this:

Python
ops = []
ops.append("add_user_alice")
ops.append("add_user_bob")
ops.append("remove_user_alice")
last = ops.pop()

What is the value of last? And what is left in ops? If you answered "remove_user_alice" and ["add_user_alice", "add_user_bob"], congratulations, you already used a stack without knowing it. list.append() pushes to the top. list.pop() removes from the top. Last in, first out. That is the entire mental model for a stack, and it is the foundation of undo systems, expression parsers, and call stacks in every programming language you will ever touch.

But here is the thing that tripped me up early in my career at Google: using a Python list as a queue. I did list.pop(0) in a log processing pipeline, and it was fine with a hundred entries. Then the data grew to a million entries per batch, and our processing time exploded. Remember from Lesson 1 how inserting or removing at index 0 of a dynamic array forces every element to shift? That is O(n) per operation. Multiply that by n operations and you have an O(n squared) disaster hiding behind innocent-looking code. This lesson exists so that never happens to you.

💡 Key Insight: Choosing the right data structure is not about what you can do, it is about what you should do. Stacks and queues encode your intent in the data structure itself, making bugs easier to spot and code easier to reason about.


Why Constraints Are a Feature, Not a Bug

Unrestricted access sounds powerful, but it is actually dangerous. When any code can reach into any position of your data and modify it, you lose the ability to reason about program state. Think about it like a kitchen: a professional kitchen has stations. The grill cook does not reach into the pastry station and grab ingredients. That constraint is what makes a kitchen fast and predictable. Remove it, and you get chaos.

Stacks enforce a last-in, first-out (LIFO) discipline. You can only add to the top and remove from the top. This sounds limiting until you realize that undo/redo systems, browser back buttons, recursive call frames, and syntax parsers all depend on exactly this pattern. The constraint is not a limitation. It is the design. When you use a stack, you are telling every developer who reads your code: "the most recent item is always the one that matters next." That single guarantee eliminates entire categories of bugs.

Queues enforce a first-in, first-out (FIFO) discipline. You add to the back and remove from the front. Print queues, message brokers like RabbitMQ and Amazon SQS, breadth-first search in graphs, and task schedulers all use this pattern. When Netflix processes encoding jobs, each video enters the queue in order and gets processed in order. Fairness and ordering guarantees come for free because the data structure enforces them. You do not need to write extra logic to prevent queue-jumping. The structure itself makes it impossible.
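A minimal FIFO sketch with collections.deque (the job names are illustrative):

```python
from collections import deque

# A simple FIFO job queue: enqueue at the back, dequeue from the front.
jobs = deque()
jobs.append("encode_video_1")   # enqueue
jobs.append("encode_video_2")
jobs.append("encode_video_3")

first = jobs.popleft()          # dequeue: first in, first out
print(first)                    # encode_video_1
print(list(jobs))               # ['encode_video_2', 'encode_video_3']
```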

Deques (double-ended queues) are the generalization. They allow O(1) insertion and removal at both ends. Python's collections.deque is the Swiss Army knife that can act as a stack, a queue, or something in between. It also supports a maxlen parameter that automatically evicts old items when the buffer is full, which is exactly what you need for activity feeds, sliding windows, and bounded caches. Spotify uses bounded deques for their "recently played" feature. Once the buffer hits its limit, the oldest track falls off automatically. No manual cleanup. No memory leaks.
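A sketch of a bounded "recently played" buffer, using maxlen=3 for readability (the track names are made up):

```python
from collections import deque

# Bounded buffer: once full, appending evicts the oldest item automatically.
recently_played = deque(maxlen=3)
for track in ["track_a", "track_b", "track_c", "track_d"]:
    recently_played.append(track)

# "track_a" was evicted when the fourth track arrived. No cleanup code needed.
print(list(recently_played))  # ['track_b', 'track_c', 'track_d']
```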

The table below shows the fundamental operations and their costs. Study it, because every challenge in this lesson builds on these guarantees.

| Data Structure | Add Operation | Remove Operation | Access Pattern | Python Tool |
|---|---|---|---|---|
| Stack | push to top - O(1) | pop from top - O(1) | LIFO | list (append/pop) |
| Queue | enqueue to back - O(1) | dequeue from front - O(1) | FIFO | collections.deque |
| Deque | add either end - O(1) | remove either end - O(1) | Both ends | collections.deque |

Notice that the queue row says O(1) for dequeue, but only if you use collections.deque. If you use a plain list and call pop(0), that "O(1)" becomes O(n) because every remaining element has to shift left. This is the single most common performance mistake I see in Python queue implementations, and it is exactly the trap I fell into at Google. The data structure you choose determines whether your guarantees hold.
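To see the gap concretely, here is a quick benchmark sketch (absolute times vary by machine; only the ratio matters):

```python
import timeit
from collections import deque

n = 50_000

def drain_list():
    data = list(range(n))
    while data:
        data.pop(0)     # O(n) per pop: shifts every remaining element left

def drain_deque():
    data = deque(range(n))
    while data:
        data.popleft()  # O(1) per pop: no shifting

list_time = timeit.timeit(drain_list, number=1)
deque_time = timeit.timeit(drain_deque, number=1)
print(f"list.pop(0): {list_time:.3f}s, deque.popleft(): {deque_time:.3f}s")
```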

🤔 Think about it: Why does list.pop() (no argument) run in O(1) but list.pop(0) runs in O(n)?

View Answer

list.pop() removes the last element. No other elements need to move. The dynamic array simply decrements its internal length counter. But list.pop(0) removes the first element, which means every element at indices 1 through n-1 must shift one position to the left to fill the gap at index 0. That is n-1 copy operations, making it O(n). This is a direct consequence of how dynamic arrays store elements in contiguous memory, which we covered in Lesson 1.

💡 Key Takeaway: Stacks, queues, and deques are not different data structures in the way that lists and dicts are different. They are access policies layered on top of underlying storage. The policy is the point.


Challenge 1: Bracket Matching (Guided)

Every code editor you have ever used solves this problem, and stacks are how they do it.

Here is the problem: given a string containing parentheses (), square brackets [], and curly braces {}, determine whether every opening bracket has a matching closing bracket in the correct order. For example, "({[]})" is valid, but "({[}])" is not, because the square bracket closes before the curly brace.

This is not just an academic exercise. When I was building a template engine at a startup, we had a bug where users could submit malformed template strings like "Hello {{name}" (missing closing brace). The server would try to parse these and crash. A bracket matcher was the first line of defense. Cloudflare's WAF (Web Application Firewall) uses similar pattern matching to detect injection attacks in HTTP requests. The bracket matching principle extends far beyond parentheses.

The key insight is that brackets must close in reverse order of how they opened. "Reverse order" is the magic phrase. Reverse order means LIFO. LIFO means stack. When you see an opening bracket, push it. When you see a closing bracket, pop the stack and check if they match. If they do not match, or if the stack is empty when you try to pop, the string is invalid. If you reach the end and the stack is empty, the string is valid.

Here is a hint to get you started: you will need a mapping from closing brackets to their opening counterparts. A dict is perfect for this, and you already know from Lesson 2 that dict lookups are O(1).

Hint 1: Setting up the mapping

Create a dictionary like matching = {')': '(', ']': '[', '}': '{'}. When you encounter a closing bracket, look up its expected partner in this dict. Then check if the top of your stack (the most recent unmatched opener) is that expected partner.

Hint 2: Edge cases to consider

What happens with "())"? You will try to pop from an empty stack when you hit the second ). Guard against this by checking if not stack before popping. Also, what about "(("? You reach the end with items still on the stack, so return False because those openers were never closed.

Here is the complete solution:

Python
def is_valid_brackets(text):
    """Check if all brackets in text are properly matched."""
    matching = {')': '(', ']': '[', '}': '{'}
    stack = []
    for char in text:
        if char in matching.values():
            stack.append(char)
        elif char in matching:
            if not stack or stack[-1] != matching[char]:
                return False
            stack.pop()
    return len(stack) == 0

Let me walk through the logic step by step. We iterate through each character. If it is an opening bracket (found in matching.values()), we push it onto the stack. If it is a closing bracket (found in matching as a key), we check two things: is the stack empty (which means there is no opener to match), and does the top of the stack match the expected opener for this closer? If either check fails, we return False immediately. At the end, we verify the stack is empty, because leftover openers mean the string is incomplete.

The time complexity is O(n) where n is the length of the string. We visit each character exactly once. Each push and pop is O(1). The space complexity is O(n) in the worst case, like "((((((" where every character is an opener that gets pushed. In practice, bracket strings rarely go that deep, so the stack stays small.

⚠️ Common Pitfall: I have seen students check stack[-1] before verifying the stack is not empty. In Python, stack[-1] on an empty list raises an IndexError. Always check if not stack first. This is a classic off-by-one-style bug that passes most test cases but crashes on edge inputs like ")".


Challenge 2: Undo/Redo with Two Stacks (Semi-Guided)

The undo/redo pattern is one of the most elegant applications of stacks in software, and it uses two of them.

Here is the scenario. You are building a text editor (or in our case, a network management tool). The user performs a series of operations: add a friend, remove a friend, change a setting. They want to be able to undo their last action, and then redo it if they change their mind. Every application from Microsoft Word to Adobe Photoshop to your terminal's command history implements some version of this pattern.

The design uses two stacks: an undo stack and a redo stack. When the user performs an action, you push a record of that action onto the undo stack and clear the redo stack (because the redo history is no longer valid after a new action). When the user hits undo, you pop from the undo stack, reverse the action, and push it onto the redo stack. When the user hits redo, you pop from the redo stack, re-apply the action, and push it back onto the undo stack.

Think of it like walking through a hallway. Each step forward goes on the undo stack. When you undo (walk backward), those steps move to the redo stack. But if you undo three steps and then take a new step in a different direction, the old forward path (redo stack) is gone. You are on a new path now. This is exactly how Git branches work conceptually, by the way. When you git reset and then make a new commit, the old commits are effectively orphaned.

Your approach hints: You need a class with three methods, execute(action), undo(), and redo(). Each action should be a dictionary with at least an "operation" key and whatever data the operation needs. The undo() method needs to know how to reverse each operation type, which means "add friend" becomes "remove friend" and vice versa. Think about what happens when the user calls undo() on an empty undo stack. What should you return?
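One way to sketch the reversal logic the hints describe. The operation names and the inverse mapping here are assumptions for illustration, not a fixed API:

```python
# Hypothetical inverse mapping: each operation name maps to its reversal.
INVERSE = {"add_friend": "remove_friend", "remove_friend": "add_friend"}

def invert(action):
    """Return the action that undoes the given action (illustrative sketch)."""
    return {**action, "operation": INVERSE[action["operation"]]}

undone = invert({"operation": "add_friend", "user": "alice", "friend": "bob"})
print(undone["operation"])  # remove_friend
```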

🤔 Think about it: Why must we clear the redo stack when a new action is executed?

View Answer

Imagine you add Alice, add Bob, then undo Bob. The redo stack now contains "add Bob." If you then add Charlie without clearing redo, and later hit redo, you would re-add Bob, but that does not make sense in the new timeline where Charlie was added after Alice. The redo stack represents a future that no longer exists. Clearing it prevents impossible state transitions. This is the same reason Git does not let you git redo after resetting and making new commits. The old timeline is abandoned.

Full Solution
Python
class UndoRedoManager:
    """Manage undo/redo operations using two stacks."""
    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def execute(self, action):
        self.undo_stack.append(action)
        self.redo_stack.clear()
        return action

    def undo(self):
        if not self.undo_stack:
            return None
        action = self.undo_stack.pop()
        self.redo_stack.append(action)
        return action

    def redo(self):
        if not self.redo_stack:
            return None
        action = self.redo_stack.pop()
        self.undo_stack.append(action)
        return action

The execute method pushes onto the undo stack and clears redo. The undo method transfers from undo to redo. The redo method transfers back. Three methods, two stacks, zero ambiguity. The time complexity of every operation is O(1), since list.append() and list.pop() are both O(1) amortized, as we established in Lesson 1.

PEER TEACHING PROMPT: Explain the two-stack undo/redo pattern to a friend who has never heard of it, in exactly 3 sentences.

Model Answer

Every action you perform gets recorded on an undo stack. When you undo, that action moves to a separate redo stack so you can get it back later. If you perform a brand new action after undoing, the redo stack is wiped clean because that alternate future no longer exists.

💡 Key Takeaway: The two-stack undo/redo pattern works because undo and redo are mirror operations. What one stack gives up, the other stack receives. This symmetry makes the implementation almost trivially simple.


The collections.deque Deep Dive

Python's collections.deque is not just "a better list for queues." It is a fundamentally different data structure with different performance characteristics.

Under the hood, a deque is implemented as a doubly-linked list of fixed-size blocks (typically 64 items per block). This means that adding or removing from either end is genuinely O(1), not amortized O(1) like list.append(). There is no reallocation, no copying, and no shifting. The trade-off is that random access by index is O(n), because the deque has to traverse blocks to find the right one. This is the opposite trade-off from a list, which gives you O(1) random access but O(n) insertion at the front.

The maxlen parameter is where deque becomes magical for production systems. When you create deque(maxlen=50), the deque will never hold more than 50 items. When a 51st item is appended to the right, the leftmost item is automatically evicted. When a 51st item is appended to the left, the rightmost item is evicted. You do not need to write any cleanup code. You do not need to check the length. The data structure handles it. This is exactly how bounded buffers, rolling logs, and "recent activity" feeds work in production.
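The eviction rule works symmetrically at both ends, which a tiny demo makes concrete:

```python
from collections import deque

d = deque([1, 2, 3], maxlen=3)
d.append(4)       # full, so the leftmost item (1) is evicted
print(list(d))    # [2, 3, 4]
d.appendleft(0)   # full, so the rightmost item (4) is evicted
print(list(d))    # [0, 2, 3]
```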

I learned about maxlen the hard way. At a startup, we had an in-memory activity log that stored every API request. We used a plain list. After three days in production, the process was consuming 4 GB of RAM and the ops team was not happy. We added manual trimming logic: if len(log) > 10000: log = log[-5000:]. That worked but created garbage collection spikes every time we trimmed. Switching to deque(maxlen=10000) eliminated both problems. Constant memory usage, no GC spikes, and we deleted 15 lines of trimming code. Slack's message buffer for their desktop app uses a similar bounded-deque strategy to keep memory predictable.

Here is a comparison table that you should bookmark mentally. It captures the fundamental trade-off between list and deque.

| Operation | list | collections.deque | Winner |
|---|---|---|---|
| Append right | O(1) amortized | O(1) worst case | deque (no amortization spikes) |
| Pop right | O(1) | O(1) | Tie |
| Append left | O(n) - shifts all elements | O(1) | deque by a mile |
| Pop left | O(n) - shifts all elements | O(1) | deque by a mile |
| Index access [i] | O(1) | O(n) | list |
| Insert middle | O(n) | O(n) | Tie (both bad) |
| Bounded eviction | Manual code needed | maxlen built-in | deque |
| Memory layout | Contiguous array | Linked blocks | list for cache locality |

The critical insight from this table is that your choice depends on your access pattern. If you mostly append and pop from the right and you frequently access elements by index, use a list. If you need fast operations on both ends and you do not need random access, use a deque. If you need automatic size limiting, deque(maxlen=N) is the only sane choice. There is no universal winner. There is only the right tool for your access pattern.

MYTH BUSTER: "Common belief: deque is always faster than list. Reality: deque is faster for front operations and bounded buffers, but list wins for random access by index and benefits from CPU cache locality." The myth persists because most tutorials only show the append-left benchmark, where deque crushes list. But if your code does data[i] frequently, a deque will actually be slower because it has to traverse its internal block structure. Always profile your actual access pattern.

Deep Dive: How deque's block structure works internally

CPython's deque uses a doubly-linked list of 64-slot blocks. Each block is a fixed-size array. When you appendleft, if the leftmost block has room, the item goes into the next available slot. If the block is full, a new block is allocated and linked in. This means append operations never touch existing elements. Contrast this with list, which stores everything in one contiguous array. When that array fills up, list allocates a larger array (typically 1.125x the current size, as we discussed in Lesson 1) and copies everything over. The deque approach avoids this reallocation cost entirely, at the expense of losing contiguous memory layout. For CPU-cache-sensitive workloads (tight numerical loops), the list's contiguous layout wins. For producer-consumer patterns with frequent front/back operations, the deque's block structure wins.

🤔 Think about it: You need to process log entries in FIFO order and also look up the log entry at position i frequently. Which data structure should you use, and why?

View Answer

This is a trick question with no perfect answer. FIFO processing favors deque (O(1) popleft vs O(n) list.pop(0)). But frequent index lookup favors list (O(1) vs O(n)). The practical solution depends on which operation dominates. If you process thousands of entries but only look up by index occasionally, use deque. If index lookups are in a hot loop, use list and accept the O(n) dequeue cost, or maintain both structures (a deque for ordering and a dict for lookup, just like we built dict-based indexes in Lesson 2).
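A sketch of the "maintain both structures" idea, assuming entries are looked up by an id rather than a shifting position (the class and method names are illustrative):

```python
from collections import deque

class IndexedLog:
    """FIFO processing via a deque, plus a dict for O(1) lookup by id (sketch)."""
    def __init__(self):
        self.order = deque()   # entry ids in arrival order
        self.by_id = {}        # entry id -> message

    def add(self, entry_id, message):
        self.order.append(entry_id)
        self.by_id[entry_id] = message

    def process_next(self):
        entry_id = self.order.popleft()  # FIFO dequeue: O(1)
        return self.by_id.pop(entry_id)

    def peek(self, entry_id):
        return self.by_id[entry_id]      # lookup: O(1)

log = IndexedLog()
log.add(1, "boot")
log.add(2, "login")
log.add(3, "logout")
print(log.peek(2))          # login
print(log.process_next())   # boot
```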

💡 Key Takeaway: collections.deque is not a replacement for list. It is an alternative with a different set of O(1) operations. Choose based on which ends of the data you need to touch.


Challenge 3: Sliding Window Maximum (Independent)

This is a real interview question asked at Amazon, Google, and Microsoft, and the optimal solution uses a deque.

Given a list of integers and a window size k, find the maximum value in each sliding window of size k as it moves from left to right across the list. For example, given nums = [1, 3, -1, -3, 5, 3, 6, 7] and k = 3, the windows and their maximums are:

| Window | Elements | Maximum |
|---|---|---|
| 1 | [1, 3, -1] | 3 |
| 2 | [3, -1, -3] | 3 |
| 3 | [-1, -3, 5] | 5 |
| 4 | [-3, 5, 3] | 5 |
| 5 | [5, 3, 6] | 6 |
| 6 | [3, 6, 7] | 7 |

The brute-force approach takes O(n times k) because you scan k elements for the maximum in each of n-k+1 windows. The optimal approach uses a deque to maintain a decreasing sequence of candidate maximums and runs in O(n) total. This is the kind of problem where the data structure is the algorithm.
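For reference, the brute-force version is a one-liner, and it is exactly what you should not submit for large inputs:

```python
def sliding_window_max_brute(nums, k):
    """O(n times k): scan each window of size k for its maximum."""
    return [max(nums[i:i + k]) for i in range(len(nums) - k + 1)]

print(sliding_window_max_brute([1, 3, -1, -3, 5, 3, 6, 7], 3))
# [3, 3, 5, 5, 6, 7]
```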

Your task: implement sliding_window_max(nums, k) that returns the list of maximums. No hints this time. Think about what information the deque should store (values? indices? both?) and what invariant it should maintain.

Solution
Python
from collections import deque

def sliding_window_max(nums, k):
    """Find maximum in each sliding window of size k."""
    result = []
    dq = deque()  # stores indices, not values
    for i, val in enumerate(nums):
        # Remove indices outside the current window
        while dq and dq[0] < i - k + 1:
            dq.popleft()
        # Remove indices whose values are less than current
        while dq and nums[dq[-1]] <= val:
            dq.pop()
        dq.append(i)
        # Window is fully formed starting at index k-1
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result

The deque stores indices in decreasing order of their values. When a new element arrives, we remove all indices from the back whose values are smaller (they can never be the maximum while the new, larger element is in the window). We also remove indices from the front that have fallen outside the window. The front of the deque always holds the index of the current window's maximum. Each index is added and removed at most once, so the total work is O(n) despite the nested while loops. This is an amortized analysis, similar to how we analyzed list.append() in Lesson 1.

MINI CASE STUDY: A fintech startup was calculating the rolling maximum stock price over 200-day windows for 5,000 stocks, updated every second during trading hours. Their initial implementation used the brute force approach: for each stock, scan the last 200 prices to find the max. With 5,000 stocks and 200 prices each, that was 1,000,000 comparisons per second. The system fell behind during volatile trading days when prices changed rapidly. Latency spiked from 50ms to 800ms, triggering alerts.

What would you do to fix this?

Analysis

Switch each stock's rolling window to the deque-based sliding window maximum. Instead of 200 comparisons per update per stock, each update does amortized O(1) work (one append, maybe a few pops). Total work drops from O(5000 times 200) = 1,000,000 per tick to O(5000) = 5,000 per tick, a 200x improvement. The fix is literally replacing a list-scan with a deque-maintained monotonic sequence. The startup deployed this change in 30 lines of code and latency dropped back to 50ms. This is the power of choosing the right data structure: the algorithm barely changed, but the performance characteristic changed completely.
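A sketch of what the streaming fix might look like for one stock. The class and method names are illustrative, not the startup's actual code:

```python
from collections import deque

class RollingMax:
    """Streaming maximum over the last `window` prices (monotonic deque sketch)."""
    def __init__(self, window):
        self.window = window
        self.dq = deque()     # (tick_index, price) pairs, prices non-increasing
        self.tick = 0

    def update(self, price):
        # Drop ticks that have fallen out of the window.
        while self.dq and self.dq[0][0] <= self.tick - self.window:
            self.dq.popleft()
        # Drop smaller prices from the back: they can never be the max again.
        while self.dq and self.dq[-1][1] <= price:
            self.dq.pop()
        self.dq.append((self.tick, price))
        self.tick += 1
        return self.dq[0][1]  # front always holds the current window maximum

rm = RollingMax(window=3)
print([rm.update(p) for p in [10, 8, 12, 9, 7, 6]])
# [10, 10, 12, 12, 12, 9]
```

Each price is appended once and popped at most once, so every `update` call is amortized O(1) regardless of the window size.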

📌 Remember: When you see "sliding window" in a problem, your first instinct should be to reach for collections.deque. It is the canonical tool for this pattern.


Bonus: Can You Do It in 3 Lines?

Python's standard library often has a one-liner hiding in plain sight, but understanding the deque solution is what matters.

For the bracket matching problem from Challenge 1, some developers reach for str.replace in a loop:

Python
def is_valid_short(s):
    """Bracket matching via repeated replacement (slow but short)."""
    while '()' in s or '[]' in s or '{}' in s:
        s = s.replace('()', '').replace('[]', '').replace('{}', '')
    return s == ''

This works by repeatedly stripping matched pairs until nothing is left. If the string is empty, it was valid. It is clever, it is short, and I would reject it in a code review. Why? The time complexity is O(n squared) in the worst case. Each replace scans the entire string, and you might need n/2 passes. The stack solution is O(n). For a 10-character string, nobody cares. For a 10-million-character configuration file (and yes, I have seen Kubernetes YAML files that large), the difference is between 10 milliseconds and 10 minutes.

⚠️ Common Pitfall: Clever code that sacrifices algorithmic complexity for brevity is a trap. In interviews and in production, the O(n) stack solution is always the right answer. Save the cute tricks for code golf competitions.

🤔 Think about it: The replace approach works because matched brackets are always adjacent after inner pairs are removed. Can you think of a string where this assumption seems wrong but still works?

View Answer

Consider "(([]))". The inner pair [] is removed first, giving "(())". Then () in the middle is removed, giving "()". Then the final pair is removed, giving "". At no point were the outer parentheses adjacent to each other initially, but the progressive removal of inner pairs eventually brings them together. The approach works because valid bracket strings are defined recursively: a valid string is either empty, a matched pair wrapping a valid string, or two valid strings concatenated. The replace method mirrors this recursive structure by collapsing from the inside out.


Solution Review: Complexity and Trade-offs

Every solution in this lesson makes a trade-off, and understanding those trade-offs is what separates a junior developer from a senior one.

Let me walk through the three challenges and their complexities. The bracket matcher runs in O(n) time and O(n) space, where n is the string length. In the worst case, such as a string of n openers, the stack grows to n elements, so the space bound is tight. There is no way to do better than O(n) time because you must examine every character at least once, which means this solution is optimal.

The undo/redo system runs in O(1) per operation with O(m) total space, where m is the number of operations stored. This is as good as it gets for undo/redo. Some implementations use a single list with a pointer (index) instead of two stacks, which saves one list object but complicates the pointer arithmetic around edge cases. I have always preferred two explicit stacks because the code reads like the design. Readability matters more than saving 48 bytes of list overhead.

The sliding window maximum is the most interesting complexity analysis. The nested while loops inside the for loop look like O(n times k) at first glance. But each index enters and leaves the deque exactly once across the entire run. That is at most 2n deque operations total, making it O(n). This amortized argument is the same pattern we used in Lesson 1 when analyzing list.append(): the occasional expensive operation (popping multiple elements) is paid for by the many cheap operations (single appends). Getting comfortable with amortized analysis is essential for algorithm interviews.

| Challenge | Time Complexity | Space Complexity | Key Data Structure |
|---|---|---|---|
| Bracket Matching | O(n) | O(n) | Stack (list) |
| Undo/Redo | O(1) per op | O(m) total | Two stacks (lists) |
| Sliding Window Max | O(n) amortized | O(k) | Deque (monotonic) |
| Bonus: Replace | O(n squared) worst | O(n) | String (no stack) |

The "Replace" row is a cautionary tale. Shorter code does not mean faster code. Always reason about complexity before choosing an approach. If the input size is bounded and small (like validating a regex pattern), the quadratic approach might be fine. If the input can grow unboundedly (like validating a streaming data format), the linear approach is the only responsible choice.

Deep Dive: Why monotonic deques appear in so many LeetCode problems

The sliding window maximum uses a "monotonic deque," a deque where elements are maintained in non-increasing (or non-decreasing) order. This pattern appears in at least 15 well-known problems: sliding window maximum, sliding window minimum, largest rectangle in histogram, maximal rectangle, shortest subarray with sum at least K, and more. The core idea is always the same: use the deque to track "candidates" for some optimal value, and maintain an invariant that lets you read the answer from one end in O(1). Once you internalize this pattern, an entire category of "hard" problems becomes medium. Amazon and Google both ask monotonic deque questions frequently in their coding interviews.
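To see how little changes across this family, here is the sliding window minimum, which flips one comparison relative to the maximum solution given earlier:

```python
from collections import deque

def sliding_window_min(nums, k):
    """Minimum in each window of size k; the mirror image of the max version."""
    result, dq = [], deque()  # dq holds indices with non-decreasing values
    for i, val in enumerate(nums):
        # Drop indices that fell out of the window.
        while dq and dq[0] < i - k + 1:
            dq.popleft()
        # Flipped comparison: drop larger values instead of smaller ones.
        while dq and nums[dq[-1]] >= val:
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result

print(sliding_window_min([1, 3, -1, -3, 5, 3, 6, 7], 3))
# [-1, -3, -3, -3, 3, 3]
```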

💡 Key Takeaway: Amortized O(n) with a deque beats naive O(nk) with a list. When you see "sliding" or "rolling" in a problem statement, think deque first.


Code Playground: The Complete Script

Here is the single comprehensive script that ties together every concept from this lesson. Copy, paste, and run it. Every section is commented. Every output is annotated.

Python
"""
Lesson 3: Stacks, Queues, and Deques - Complete Code Playground
Run this file to see all concepts in action.
"""

from collections import deque
import time


# --- Section 1: Stack Basics (LIFO) ---

print("=" * 60)
print("SECTION 1: Stack Operations (LIFO)")
print("=" * 60)

stack = []
for item in ["first", "second", "third"]:
    stack.append(item)  # push onto the stack
    print(f"  Pushed: {item:>8}  |  Stack: {stack}")

print()
while stack:
    removed = stack.pop()  # pop from the top (last in, first out)
    print(f"  Popped: {removed:>8}  |  Stack: {stack}")
# Output:
#   Pushed:    first  |  Stack: ['first']
#   Pushed:   second  |  Stack: ['first', 'second']
#   Pushed:    third  |  Stack: ['first', 'second', 'third']
#
#   Popped:    third  |  Stack: ['first', 'second']
#   Popped:   second  |  Stack: ['first']
#   Popped:    first  |  Stack: []


# --- Section 2: Bracket Matching with a Stack ---

print("\n" + "=" * 60)
print("SECTION 2: Bracket Matching")
print("=" * 60)

def is_valid_brackets(text):
    """Check if all brackets in text are properly matched using a stack."""
    matching = {')': '(', ']': '[', '}': '{'}
    stack = []

    for char in text:
        if char in matching.values():  # opening bracket
            stack.append(char)
        elif char in matching:  # closing bracket
            if not stack or stack[-1] != matching[char]:
                return False
            stack.pop()

    return len(stack) == 0  # valid only if all openers were closed

test_cases = [
    ("({[]})", True),
    ("({[}])", False),
    ("((()))", True),
    (")(", False),
    ("", True),
    ("{[()()]}", True),
]

for text, expected in test_cases:
    result = is_valid_brackets(text)
    status = "PASS" if result == expected else "FAIL"
    print(f"  [{status}] is_valid_brackets('{text}') = {result}")
# Output:
#   [PASS] is_valid_brackets('({[]})') = True
#   [PASS] is_valid_brackets('({[}])') = False
#   [PASS] is_valid_brackets('((()))') = True
#   [PASS] is_valid_brackets(')(') = False
#   [PASS] is_valid_brackets('') = True
#   [PASS] is_valid_brackets('{[()()]}') = True


# --- Section 3: Undo/Redo with Two Stacks ---

print("\n" + "=" * 60)
print("SECTION 3: Undo/Redo System")
print("=" * 60)

class UndoRedoManager:
    """Manage undo/redo operations using two stacks."""

    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def execute(self, action):
        """Record an action and clear redo history."""
        self.undo_stack.append(action)
        self.redo_stack.clear()  # new action invalidates redo history
        return action

    def undo(self):
        """Undo the most recent action. Returns the undone action or None."""
        if not self.undo_stack:
            return None
        action = self.undo_stack.pop()
        self.redo_stack.append(action)
        return action

    def redo(self):
        """Redo the most recently undone action. Returns the redone action or None."""
        if not self.redo_stack:
            return None
        action = self.redo_stack.pop()
        self.undo_stack.append(action)
        return action

manager = UndoRedoManager()

print("  Execute actions:")
manager.execute({"op": "add_friend", "user": "Alice", "friend": "Bob"})
print(f"    Action 1: add_friend Alice->Bob")
manager.execute({"op": "add_friend", "user": "Alice", "friend": "Charlie"})
print(f"    Action 2: add_friend Alice->Charlie")
manager.execute({"op": "remove_friend", "user": "Bob", "friend": "Dave"})
print(f"    Action 3: remove_friend Bob->Dave")

print(f"\n  Undo stack size: {len(manager.undo_stack)}")  # Output: 3
print(f"  Redo stack size: {len(manager.redo_stack)}")    # Output: 0

undone = manager.undo()
print(f"\n  Undo: {undone['op']} {undone['user']}->{undone['friend']}")
# Output: Undo: remove_friend Bob->Dave

print(f"  Undo stack size: {len(manager.undo_stack)}")  # Output: 2
print(f"  Redo stack size: {len(manager.redo_stack)}")   # Output: 1

redone = manager.redo()
print(f"\n  Redo: {redone['op']} {redone['user']}->{redone['friend']}")
# Output: Redo: remove_friend Bob->Dave

print(f"  Undo stack size: {len(manager.undo_stack)}")  # Output: 3
print(f"  Redo stack size: {len(manager.redo_stack)}")   # Output: 0


# --- Section 4: deque as Queue (FIFO) vs list.pop(0) ---

print("\n" + "=" * 60)
print("SECTION 4: deque vs list for FIFO queues")
print("=" * 60)

sizes = [1_000, 10_000, 100_000]

for n in sizes:
    # Benchmark list.pop(0)
    data_list = list(range(n))
    start = time.perf_counter()
    while data_list:
        data_list.pop(0)  # O(n) per pop - shifts all remaining elements
    list_time = time.perf_counter() - start

    # Benchmark deque.popleft()
    data_deque = deque(range(n))
    start = time.perf_counter()
    while data_deque:
        data_deque.popleft()  # O(1) per pop - no shifting needed
    deque_time = time.perf_counter() - start

    ratio = list_time / deque_time if deque_time > 0 else float('inf')
    print(f"  n={n:>7,}: list.pop(0)={list_time:.4f}s  "
          f"deque.popleft()={deque_time:.4f}s  "
          f"ratio={ratio:.1f}x")
# Output: (times vary by machine, but ratios grow with n)
#   n=  1,000: list.pop(0)=<time>s  deque.popleft()=<time>s  ratio=<varies>x
#   n= 10,000: list.pop(0)=<time>s  deque.popleft()=<time>s  ratio=<varies>x
#   n=100,000: list.pop(0)=<time>s  deque.popleft()=<time>s  ratio=<varies>x


# --- Section 5: Bounded deque with maxlen ---

print("\n" + "=" * 60)
print("SECTION 5: Bounded Activity Feed (deque with maxlen)")
print("=" * 60)

activity_feed = deque(maxlen=5)  # only keep 5 most recent events

events = ["Alice added Bob",
    "Bob added Charlie",
    "Charlie added Dave",
    "Dave added Eve",
    "Eve added Frank",
    "Frank added Grace",   # this pushes "Alice added Bob" out
    "Grace added Heidi",   # this pushes "Bob added Charlie" out]

for event in events:
    activity_feed.append(event)
    print(f"  Added: '{event}'")
    print(f"    Feed ({len(activity_feed)} items): {list(activity_feed)}")
# Output (final state):
#   Feed has 5 items, oldest events were auto-evicted:
#   ['Charlie added Dave', 'Dave added Eve', 'Eve added Frank',
#    'Frank added Grace', 'Grace added Heidi']

print(f"\n  Final feed ({len(activity_feed)}/{activity_feed.maxlen}):")
for i, event in enumerate(activity_feed, 1):
    print(f"    {i}. {event}")


# --- Section 6: Sliding Window Maximum with Monotonic Deque ---

print("\n" + "=" * 60)
print("SECTION 6: Sliding Window Maximum")
print("=" * 60)

def sliding_window_max(nums, k):
    """Find the maximum value in each sliding window of size k."""
    result = []
    dq = deque()  # stores indices in decreasing order of values

    for i, val in enumerate(nums):
        # Remove indices that are outside the current window
        while dq and dq[0] < i - k + 1:
            dq.popleft()

        # Remove indices whose values are smaller than current value
        # (they can never be the max while current value is in the window)
        while dq and nums[dq[-1]] <= val:
            dq.pop()

        dq.append(i)  # add current index

        # Start recording results once the first full window is formed
        if i >= k - 1:
            result.append(nums[dq[0]])  # front of deque is always the max

    return result

nums = [1, 3, -1, -3, 5, 3, 6, 7]
k = 3
result = sliding_window_max(nums, k)
print(f"  Input:  {nums}")
print(f"  k = {k}")
print(f"  Output: {result}")
# Output:
#   Input:  [1, 3, -1, -3, 5, 3, 6, 7]
#   k = 3
#   Output: [3, 3, 5, 5, 6, 7]

# Verify each window manually
print("\n  Verification:")
for i in range(len(nums) - k + 1):
    window = nums[i:i + k]
    print(f"    Window {i + 1}: {window} -> max = {max(window)}")
# Output:
#   Window 1: [1, 3, -1] -> max = 3
#   Window 2: [3, -1, -3] -> max = 3
#   Window 3: [-1, -3, 5] -> max = 5
#   Window 4: [-3, 5, 3] -> max = 5
#   Window 5: [5, 3, 6] -> max = 6
#   Window 6: [3, 6, 7] -> max = 7


print("\n" + "=" * 60)
print("All sections complete. Lesson 3 concepts demonstrated.")
print("=" * 60)

📌 Remember: This single script covers stacks (LIFO), bracket matching, undo/redo with two stacks, deque vs list benchmarking, bounded deques with maxlen, and the sliding window maximum with a monotonic deque. Run it, modify it, break it, and fix it. That is how you learn.


When to Use What: The Decision Framework

After three lessons of data structures, you now have four major tools in your belt. Knowing when to reach for each one is the real skill.

The decision is not about which data structure is "best." It is about which access pattern matches your problem. If you need to look up values by key, you reach for a dict (Lesson 2). If you need to check membership in a collection, you reach for a set (Lesson 2). If you need LIFO access, you reach for a stack implemented with list. If you need FIFO access or double-ended operations, you reach for collections.deque. If you need random access by index with dynamic sizing, you use a plain list (Lesson 1).

The mistake I see most often in production code reviews is not choosing the wrong data structure. It is not choosing at all. Developers default to list for everything because it is familiar. They use list.pop(0) for queues, they use linear search instead of dict lookups, and they manually trim lists instead of using deque(maxlen=N). Every one of these is a bug waiting for scale to trigger it. The performance characteristics are fine at small n, which means the tests pass, the code review goes through, and the problem does not surface until production traffic hits.
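Each of those three defaults has a one-line fix. Here is a quick sketch of the anti-pattern-to-fix mapping (the variable names are illustrative, not from any real codebase):

```python
from collections import deque

# Anti-pattern 1: list as a queue -> use a deque
tasks = deque(["t1", "t2", "t3"])    # instead of tasks = ["t1", "t2", "t3"]
next_task = tasks.popleft()          # O(1), instead of tasks.pop(0) which is O(n)

# Anti-pattern 2: linear search -> use a dict keyed by what you look up
users = {"alice": {"id": 1}, "bob": {"id": 2}}
record = users.get("bob")            # O(1), instead of scanning a list of records

# Anti-pattern 3: manual trimming -> use a bounded deque
recent = deque(maxlen=3)             # instead of recent = recent[-3:] after each append
for event in ["a", "b", "c", "d"]:
    recent.append(event)             # oldest item is evicted automatically

print(next_task)       # t1
print(record["id"])    # 2
print(list(recent))    # ['b', 'c', 'd']
```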

| Problem Pattern         | Best Tool        | Why Not The Others                            |
|-------------------------|------------------|-----------------------------------------------|
| Undo/redo, backtracking | Stack (list)     | Queues would process in wrong order           |
| Task processing, BFS    | Queue (deque)    | Stacks would process most recent first (DFS)  |
| Recent activity feed    | deque(maxlen=N)  | List requires manual trimming code            |
| Rolling window stats    | Monotonic deque  | List-based scan is O(nk) instead of O(n)      |
| Key-value lookup        | dict             | List search is O(n) vs O(1)                   |
| Set membership          | set              | List in operator is O(n) vs O(1)              |

Here is my personal rule of thumb after 15 years of Python. If data enters and leaves from the same end, it is a stack. If data enters at one end and leaves from the other, it is a queue. If you need both, it is a deque. If you need none of these and just need indexed storage, it is a list. If you need key-based access, it is a dict. Start from the access pattern, not from the data structure.

🤔 Think about it: You are building a web crawler that discovers URLs to visit. You want to visit all pages at depth 1 before moving to depth 2 (breadth-first). Which data structure should you use for the URL frontier, and why?

View Answer

Use a collections.deque as a FIFO queue. Breadth-first search requires processing URLs in the order they were discovered, which is FIFO. You append newly discovered URLs to the right and popleft the next URL to visit. Using a stack (LIFO) would give you depth-first search instead, diving deep into one path before exploring siblings. Google's original web crawler, described in the Brin and Page 1998 paper, used a FIFO queue for exactly this reason. We will build BFS properly in Lesson 9 when we cover graphs.
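That frontier can be sketched in a few lines. This is a toy version, with an in-memory link graph standing in for real HTTP fetching and a hypothetical `get_links` callable supplied by the caller:

```python
from collections import deque

def crawl_bfs(start_url, get_links, max_pages=10):
    """Visit pages breadth-first: all of depth 1 before any of depth 2."""
    frontier = deque([start_url])   # FIFO queue of URLs to visit
    visited = set()                 # O(1) membership checks (Lesson 2)
    order = []

    while frontier and len(order) < max_pages:
        url = frontier.popleft()    # oldest discovered URL first -> BFS
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in get_links(url):
            if link not in visited:
                frontier.append(link)   # newly discovered URLs go to the back
    return order

# Toy link graph standing in for the web
links = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(crawl_bfs("A", lambda u: links.get(u, [])))
# ['A', 'B', 'C', 'D', 'E'] - depth 0, then all of depth 1, then depth 2
```

Swap `popleft()` for `pop()` and the same code becomes a depth-first crawler, which is exactly the stack-vs-queue distinction in action.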

💡 Key Takeaway: The access pattern determines the data structure. Stack for LIFO. Queue/deque for FIFO. Dict for keyed lookup. Start from the pattern, not from what you are comfortable with.


Project Update

Here is the cumulative NetProbe project with this lesson's additions highlighted. The new code adds an undo/redo system for network edits and a bounded activity feed using deque(maxlen=50).

Python
"""
NetProbe - Network Analysis Tool (Lessons 1-3 Cumulative)
=========================================================
Lesson 1: Dynamic array foundations, user list
Lesson 2: Dict-based user index, set-based social graph, mutual friends
Lesson 3 (NEW): Undo/redo for edits, bounded activity feed, recent activity command
"""

from collections import deque


# --- Core Data Storage (Lesson 1 + 2) ---

users = []                    # dynamic array of user names
user_index = {}               # dict: name -> position in users list (Lesson 2)
social_graph = {}             # dict: name -> set of friends (Lesson 2)


# --- NEW in Lesson 3: Undo/Redo + Activity Feed ---

undo_stack = []               # stack of actions for undo
redo_stack = []               # stack of undone actions for redo
activity_feed = deque(maxlen=50)  # bounded feed of recent events


def add_user(name):
    """Add a user to the network."""
    if name in user_index:
        print(f"  User '{name}' already exists.")
        return

    user_index[name] = len(users)  # O(1) dict insertion (Lesson 2)
    users.append(name)              # O(1) amortized append (Lesson 1)
    social_graph[name] = set()      # empty friend set (Lesson 2)
    activity_feed.append(f"User '{name}' joined the network")  # NEW Lesson 3
    print(f"  Added user: {name}")


def _do_add_friend(user, friend):
    """Internal: add a friendship link (both directions)."""
    social_graph[user].add(friend)
    social_graph[friend].add(user)


def _do_remove_friend(user, friend):
    """Internal: remove a friendship link (both directions)."""
    social_graph[user].discard(friend)
    social_graph[friend].discard(user)


def add_friend(user, friend):
    """Add a friendship and record it for undo."""
    if user not in user_index or friend not in user_index:
        print(f"  Both users must exist first.")
        return
    if friend in social_graph[user]:
        print(f"  '{user}' and '{friend}' are already friends.")
        return

    _do_add_friend(user, friend)

    # Record action on undo stack, clear redo (NEW Lesson 3)
    action = {"op": "add_friend", "user": user, "friend": friend}
    undo_stack.append(action)
    redo_stack.clear()
    activity_feed.append(f"'{user}' and '{friend}' became friends")

    print(f"  Friendship added: {user} <-> {friend}")


def remove_friend(user, friend):
    """Remove a friendship and record it for undo."""
    if user not in user_index or friend not in user_index:
        print(f"  Both users must exist first.")
        return
    if friend not in social_graph[user]:
        print(f"  '{user}' and '{friend}' are not friends.")
        return

    _do_remove_friend(user, friend)

    # Record action on undo stack, clear redo (NEW Lesson 3)
    action = {"op": "remove_friend", "user": user, "friend": friend}
    undo_stack.append(action)
    redo_stack.clear()
    activity_feed.append(f"'{user}' and '{friend}' unfriended")

    print(f"  Friendship removed: {user} <-> {friend}")


def undo():
    """Undo the last friendship edit. (NEW Lesson 3)"""
    if not undo_stack:
        print("  Nothing to undo.")
        return

    action = undo_stack.pop()
    redo_stack.append(action)

    # Reverse the operation
    if action["op"] == "add_friend":
        _do_remove_friend(action["user"], action["friend"])
        activity_feed.append(
            f"UNDO: removed friendship {action['user']} <-> {action['friend']}"
        )
        print(f"  Undone: add_friend {action['user']} <-> {action['friend']}")
    elif action["op"] == "remove_friend":
        _do_add_friend(action["user"], action["friend"])
        activity_feed.append(
            f"UNDO: restored friendship {action['user']} <-> {action['friend']}"
        )
        print(f"  Undone: remove_friend {action['user']} <-> {action['friend']}")


def redo():
    """Redo the last undone edit. (NEW Lesson 3)"""
    if not redo_stack:
        print("  Nothing to redo.")
        return

    action = redo_stack.pop()
    undo_stack.append(action)

    # Re-apply the operation
    if action["op"] == "add_friend":
        _do_add_friend(action["user"], action["friend"])
        activity_feed.append(
            f"REDO: added friendship {action['user']} <-> {action['friend']}"
        )
        print(f"  Redone: add_friend {action['user']} <-> {action['friend']}")
    elif action["op"] == "remove_friend":
        _do_remove_friend(action["user"], action["friend"])
        activity_feed.append(
            f"REDO: removed friendship {action['user']} <-> {action['friend']}"
        )
        print(f"  Redone: remove_friend {action['user']} <-> {action['friend']}")


def mutual_friends(user_a, user_b):
    """Find mutual friends using set intersection. (Lesson 2)"""
    if user_a not in social_graph or user_b not in social_graph:
        return set()
    return social_graph[user_a] & social_graph[user_b]  # O(min(n,m))


def show_recent_activity(count=10):
    """Show the most recent network events. (NEW Lesson 3)"""
    print(f"\n  Recent Activity (last {min(count, len(activity_feed))} events):")
    # Iterate from newest to oldest
    items = list(activity_feed)
    for i, event in enumerate(reversed(items[-count:]), 1):
        print(f"    {i}. {event}")
    if not activity_feed:
        print("    (no activity yet)")


# --- Demo / CLI Simulation ---

if __name__ == "__main__":
    print("=" * 60)
    print("NetProbe v0.3 - Now with Undo/Redo and Activity Feed!")
    print("=" * 60)

    # Add users
    print("\n--- Adding Users ---")
    for name in ["Alice", "Bob", "Charlie", "Dave", "Eve"]:
        add_user(name)

    # Build some friendships
    print("\n--- Building Friendships ---")
    add_friend("Alice", "Bob")
    add_friend("Alice", "Charlie")
    add_friend("Bob", "Charlie")
    add_friend("Bob", "Dave")
    add_friend("Charlie", "Eve")

    # Show mutual friends (Lesson 2 feature)
    print("\n--- Mutual Friends ---")
    mf = mutual_friends("Alice", "Bob")
    print(f"  Alice & Bob mutual friends: {mf}")
    # Output: Alice & Bob mutual friends: {'Charlie'}

    # Demonstrate undo/redo (NEW Lesson 3)
    print("\n--- Undo/Redo Demo ---")
    print(f"  Bob's friends before undo: {social_graph['Bob']}")
    # Output: Bob's friends before undo: {'Alice', 'Charlie', 'Dave'}

    undo()  # undoes Charlie<->Eve (last action)
    undo()  # undoes Bob<->Dave

    print(f"  Bob's friends after 2 undos: {social_graph['Bob']}")
    # Output: Bob's friends after 2 undos: {'Alice', 'Charlie'}

    redo()  # redoes Bob<->Dave

    print(f"  Bob's friends after 1 redo: {social_graph['Bob']}")
    # Output: Bob's friends after 1 redo: {'Alice', 'Charlie', 'Dave'}

    # New action clears redo stack
    add_friend("Alice", "Eve")
    redo()  # nothing to redo - stack was cleared
    # Output: Nothing to redo.

    # Show activity feed (NEW Lesson 3)
    show_recent_activity(8)

    # Show network state
    print("\n--- Final Network State ---")
    for user in users:
        friends = sorted(social_graph[user])
        print(f"  {user}: {friends}")

    print(f"\n  Undo stack depth: {len(undo_stack)}")
    print(f"  Redo stack depth: {len(redo_stack)}")
    print(f"  Activity feed: {len(activity_feed)}/{activity_feed.maxlen} events")

    print("\n" + "=" * 60)
    print("NetProbe v0.3 demo complete.")
    print("=" * 60)

Run the project you've built so far. You should see users being added, friendships created, undo and redo working correctly (Bob's friend set shrinking and growing), the redo stack being cleared after a new action, and the activity feed showing recent events. The expected output is:

NetProbe v0.3 - Now with Undo/Redo and Activity Feed!
--- Adding Users ---
  Added user: Alice
  Added user: Bob
  Added user: Charlie
  Added user: Dave
  Added user: Eve
--- Building Friendships ---
  Friendship added: Alice <-> Bob
  Friendship added: Alice <-> Charlie
  Friendship added: Bob <-> Charlie
  Friendship added: Bob <-> Dave
  Friendship added: Charlie <-> Eve
--- Mutual Friends ---
  Alice & Bob mutual friends: {'Charlie'}
--- Undo/Redo Demo ---
  Bob's friends before undo: {'Alice', 'Charlie', 'Dave'}
  Undone: add_friend Charlie <-> Eve
  Undone: add_friend Bob <-> Dave
  Bob's friends after 2 undos: {'Alice', 'Charlie'}
  Redone: add_friend Bob <-> Dave
  Bob's friends after 1 redo: {'Alice', 'Charlie', 'Dave'}
  Friendship added: Alice <-> Eve
  Nothing to redo.
  (activity feed listing follows)
--- Final Network State ---
  (each user with their sorted friend list)

Note: Set ordering in the "before undo" output may vary since sets are unordered. The important thing is that the correct friends appear before and after each undo/redo operation.
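If you want deterministic output for comparison or testing, sort the set before printing. A small standalone example (the literal here stands in for `social_graph["Bob"]`, it is not part of the project code above):

```python
friends = {"Dave", "Alice", "Charlie"}   # iteration order of a set is not guaranteed
print(sorted(friends))                   # ['Alice', 'Charlie', 'Dave'] - stable every run
```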


Summary Table

| Concept              | What It Does                        | When to Use                                      | Watch Out For                                                          |
|----------------------|-------------------------------------|--------------------------------------------------|------------------------------------------------------------------------|
| Stack (LIFO)         | Last in, first out access           | Undo/redo, bracket matching, DFS, call stacks    | Using a list is fine for stacks since you only touch the right end     |
| Queue (FIFO)         | First in, first out access          | Task processing, BFS, message passing            | Never use list.pop(0) for queues. It is O(n). Use deque.popleft()      |
| collections.deque    | O(1) operations on both ends        | Queues, bounded buffers, sliding windows         | Random index access is O(n), not O(1) like lists                       |
| deque(maxlen=N)      | Auto-evicts oldest items            | Activity feeds, rolling logs, caches             | The evicted item is gone forever. No event, no callback                |
| Monotonic deque      | Maintains sorted candidate order    | Sliding window min/max problems                  | Stores indices, not values. The invariant maintenance is the tricky part |
| Two-stack undo/redo  | Undo and redo stacks mirror each other | Any stateful application with reversible actions | New actions must clear the redo stack or you get impossible state      |

Next lesson, we tackle linked lists. You might wonder why, since Python does not have a built-in linked list type. The answer is that understanding pointer-based data structures is essential for debugging reference bugs, understanding how deque works internally, and solving an entire class of interview problems. Lesson 4 will also reveal the subtle bugs that lurk in node-based code, bugs that do not exist in array-based structures. If you have ever had a "but I updated the node, why didn't the list change?" moment, Lesson 4 is for you.


Too Easy

You nailed the fundamentals. Here are your key takeaways: stacks enforce LIFO with append/pop, queues enforce FIFO with deque.append/popleft (or equivalently appendleft/pop), bounded deques auto-evict old data, and monotonic deques solve sliding window problems in O(n). Next lesson covers linked lists and the pointer bugs that come with them. To prepare, try implementing a simple linked list with insert, delete, and find methods using Python classes.

Just Right

Think of stacks and queues as traffic rules for your data. A stack is a one-lane road where cars can only U-turn (LIFO). A queue is a tunnel where cars enter one end and exit the other (FIFO). A deque is a two-way street. The maxlen parameter on a deque is like a parking lot with a fixed number of spaces: when a new car arrives and the lot is full, the car that has been parked the longest gets towed automatically.

Extra practice: implement a MinStack that supports push, pop, top, and get_min, all in O(1) time. Hint: use a second stack that tracks the minimum at each level.
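Try it yourself first. If you get stuck, here is one sketch that follows the second-stack hint (the class and method names mirror the exercise statement):

```python
class MinStack:
    """Stack with O(1) push, pop, top, and get_min via a parallel min stack."""

    def __init__(self):
        self._stack = []   # all pushed values
        self._mins = []    # _mins[-1] is the minimum of everything in _stack

    def push(self, value):
        self._stack.append(value)
        # Record the running minimum alongside every pushed value
        current_min = value if not self._mins else min(value, self._mins[-1])
        self._mins.append(current_min)

    def pop(self):
        self._mins.pop()           # keep the two stacks in lockstep
        return self._stack.pop()

    def top(self):
        return self._stack[-1]

    def get_min(self):
        return self._mins[-1]

s = MinStack()
for v in [5, 2, 7, 1]:
    s.push(v)
print(s.get_min())  # 1
s.pop()
print(s.get_min())  # 2 - the min rewinds automatically as values are popped
```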

Challenge

Here is a production-level problem. You are building a rate limiter for an API. Each user can make at most 100 requests per 60-second window. For each incoming request, you need to: (1) check if the user has exceeded their limit, and (2) record the request timestamp. Design a solution using deque(maxlen=100) where each deque stores request timestamps. When a request arrives: if the deque has fewer than 100 entries, allow the request and record the timestamp. Otherwise, check deque[0] (the oldest request in the window): if it is older than 60 seconds, the request is allowed, and appending the new timestamp evicts it automatically. If all 100 slots hold timestamps from the last 60 seconds, reject the request.

Bonus: why might deque(maxlen=100) not be sufficient here, and what would you use instead? (Hint: think about what happens when some old timestamps should be evicted but the deque is not full yet.)
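Before peeking at the bonus, here is a minimal sketch of the basic deque-based check (the `SlidingWindowLimiter` name and per-user dict are my own scaffolding; a production limiter would also need persistence and thread safety):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per user."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.requests = {}  # user -> deque of timestamps, newest on the right

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        timestamps = self.requests.setdefault(user, deque(maxlen=self.limit))
        if len(timestamps) == self.limit and now - timestamps[0] < self.window:
            return False         # every slot is a request from the last window: reject
        timestamps.append(now)   # maxlen auto-evicts the oldest slot when full
        return True

limiter = SlidingWindowLimiter(limit=3, window=60.0)
print([limiter.allow("alice", now=t) for t in (0, 1, 2, 3)])
# [True, True, True, False] - fourth request inside the window is rejected
print(limiter.allow("alice", now=70))
# True - the window has slid past the oldest timestamp
```

The `now` parameter exists so the sliding behavior can be tested with fake clocks instead of sleeping.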

Code Playground

Here is the single comprehensive script that ties together every concept from this lesson. Copy, paste, and run it. Every section is commented. Every output is annotated.

Q&A