
Peterson's Solution to the Critical Section Problem


Peterson's solution is a classical software-based method for achieving mutual exclusion in concurrent programming. It allows two processes to coordinate exclusive access to a shared resource using only ordinary reads and writes to shared memory, with no special hardware support. It works by having two shared variables: a per-process flag indicating "interest" in entering the critical section, and a turn variable indicating which process yields if both are interested.

Let's assume we have two processes, P0 and P1. Here's how Peterson's Solution can be implemented for these processes:

Shared Variables:

  • flag: An array of boolean where flag[i] indicates if process Pi is interested in entering the critical section.

  • turn: An integer variable that indicates whose turn it is to enter the critical section.

Process P0:

flag[0] = True      # P0 is interested in entering the critical section
turn = 1            # Give the turn to P1

# Entry Section
while flag[1] and turn == 1:
    pass            # Busy-wait until P1 is uninterested or it is P0's turn

# Critical Section
# ... (code for the critical section goes here) ...

# Exit Section
flag[0] = False     # P0 leaves the critical section

# Remainder Section
# ... (remainder code goes here) ...

Process P1:

flag[1] = True      # P1 is interested in entering the critical section
turn = 0            # Give the turn to P0

# Entry Section
while flag[0] and turn == 0:
    pass            # Busy-wait until P0 is uninterested or it is P1's turn

# Critical Section
# ... (code for the critical section goes here) ...

# Exit Section
flag[1] = False     # P1 leaves the critical section

# Remainder Section
# ... (remainder code goes here) ...
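The two fragments above can be combined into a runnable sketch using Python threads. This relies on CPython's global interpreter lock making the interleaving effectively sequentially consistent; on hardware with a weaker memory model, plain variables like these would not be safe (a point discussed later in this post).

```python
import threading
import time

# Shared state for Peterson's algorithm (two threads, ids 0 and 1).
flag = [False, False]   # flag[i]: thread i wants to enter the critical section
turn = 0                # whose turn it is to wait if both are interested
counter = 0             # the shared resource the algorithm protects
ITERS = 10_000

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(ITERS):
        # Entry section
        flag[me] = True
        turn = other
        while flag[other] and turn == other:
            time.sleep(0)   # busy-wait; sleep(0) yields the GIL so the peer can run
        # Critical section: a non-atomic read-modify-write
        counter += 1
        # Exit section
        flag[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 if mutual exclusion held
```

If mutual exclusion were violated, some increments of `counter` would be lost and the final value would fall short of `2 * ITERS`.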

Explanation

  • Entry Section: A process sets its flag to True to show its interest in entering the critical section and gives the turn to the other process. It then waits (busy wait) until the other process is either not interested (flag[other] == False) or its own turn comes up.

  • Critical Section: The critical code or the shared resource is accessed here.

  • Exit Section: The process sets its flag to False to indicate that it has finished using the shared resource and is no longer interested in the critical section.

  • Remainder Section: The non-critical portion of the process's code.

Satisfaction of Conditions

  • Mutual Exclusion: Is satisfied because the turn variable breaks the tie: if both processes are interested, turn holds whichever value was written last, so the process that wrote turn last is the one that waits, and only the other can enter.

  • Progress: When a process leaves the critical section, it does not interfere with the other process entering the critical section. Only processes that are trying to enter the critical section are involved in the decision, and the decision is made in a finite number of steps.

  • Bounded Waiting: The alternation of turns ensures that the wait is bounded. After a process expresses its interest and gives the turn to the other process, it will get its turn after the other process has finished its critical section or if the other process is not interested.

Peterson's solution is a fundamental approach in the study of synchronization in concurrent programming, but it's more of theoretical interest and is rarely used in modern systems due to its busy-waiting approach and issues related to modern CPU architectures, like caching and out-of-order execution.

Busy waiting and issues with caching and out-of-order execution can significantly impact the efficiency and correctness of synchronization solutions like Peterson's algorithm. Let's delve into these issues:

1. Busy Waiting

Busy waiting, also known as spin-waiting, occurs when a process repeatedly checks a condition in a loop until some change is noted. In Peterson's solution, a process waits in a loop until it's allowed to enter the critical section.

Issues with Busy Waiting:

  • Resource Wasting: The process consumes CPU cycles just to check a condition repeatedly, which could have been used by other processes or for saving power.

  • Inefficiency in Multicore Processors: On multicore systems, a spinning core repeatedly reads the shared variable, generating cache-coherence traffic that can slow down the core trying to update it.

  • Unsuitable for User-Level Processes: In user-level code, the scheduler can preempt the process that holds the lock; a spinning process then burns its entire time slice waiting for a release that cannot happen until the holder runs again.
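As a contrast, Python's standard library illustrates the blocking alternative: instead of spinning on a shared variable, a thread can sleep on a `threading.Event` and be woken when the condition it cares about becomes true (a minimal sketch):

```python
import threading

done = threading.Event()   # blocking alternative to spinning on a flag
result = []

def worker():
    result.append(42)
    done.set()             # wake any thread blocked in done.wait()

t = threading.Thread(target=worker)
t.start()
done.wait()                # the caller sleeps here instead of busy-waiting
t.join()
print(result[0])           # 42
```

While blocked in `done.wait()`, the waiting thread consumes no CPU cycles; the scheduler simply does not run it until `done.set()` is called.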

2. Caching

Modern processors use caches to speed up access to frequently used data. However, caching can interfere with the working of Peterson's solution in multi-processor systems.

Issues with Caching:

  • Cache Coherence: In a multicore system, each core may have its own cache. A change in a shared variable (like flag or turn in Peterson's solution) by one process might not be immediately visible to another process because the value could be cached locally at each core.

  • Delayed Propagation: The delay in the propagation of the updated value might lead to both processes entering the critical section, violating mutual exclusion.

3. Out-of-Order Execution

Modern processors often perform out-of-order execution, where instructions are dynamically reordered to optimize performance. This reordering can disrupt the intended sequence of operations in synchronization algorithms.

Issues with Out-of-Order Execution:

  • Sequence Alteration: The CPU might reorder the store to flag[i] after the load of flag[other] in the entry section (store-load reordering), so each process reads the other's flag as False and both enter the critical section.

  • Memory Barrier Requirements: Special instructions (memory barriers) are needed to prevent the CPU from reordering operations across the barrier, which adds complexity and can reduce performance.

Due to these issues, busy waiting and algorithms like Peterson's solution are generally not used in practical multi-processor systems. Instead, modern systems rely on atomic operations and hardware-supported synchronization primitives like mutexes and semaphores, which are more efficient and take into account the complexities of modern processor architectures.
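For example, in Python the same two-thread counter would normally be protected with a `threading.Lock`, which blocks waiters and handles the memory-ordering details internally (a minimal sketch):

```python
import threading

# What production code uses instead of Peterson's algorithm: a mutex
# provided by the OS/runtime, which blocks rather than busy-waits.
lock = threading.Lock()
counter = 0
ITERS = 100_000

def worker():
    global counter
    for _ in range(ITERS):
        with lock:          # acquire on entry, release on exit of the block
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000
```

The `with lock:` block plays the role of the entry and exit sections in Peterson's solution, but a thread that fails to acquire the lock is suspended by the scheduler instead of spinning.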
