Processes

In the context of an Operating System (OS), a "process" refers to an instance of a computer program that is being executed. It includes the program code (also known as the text section), its current activity (represented by the program counter and processor registers), and a set of associated resources such as memory (which includes the stack and heap), file handles, and network connections.

A process differs from a "program" or an "executable file" in that a program becomes a process only when it is loaded into memory and begins execution. In other words, a program is a static set of instructions, while a process is a dynamic execution environment.

Here are some real-world examples to illustrate the concept of a process:

  1. Word Processing Application: When you open a program like Microsoft Word, it becomes a process. This process has its own memory space where it loads the text you are working on, manages resources like fonts and images, and interacts with the operating system to display content on the screen.

  2. Web Browser: Every time you open a web browser like Chrome or Firefox, a new process is created. If you open multiple tabs, modern browsers often create a separate process for each tab to improve stability and security. Each of these processes manages its own web content, network connections, and user interactions.

  3. Online Gaming: When you play an online game, the game software runs as a process. This process handles tasks like rendering graphics, processing user inputs, and communicating with game servers over the Internet.

  4. Background Services: Operating systems run several background processes like system monitoring, security scans, or file indexing. For example, the Windows Update service runs in the background to check for and install updates.

  5. Mobile Apps: On a smartphone, each app you open is a separate process. For example, when you use a navigation app, it runs as a process, using your phone's GPS, accessing maps from the internet, and displaying the route on the screen.

Each process is managed by the operating system, which allocates resources, schedules CPU time, and handles interactions between different processes, ensuring that the system remains stable and responsive.

A typical process in memory can be visualized as a segmented structure, conventionally drawn with higher memory addresses at the top and lower addresses at the bottom. The layout generally consists of the following major segments:

  1. Code Segment (Text Segment): This is where the compiled program code resides. It is typically mapped read-only so the program cannot accidentally overwrite its own instructions.

  2. Data Segment: Divided into initialized and uninitialized parts, this segment contains global and static variables.

  3. Heap: This segment is used for dynamic memory allocation. The heap grows upwards, towards higher memory addresses, as more memory is allocated.

  4. Stack: It contains function parameters, local variables, and return addresses. The stack grows downwards, towards lower memory addresses.

Here is a representation of a typical process memory layout:

Higher Memory Addresses
---------------------------
|                         |
|       Stack             |  Grows downwards (↓)
|                         |
---------------------------
|                         |
|        Heap             |  Grows upwards (↑)
|                         |
---------------------------
|    Uninitialized Data   |
|        (BSS)            |
---------------------------
|    Initialized Data     |
---------------------------
|                         |
|        Code             |
|                         |
---------------------------
Lower Memory Addresses

  • Stack: Used for function calls and local variables. Grows downwards.

  • Heap: Dynamic memory allocation occurs here. Grows upwards.

  • Uninitialized Data (BSS): Global/static variables not explicitly initialized.

  • Initialized Data: Global/static variables initialized by the programmer.

  • Code: Contains the executable code of the process.

Let's consider a simple C program to illustrate how its different parts map into the process memory segments (code segment, stack, heap, initialized data, and uninitialized data).

#include <stdio.h>
#include <stdlib.h>

int initialized_global_var = 10; // Initialized data
int uninitialized_global_var;    // Uninitialized data (BSS)

void function(int arg) {
    int local_var = 5; // Local variable, goes on the stack
    printf("Function argument: %d, local variable: %d\n", arg, local_var);
}

int main() {
    function(initialized_global_var);

    int* heap_var = (int*)malloc(sizeof(int)); // Dynamic allocation, goes on the heap
    *heap_var = 20;
    printf("Heap variable: %d\n", *heap_var);
    free(heap_var);

    return 0;
}

Here's how different parts of this program are stored in memory:

  1. Code Segment: All of the program's own compiled code, including the functions main and function, resides in the code segment. Standard library functions like printf and malloc are executable code too; they typically live in the read-only, mapped code of the shared C library.

  2. Initialized Data Segment: The global variable initialized_global_var is stored in the initialized data segment since it has a predefined value.

  3. Uninitialized Data Segment (BSS): The global variable uninitialized_global_var is stored in the BSS segment. It's uninitialized and will typically be zeroed out by the system.

  4. Heap: The variable heap_var points to a memory location in the heap. This memory is dynamically allocated during runtime using malloc.

  5. Stack: The main and function functions use the stack during execution. When function is called, its argument arg and the local variable local_var are stored on the stack (on modern calling conventions, arguments often arrive in registers and are spilled to the stack only as needed). The stack also holds the return address for the call and other housekeeping data for the function's execution.
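
To see this mapping on a real system, the sketch below (a variation on the program above) prints the address of one symbol from each segment. Exact addresses change from run to run, especially with address space layout randomization, but on typical Linux/x86-64 systems the code address is lowest and the stack address is highest, matching the layout diagram. Note that casting a function pointer to void* for %p is a common POSIX-ism rather than strict ISO C.

#include <stdio.h>
#include <stdlib.h>

int initialized_global_var = 10; // Initialized data segment
int uninitialized_global_var;    // BSS segment

void function(int arg) {         // Code (text) segment
    (void)arg;
}

int main() {
    int local_var = 5;                          // Stack
    int* heap_var = (int*)malloc(sizeof(int));  // Heap

    printf("code  (function):             %p\n", (void*)&function);
    printf("data  (initialized global):   %p\n", (void*)&initialized_global_var);
    printf("bss   (uninitialized global): %p\n", (void*)&uninitialized_global_var);
    printf("heap  (heap_var):             %p\n", (void*)heap_var);
    printf("stack (local_var):            %p\n", (void*)&local_var);

    free(heap_var);
    return 0;
}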

The five-state model of process management in operating systems describes the various stages a process goes through during its lifecycle. These states are New, Ready, Running, Waiting (also called Blocked), and Terminated.

Here's a detailed explanation of each state:

  1. New: This is the state when a process is being created but is not yet ready to be moved to the 'Ready' state. In this state, the process is being loaded into memory and its control block is being initialized. This is the first step in the life cycle of a process.

  2. Ready: In the Ready state, the process is prepared to run but is waiting for CPU time to become available. It's loaded into memory and has all resources available to execute, except for the CPU. Processes can be moved in and out of the ready state as they are given or denied CPU time by the scheduler.

  3. Running: When the process is assigned to the CPU and its instructions are being executed, it is in the Running state. A process remains in this state until it either finishes its execution, requires an I/O operation, is interrupted by the scheduler to allocate CPU to another process, or encounters an error.

  4. Waiting/Blocked: A process moves into the Waiting state if it requires an event (such as the completion of an I/O operation, a signal from another process, etc.) to proceed. While in this state, the process cannot execute, even if the CPU is available, because it is waiting for some external condition to be met.

  5. Terminated: Once the process finishes its execution or is killed due to an error or by the operating system, it enters the Terminated state. Here, the process has completed execution, and its resources and memory are deallocated. The process is then removed from the system.

           [New]
               |
               | Process creation complete
               V
           [Ready] <---------------------------------------+
               |                                            |
               | CPU assigned                               |
               V                                            |
           [Running] ---- Time slice expired or -----------+
               |    |      higher-priority process          |
               |    |                                       |
               |    | Needs I/O or waits for event          |
               |    V                                       |
               |  [Waiting/Blocked] --- I/O or event ------+
               |                        complete
               |
               | Process completes or is killed
               V
           [Terminated]
    
Transitions:

    • New to Ready: After the process is created and initialized.

    • Ready to Running: When the scheduler assigns CPU to the process.

    • Running to Waiting/Blocked: If the process needs to wait for I/O or some other event.

    • Running to Ready: If the process's time slice expires or a higher priority process needs the CPU.

    • Waiting/Blocked to Ready: Once the event/I/O the process is waiting for completes.

    • Running to Terminated: If the process completes or is killed due to an error.

These transitions and states ensure that the CPU is utilized efficiently and all processes get their fair share of CPU time. The operating system's scheduler manages these transitions based on various scheduling algorithms.
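
The legal transitions are few enough to encode in a small table. Here is a toy C sketch, purely illustrative rather than anything resembling real kernel code, that captures the five states and answers whether a given transition is allowed:

#include <stdio.h>
#include <stdbool.h>

typedef enum { STATE_NEW, STATE_READY, STATE_RUNNING,
               STATE_WAITING, STATE_TERMINATED } proc_state;

// Returns true if the five-state model permits moving from 'from' to 'to'.
bool can_transition(proc_state from, proc_state to) {
    switch (from) {
    case STATE_NEW:     return to == STATE_READY;       // creation complete
    case STATE_READY:   return to == STATE_RUNNING;     // CPU assigned
    case STATE_RUNNING: return to == STATE_READY        // preempted
                            || to == STATE_WAITING      // waits for I/O or event
                            || to == STATE_TERMINATED;  // exits or is killed
    case STATE_WAITING: return to == STATE_READY;       // awaited event completed
    default:            return false;                   // nothing leaves Terminated
    }
}

int main() {
    printf("Running -> Waiting allowed? %d\n",
           can_transition(STATE_RUNNING, STATE_WAITING)); // prints 1
    printf("Waiting -> Running allowed? %d\n",
           can_transition(STATE_WAITING, STATE_RUNNING)); // prints 0
    return 0;
}

Note that Waiting to Running is deliberately rejected: a process whose I/O completes must re-enter the Ready queue and compete for the CPU like any other ready process.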

In an operating system (OS), processes are represented and managed through a data structure known as the Process Control Block (PCB). The PCB is critical for process management, as it contains all the information the OS needs to manage a process. Each process in the system has its own PCB.

The Process Control Block typically contains the following information:

  1. Process Identifier (PID): A unique identifier assigned by the operating system to each process. This ID is used to track and manage the process throughout its lifecycle.

  2. Process State: Indicates the current state of the process (e.g., New, Ready, Running, Waiting, Terminated). The OS uses this information to make decisions about scheduling and resource allocation.

  3. Program Counter: Stores the address of the next instruction to be executed for this process. When the process is resumed, execution starts from this point.

  4. CPU Registers: The state of all CPU registers for the process. This includes general purpose registers, index registers, stack pointers, etc. This information is essential when interrupting a process and resuming execution later (context switching).

  5. CPU Scheduling Information: Contains scheduling-related information like priority level, scheduling queue pointers, and other process scheduling information used by the OS scheduler.

  6. Memory Management Information: Includes information about the memory allocated to the process, such as the base and limit registers, page tables, or segment tables, depending on the memory management scheme used by the OS.

  7. Accounting Information: Stores administrative information about the process, such as the owning user ID, parent process ID, process creation time, CPU time consumed so far, and any execution time limits.

  8. I/O Status Information: Information about the I/O devices allocated to the process, list of open files, and other I/O related information.

  9. Interprocess Communication Information: Details about the communication and synchronization mechanisms used by the process, if any (e.g., signals, semaphores, messages).

The PCB acts as a repository for any information that might vary from process to process. It is created and maintained by the operating system and is essential for the efficient and fair management of multiple processes. The OS typically uses a PCB list to keep track of all the processes. When a process needs to be paused (during context switching), its state is saved in its PCB, and when it's resumed, the state is restored from the PCB. This mechanism enables the OS to handle multitasking effectively, ensuring each process receives adequate CPU time and resources.
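
Real PCBs are kernel-specific (Linux's equivalent is struct task_struct), but a simplified C sketch helps connect the nine kinds of information above to concrete data. Every field name below is illustrative rather than taken from any actual kernel; the numbers in the comments refer to the list above.

#include <signal.h>
#include <stdint.h>
#include <sys/types.h>

#define MAX_OPEN_FILES 16

typedef enum { STATE_NEW, STATE_READY, STATE_RUNNING,
               STATE_WAITING, STATE_TERMINATED } proc_state;

// Saved CPU context; a real kernel saves every architectural register.
typedef struct {
    uint64_t program_counter;       // 3. where to resume execution
    uint64_t stack_pointer;
    uint64_t general_regs[16];      // 4. general-purpose register contents
} cpu_context;

// Simplified Process Control Block (all field names are illustrative).
typedef struct pcb {
    pid_t        pid;               // 1. unique process identifier
    proc_state   state;             // 2. current five-state value
    cpu_context  context;           // 3-4. saved execution context
    int          priority;          // 5. scheduling information
    struct pcb  *next_in_queue;     // 5. scheduler queue linkage
    void        *page_table;        // 6. memory-management information
    uid_t        owner_uid;         // 7. accounting: owning user
    uint64_t     cpu_ticks_used;    // 7. accounting: CPU time consumed
    int          open_fds[MAX_OPEN_FILES]; // 8. I/O status: open files
    sigset_t     pending_signals;   // 9. IPC: signals not yet delivered
} pcb;

During a context switch, the kernel effectively copies the live CPU registers into the outgoing process's context field and reloads the registers from the incoming process's context field, which is exactly the save/restore mechanism described above.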

A context switch is an essential feature of a multitasking operating system, involving the kernel suspending the execution of one process and resuming the execution of another. This is how an operating system can give the appearance of simultaneous execution of multiple processes.

When Does Context Switch Happen?

Context switching can occur in several situations, including:

  1. Time Slice Expiration: Most operating systems use a scheduling method called time-sharing, where each process is given a small time slot (time slice) to execute. When a process's time slice expires, the scheduler performs a context switch to allow another process to run.

  2. System Calls: When a process makes a system call, it might need to wait for the OS to provide resources or complete an I/O operation. During this waiting period, the OS might switch to another process.

  3. I/O Requests: When a process requests I/O, it may enter a waiting state until the I/O is complete. The OS can switch to another process to use the CPU efficiently.

  4. Interrupts: Hardware interrupts can cause the currently running process to be suspended and a higher-priority process or an interrupt handling routine to be executed.

  5. Multithreading: In multithreaded applications, the OS may also switch between threads of the same process. This resembles a process context switch but is typically cheaper, because the threads share an address space and the memory mappings do not need to change.

Dispatch Latency

Dispatch latency is the time taken by the scheduler to stop one process and start another. It includes the time for the scheduler to decide which process to run next and the time taken to perform the context switch. Minimizing dispatch latency is crucial for real-time operating systems, where processes must respond to events without significant delay.
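
There is no portable API that reports dispatch latency directly, but a rough, commonly used measurement trick is to bounce a byte between two processes over a pair of pipes: each round trip forces at least two context switches, so the round-trip time divided by two gives an upper bound that also includes pipe and system-call overhead. The following sketch uses only standard POSIX calls:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define ROUNDS 100000

int main() {
    int p2c[2], c2p[2]; // parent-to-child and child-to-parent pipes
    char buf = 'x';

    if (pipe(p2c) == -1 || pipe(c2p) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) { // child: echo every byte straight back
        for (int i = 0; i < ROUNDS; i++) {
            read(p2c[0], &buf, 1);
            write(c2p[1], &buf, 1);
        }
        _exit(0);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ROUNDS; i++) { // parent: send, then wait for the echo
        write(p2c[1], &buf, 1);
        read(c2p[0], &buf, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    // Each round trip involves at least two context switches.
    printf("~%.0f ns per switch (upper bound)\n", ns / (2.0 * ROUNDS));

    waitpid(pid, NULL, 0);
    return 0;
}

On a multicore machine the two processes may run in parallel on different cores, so for a more meaningful number, pin both to a single CPU first (for example with taskset -c 0 on Linux).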

What Happens During a Context Switch?

During a context switch, the operating system performs a series of steps to ensure that the current process state is saved and that the next process to be executed is properly set up. This procedure becomes more intricate when the context switch is triggered by a system call or a hardware interrupt. Let's break down the entire process in detail:

1. Initial Trigger

  • System Call: When a process executes a system call (like a request for file access or I/O operation), it transitions from user mode to kernel mode, prompting the OS to take control.

  • Hardware Interrupt: If a hardware interrupt occurs (such as from a disk or network interface), the currently executing process is interrupted, and control is transferred to an interrupt handler in the OS.

2. Save Current Process State

  • The state of the currently running process (or the current state of the system if in kernel mode due to an interrupt) is saved. This includes the program counter, CPU registers, and other vital process-specific information. This data is stored in the Process Control Block (PCB) of the process.

3. Interrupt or System Call Handling

  • Handle Interrupt or System Call: The OS handles the interrupt or processes the system call. This might involve executing an interrupt service routine or the relevant system call function.

  • Determine Effects: The handling may result in changes like I/O operation initiation, memory being allocated or freed, or inter-process communication.

4. Scheduling Decision

  • After handling the system call or interrupt, the OS's scheduler decides whether to resume the interrupted process or switch to a different process. This decision is based on factors like process priorities, scheduling algorithms, and the state of other processes (e.g., whether a high-priority process has become ready to run).

5. Process Selection for Execution

  • If the scheduler decides to switch processes, it selects the next process to execute. This process would typically be in the "Ready" state and is chosen based on the scheduling policy of the OS.

6. Load Next Process State

  • The state of the next process to run is loaded from its PCB. This includes setting up the CPU registers, stack pointers, program counter, and any necessary memory management configurations.

7. Transition to User Mode

  • If the new process is a user-level process, the OS transitions from kernel mode to user mode, giving control back to the user process.

8. Resume Execution

  • Execution of the new process begins or resumes from the point where it was last paused.
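
The kernel's context switch cannot be observed directly from user space, but POSIX offers a user-level analogue in <ucontext.h> that makes the save-one-state, load-another idea tangible. The sketch below switches back and forth between main and a second execution context; these functions are marked obsolescent in newer POSIX editions but still work on Linux and are fine for illustration:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&task_ctx, &main_ctx); // save task's state, load main's
    printf("task: resumed exactly where it left off\n");
}

int main() {
    char stack[64 * 1024]; // stack for the second context

    getcontext(&task_ctx); // initialize from the current context
    task_ctx.uc_stack.ss_sp   = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link          = &main_ctx; // resume main when task returns
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx); // save main's state, load task's
    printf("main: back in main, resuming task\n");
    swapcontext(&main_ctx, &task_ctx); // task continues after its swapcontext
    printf("main: task finished\n");
    return 0;
}

Each swapcontext call is a miniature context switch: the current register state goes into one ucontext_t (much like a PCB), the registers are reloaded from another, and execution resumes exactly at the saved point.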

A single executable program can manifest as multiple, unique processes when executed multiple times, either concurrently or at different times. Each of these processes is distinct, with its own code, data, stack, and heap sections, and can potentially take different execution paths based on various factors like user input, environmental variables, or internal logic. Let's explore this in detail:

Single Program, Multiple Processes

  1. Program on Disk: On disk, a program is a static entity - a compiled executable file. It's a set of instructions and data bundled together, but it isn't executing or consuming any CPU or memory resources.

  2. Process Creation: When you execute this program (for example, by double-clicking an icon or running a command in the terminal), the operating system loads the program into memory, creating a new process. This process includes the code of the program, but it also has its own memory space and system resources.

  3. Multiple Executions: If you execute the same program multiple times (either by opening several instances or by the program itself being designed to fork or spawn new processes), each execution is a separate process. Though they all originate from the same program on disk, each process is independent in the operating system.

Independence of Processes

  1. Separate Memory Spaces: Each process has its own memory space allocated by the operating system. This means that even though the code (the instructions of the program) might be the same, the data each process works on can be entirely different (demonstrated in the fork() sketch after this list).

  2. Unique Process IDs: The operating system assigns a unique process identifier (PID) to each process, ensuring that each instance of the program is treated as a separate entity.

  3. Independent Execution Paths: Depending on user input, different instances of the same program can take different execution paths. For example, in a text editor program, one instance might be used to edit a document while another might be idle.
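
The first two points are easy to demonstrate with fork(), which clones the calling process so that the same program image runs as two processes with distinct PIDs. In the sketch below, each process assigns to the "same" global variable after the fork, and neither sees the other's write:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int shared_looking_var = 0; // global, but each process gets its own copy

int main() {
    pid_t pid = fork(); // both processes continue from this point

    if (pid == 0) { // child
        shared_looking_var = 100;
        printf("child  (pid %d): var = %d\n", getpid(), shared_looking_var);
    } else {        // parent
        wait(NULL); // let the child finish first
        shared_looking_var = 7;
        printf("parent (pid %d): var = %d (child's 100 is not visible)\n",
               getpid(), shared_looking_var);
    }
    return 0;
}

In practice the kernel does not copy all memory at fork time: pages are shared copy-on-write and duplicated only when one side writes to them, which is also what makes sharing the read-only code section (discussed below) essentially free.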

Code, Data, Stack, and Heap

  1. Code (Text) Section: This is generally shared among all instances of the program. Since the executable code doesn’t change, the operating system can optimize memory usage by allowing different processes to reference the same physical memory for the code.

  2. Data Section: Each process has its own data section. Global and static variables are unique per process. Thus, changes in one process do not affect another.

  3. Stack: Each process has its own stack, which contains function call frames, local variables, and control information. As each process has its own flow of execution, their stacks are independent.

  4. Heap: The heap is used for dynamic memory allocation, and each process has its own heap space. Different processes can allocate and manage memory in the heap independently.