Understanding Memory in Computing: From Physical to Virtual

In the realm of computing, memory plays a pivotal role in the storage and retrieval of information. It serves as the foundation upon which programs operate, manipulate data, and execute instructions. Central to understanding how computers manage memory are the concepts of physical and virtual memory, alongside the mechanisms that translate virtual addresses to physical ones.

Physical Memory: The Concrete Foundation

Physical memory, also known as RAM (Random Access Memory), refers to the actual hardware components -- memory chips -- that store data and program instructions temporarily while a computer is running. The addresses in physical memory are unique; each byte of memory can be directly accessed through a distinct physical memory address. This uniqueness is crucial for the precise location and manipulation of data within the hardware.

Virtual Memory: An Abstraction Layer

Virtual memory, on the other hand, is a sophisticated system that allows a computer to compensate for shortages of physical memory by temporarily transferring data from RAM to disk storage. This abstraction layer provides several benefits, including the ability to run large applications on limited physical memory and to isolate processes from each other, enhancing system security and stability.

Virtual memory addresses do not map directly to physical memory addresses. Instead, they are translated through a process that allows multiple processes to run simultaneously, each believing it has access to a continuous and exclusive memory space. This illusion is a fundamental advantage of virtual memory, enabling more efficient use of the computer's resources.

The Uniqueness Dilemma

While physical memory addresses are inherently unique due to their direct correlation with specific hardware locations, virtual memory addresses lack this uniqueness. In virtual memory systems, different processes can use the same virtual addresses to refer to their own data, which are mapped to distinct physical addresses. This multiplicity allows for the flexible allocation and management of memory resources.

The Role of the MMU: Bridging Virtual and Physical

The Memory Management Unit (MMU) is a critical hardware component that facilitates the translation of virtual memory addresses to physical memory addresses. It uses information stored in page tables or segment tables to perform this conversion. Each entry in these tables corresponds to a block of memory, providing the MMU with the necessary data to translate virtual addresses into their physical counterparts.

The MMU plays a key role in maintaining the illusion of a continuous virtual memory space for each process. By dynamically translating addresses based on the current mappings in the page or segment tables, it ensures that processes access the correct physical memory locations, even as the actual mappings change over time.

The Illusion of Unlimited Memory

One of the most significant advantages of virtual memory is the illusion it creates, making each process believe it has access to the entire memory space. This illusion simplifies programming and execution, as applications do not need to be aware of the underlying physical memory limitations or the presence of other running processes. However, only a portion of a process's virtual memory might be present in physical memory at any given time. Accessing data not currently in RAM triggers a process known as paging or swapping, where the necessary data is loaded into physical memory from secondary storage, such as a hard drive or SSD.

A virtual memory address is a reference to a memory location that is used by an application or a process running on a computer. It forms part of a virtual memory system, which abstracts the actual physical memory (RAM) available on a system. Virtual memory addresses allow an operating system to provide an application the illusion of having access to a large, contiguous block of memory, even if the physical memory is fragmented or not entirely available.

Matching Address Size with Machine Architecture

The size of a virtual memory address is directly related to the architecture of the machine it runs on. For example, a 32-bit machine uses 32 bits for its addresses, while a 64-bit machine uses 64 bits. The number of bits in the address determines the maximum size of memory that can be addressed by the system:

  • For a 32-bit system, with $2^{32}$ unique addresses, up to 4 GB (gigabytes) of memory can be addressed ($2^{32}$ addresses * 1 byte per address = 4 GB).

  • For a 64-bit system, with $2^{64}$ unique addresses, the theoretical maximum addressable memory is 16 EB (exabytes), which is vastly larger than any current system requires or can physically support.

Virtual Address Structure: Page Number and Offset

A virtual address typically consists of two main parts: the page number and the offset. The division between these parts depends on how many bits are allocated to each. The page number identifies a specific page in virtual memory, while the offset specifies the exact byte within that page.

Example

Let's consider a simplified example with a small, hypothetical 16-bit address space. Assume we divide the virtual address into an 8-bit page number and an 8-bit offset. This means the system can address $2^{16}$ = 65,536 unique addresses in total.

  • Page Number (left part): The first 8 bits (the left part) of the address specify the page number. With 8 bits for the page number, there can be $2^8$ = 256 unique pages.

  • Offset (right part): The last 8 bits (the right part) of the address specify the offset within a page. With 8 bits for the offset, each page can contain $2^8$ = 256 unique addresses (or bytes), determining the size of each page.

Virtual Address: 16 bits total
| Page Number (8 bits) | Offset (8 bits) |
|      00000001        |     00000100    |

In this example, the virtual address 0000000100000100 in binary corresponds to:

  • Page number: 00000001 (binary) = 1 (decimal)

  • Offset: 00000100 (binary) = 4 (decimal)

This means the address points to the 5th byte (offset 4, since we start counting from 0) of the 2nd page (page 1, again counting from 0) in virtual memory.

The translation of these virtual addresses to physical addresses is handled by the Memory Management Unit (MMU), using structures like page tables. The MMU uses the page number to look up the corresponding physical page in the page table and then adds the offset to find the exact physical address.
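
To make the split concrete, here is a minimal Python sketch of the 16-bit example above, extracting the page number and offset with a shift and a mask:

# Minimal sketch: split a 16-bit virtual address into an 8-bit page
# number (high bits) and an 8-bit offset (low bits).
PAGE_BITS = 8                     # bits used for the offset
PAGE_SIZE = 1 << PAGE_BITS        # 256-byte pages

def split(vaddr):
    page_number = vaddr >> PAGE_BITS        # high 8 bits
    offset = vaddr & (PAGE_SIZE - 1)        # low 8 bits
    return page_number, offset

page, offset = split(0b0000000100000100)    # the address from the text
print(page, offset)                         # -> 1 4 (page 1, byte 4)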

A page table is a data structure used by the operating system to store mappings between virtual addresses and physical addresses in a system that uses virtual memory. Each process running on a system has its own unique page table, ensuring that the virtual memory space appears contiguous and isolated to that process, even though physical memory may be fragmented and shared among many processes.

Page Table Entries (PTEs)

Page tables contain multiple entries, known as Page Table Entries (PTEs). Each PTE corresponds to a virtual page in the process's virtual memory space and contains information necessary to map that virtual page to a physical frame in RAM. A frame is the physical equivalent of a virtual page and has the same size, ensuring a 1:1 mapping in terms of memory space allocation.

Information in a PTE

A typical PTE includes:

  • Frame Number: Identifies the physical frame in RAM where the page resides. This is the most crucial piece of information for address translation.

  • Protection Bits: Indicate the access permissions for the page, such as read, write, execute, and sometimes others like copy-on-write. These bits help the operating system enforce memory protection, ensuring, for example, that a process cannot write to read-only memory or execute non-executable memory.

  • Valid Bit: Indicates whether the mapping between a virtual page and a physical frame is valid. A valid bit set to 0 might mean the page is not currently in physical memory (perhaps it's swapped out to disk), and accessing it would cause a page fault.

  • Other Flags/Bits: Might include information for page caching policies, whether the page has been accessed or written to (used for page replacement algorithms), and more.

Context Switches and Page Tables

When the operating system switches context from one process to another (a context switch), it needs to change the page table used by the memory management unit (MMU) to the one belonging to the next process. This switch ensures that the newly scheduled process accesses its own memory space, maintaining the isolation between processes.

Consider a simplified example of a page table with 4 PTEs. Each entry maps a virtual page to a physical frame and includes a valid bit and protection bits for simplicity.

Page Table for Process X:
+--------------+----------------+-------+--------------------+
| Virtual Page | Physical Frame | Valid | Protection         |
+--------------+----------------+-------+--------------------+
|      0       |       3        |   1   | Read/Write         |
|      1       |       6        |   1   | Read               |
|      2       |   Not Mapped   |   0   | Not Applicable     |
|      3       |       2        |   1   | Read/Write/Execute |
+--------------+----------------+-------+--------------------+

In this table:

  • Virtual Page 0 is mapped to Physical Frame 3 with read and write permissions.

  • Virtual Page 1 is mapped to Physical Frame 6 with read-only permission.

  • Virtual Page 2 is not currently mapped to a physical frame, indicated by a valid bit of 0.

  • Virtual Page 3 is mapped to Physical Frame 2 with read, write, and execute permissions.

This example simplifies the real complexity found in modern systems but illustrates the basic concept of how page tables map virtual pages to physical frames and specify access permissions for each page.
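
The same mapping can be expressed as a small Python structure; the frames and permissions mirror the hypothetical table for Process X, and the PageFault exception stands in for the hardware trap:

class PageFault(Exception):
    pass

# Page table for Process X, mirroring the table above. A valid bit of 0
# means the page is not resident in physical memory.
page_table = {
    0: {"frame": 3,    "valid": 1, "prot": "rw-"},
    1: {"frame": 6,    "valid": 1, "prot": "r--"},
    2: {"frame": None, "valid": 0, "prot": None},
    3: {"frame": 2,    "valid": 1, "prot": "rwx"},
}

def frame_for(page):
    pte = page_table[page]
    if not pte["valid"]:
        raise PageFault(f"virtual page {page} is not in physical memory")
    return pte["frame"]

print(frame_for(0))   # -> 3
# frame_for(2) would raise PageFault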

Let's take an 8-bit virtual address space and divide it as follows: 3 bits for the page number (left part) and 5 bits for the offset (right part). This division allows for 8 pages (since $2^3 = 8$), each 32 bytes in size (since $2^5 = 32$).

Virtual Address Example

Consider a virtual address represented as 10110100 in binary.

  • Left part (Page Number): 101 (5 in decimal, indicating it's the 6th page since we start counting from 0)

  • Right part (Offset): 10100 (20 in decimal, indicating the 21st byte in the page)

Example Page Table with PTEs

Let's create a page table with entries for all 8 possible pages. Each entry will map the virtual page to a hypothetical physical frame. For simplicity, we'll assume each physical frame also has 32 bytes, and the page table entries include only the frame number.

Page Table:
+-------------+----------------+
| Virtual Page| Physical Frame |
+-------------+----------------+
|      0      |       2        |
|      1      |       4        |
|      2      |       1        |
|      3      |       7        |
|      4      |       5        |
|      5      |       0        |
|      6      |       3        |
|      7      |       6        |
+-------------+----------------+

Translation from Virtual to Physical Address

Given our virtual address 10110100, we identified the page number as 101 (5 in decimal) and the offset within the page as 10100 (20 in decimal).

Looking at our page table:

  • The virtual page 5 is mapped to physical frame 0.

  • Since the physical frame number is 0 and our offset within the page is 10100 (20 in decimal), the physical address starts at the beginning of frame 0 and moves 20 bytes into it.

Physical Memory Frame Calculation:
Physical Frame = 0, Offset = 10100 (20 in decimal)

Physical Address = Start of Frame 0 + Offset
                 = 00000000 + 10100
                 = 00010100 (in binary)

Thus, the virtual address 10110100 translates to a physical address 00010100 (20 in decimal, assuming the physical frame 0 starts at 00000000). This example illustrates how a virtual address is broken down into a page number and offset, which is then translated into a physical address using a page table.
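
As a quick check, the whole translation can be replayed in Python; the bit split and the page table are taken directly from the example:

# Replay the worked example: virtual 0b10110100 -> physical 0b00010100,
# with a 3-bit page number, a 5-bit offset, and the page table above.
OFFSET_BITS = 5
page_table = {0: 2, 1: 4, 2: 1, 3: 7, 4: 5, 5: 0, 6: 3, 7: 6}

def translate(vaddr):
    page = vaddr >> OFFSET_BITS                  # 0b101 = 5
    offset = vaddr & ((1 << OFFSET_BITS) - 1)    # 0b10100 = 20
    frame = page_table[page]                     # page 5 -> frame 0
    return (frame << OFFSET_BITS) | offset       # frame base + offset

assert translate(0b10110100) == 0b00010100       # 20 in decimal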

Demand paging is a memory management scheme that reflects an efficient compromise between the limited physical memory available and the seemingly unlimited virtual memory needed by processes. It's a fundamental concept in modern operating systems, addressing the challenge where the cumulative virtual memory requirements of all processes exceed the available physical memory.

The Premise of Demand Paging

The core idea behind demand paging is that not all pages of a process need to be loaded into physical memory at once. Instead, pages are loaded on demand, i.e., only when they are needed. This approach is based on the observation that programs tend to use only a small fraction of their allocated memory actively at any given time—a phenomenon known as locality of reference.

How Demand Paging Works

In demand paging, when a process tries to access a page that is not currently in physical memory (a situation detected by the absence of a valid mapping in the page table), a page fault occurs. The operating system then (see the sketch after this list):

  1. Identifies the missing page needed by the process.

  2. Selects a physical frame to use for this page. If all frames are in use, the system must choose one to replace, typically using a page replacement algorithm (like Least Recently Used, LRU).

  3. Loads the required page into the selected frame from secondary storage (disk).

  4. Updates the page table to reflect the new virtual-to-physical mapping.

  5. Resumes the interrupted process, which can now access the page as if it had always been in memory.
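
Here is a minimal sketch of these steps; free_frames, ram, disk, and page_table are hypothetical stand-ins for real kernel state, and the eviction policy is a simple first-in choice:

# Hypothetical OS state for the demand-paging steps above.
free_frames = [0, 1]                  # physical frames not in use
ram = {}                              # frame number -> page contents
disk = {7: b"page seven data"}        # pages currently swapped out
page_table = {}                       # virtual page -> frame (resident only)

def handle_page_fault(page):
    if free_frames:
        frame = free_frames.pop()                       # step 2: free frame
    else:
        victim, frame = next(iter(page_table.items()))  # step 2: evict (FIFO-ish)
        disk[victim] = ram[frame]                       # write victim to disk
        del page_table[victim]
    ram[frame] = disk.pop(page)                         # step 3: load from disk
    page_table[page] = frame                            # step 4: update table
    # step 5: the faulting instruction would now be restarted

handle_page_fault(7)        # page 7 is paged in on first access
print(page_table)           # -> {7: 1}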

Managing Physical Memory

The key to demand paging is efficiently managing which pages are kept in physical memory. Since physical memory is much smaller than the virtual memory space of all processes, the operating system must make judicious decisions about which pages to keep in RAM and which to offload to disk.

Page Replacement

When a page needs to be brought into memory, and there is no free space available, the system must decide which resident page to evict. This decision is made using a page replacement algorithm, which tries to guess which existing page is least likely to be used in the near future. Common algorithms include:

  • Least Recently Used (LRU): Evicts the page that has not been accessed for the longest time.

  • FIFO (First In, First Out): Evicts the oldest page in memory.

  • Clock Algorithm: An approximation of LRU that achieves similar eviction behavior with far less bookkeeping overhead.

Advantages of Demand Paging

Demand paging significantly increases the efficiency of memory usage by:

  • Reducing load times: Only the necessary pages are loaded, speeding up the start time of applications.

  • Increasing the number of processes that can be run simultaneously: By using physical memory more efficiently, more processes can be kept in a runnable state.

  • Supporting larger applications: Applications that would otherwise exceed the available physical memory can run through the use of virtual memory.

The process of converting a virtual memory address to a physical address, while foundational to modern computing, inherently introduces a performance bottleneck due to the need for two memory accesses: one to retrieve the Page Table Entry (PTE) from the page table, and another to access the actual data in the physical memory location. To mitigate this overhead and improve memory access time, a Translation Lookaside Buffer (TLB) is employed.

The Role of the TLB

A TLB is a specialized cache used by the Memory Management Unit (MMU) to reduce the time taken for virtual-to-physical address translation. It is a form of associative memory that stores recent mappings from the page table. By keeping these mappings readily accessible, the TLB can significantly cut down on the number of memory accesses required for data retrieval.

How the TLB Works

When a virtual address needs to be translated to a physical address, the MMU first checks the TLB (a toy sketch follows the two cases below):

  • TLB Hit: If the corresponding page number is found in the TLB (indicating the PTE is cached), the translation can proceed directly, bypassing the page table lookup in main memory. This results in a significant reduction in memory access time.

  • TLB Miss: If the page number is not in the TLB, a page table lookup in main memory is necessary to retrieve the PTE. After the PTE is fetched, the MMU updates the TLB with this new translation, replacing an existing entry if the TLB is full. Subsequent accesses to this page may then result in TLB hits.
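
A toy model of this flow, assuming a TLB of hypothetical capacity 4 with LRU eviction and a dictionary standing in for the page table walk in main memory:

from collections import OrderedDict

TLB_CAPACITY = 4
tlb = OrderedDict()                     # page -> frame, ordered by recency
page_table = {0: 3, 1: 6, 3: 2}         # hypothetical resident pages

def translate(page):
    if page in tlb:                     # TLB hit: no page table access
        tlb.move_to_end(page)           # refresh its recency
        return tlb[page]
    frame = page_table[page]            # TLB miss: walk the page table
    if len(tlb) >= TLB_CAPACITY:
        tlb.popitem(last=False)         # evict the least recently used entry
    tlb[page] = frame                   # cache the new translation
    return frame

translate(0)        # miss: fills the TLB
translate(0)        # hit: served from the TLB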

Context Switches and the TLB

Upon a context switch, where the CPU switches from executing one process to another, the TLB is usually flushed. This is because the mappings in the TLB for one process are not valid for another process, given that each process has its own page table and, therefore, its own set of virtual-to-physical address mappings. Flushing the TLB ensures that stale translations do not cause incorrect memory accesses.

The Importance of TLB Hits and Misses

The effectiveness of a TLB is often measured by its hit rate, which is the ratio of address translations that are resolved within the TLB to the total number of translations requested. A high TLB hit rate indicates that most memory accesses can bypass the more time-consuming page table lookup, enhancing overall system performance. Conversely, a TLB miss introduces additional latency, as it requires accessing the page table in main memory to resolve the address translation.

The concept of the working set of a process is fundamental to understanding memory management and performance optimization in computer systems. The working set defines the amount of memory that a process requires to operate efficiently without excessive paging, which is the swapping of data between physical memory and disk storage.

Working Set of a Process

The working set of a process is essentially a subset of the total pages in its virtual memory that it is actively using during a given period. This set changes over time as the process accesses different parts of its memory space. The idea, introduced by Peter Denning in the 1960s, is based on the principle of locality of reference, which observes that programs tend to access a relatively small portion of their memory space intensively for a period before moving on to another small portion, and so on.

Importance of Working Set Size

The size of the working set is crucial because it helps the operating system determine the minimum number of pages that need to be kept in physical memory to avoid excessive paging. If the pages within a process's working set are not all resident in physical memory, the process may frequently fault, leading to high paging activity as pages are continually swapped in and out of memory.

Thrashing: When Working Sets Do Not Fit in RAM

Thrashing occurs when there is insufficient RAM to hold the working sets of all active processes. When this happens, the system spends more time paging (swapping pages in and out of physical memory) than executing processes. This condition severely degrades system performance, as the overhead of handling page faults and swapping pages dominates the CPU's time.

Why Thrashing Happens

  1. Insufficient Physical Memory: As the number of running processes increases or as processes grow their working set sizes, the demand for physical memory can exceed its availability.

  2. Overcommitment of Resources: Operating systems often allow for more virtual memory to be allocated to processes than the total size of physical memory, relying on the fact that not all processes will need their full allocation at the same time. However, if too many processes try to use too much of their allocation simultaneously, the system can become overcommitted.

  3. Poor Memory Management: Inefficient algorithms for memory management, page replacement, or scheduling can also lead to situations where the working sets of processes are not efficiently maintained in RAM.

Dealing with Thrashing

To mitigate or prevent thrashing, operating systems may employ several strategies:

  • Working Set Model: Adjusting the size of the working set to ensure that the most frequently accessed pages are kept in physical memory.

  • Swapping: Temporarily moving entire processes from RAM to disk to reduce the load on physical memory.

  • Load Control: Limiting the number of processes competing for memory can help ensure that each has enough physical memory for its working set.

To derive the effective memory access time in a computing system that utilizes both a Translation Lookaside Buffer (TLB) and virtual memory, we assume that accessing the page table takes the same amount of time as a regular memory access. The derivation rests on several key factors; let's break down each of these terms, which are the building blocks of the formula.

$t_{TLB}$ (TLB Access Time)

  • Definition: The time it takes to search the TLB for a virtual-to-physical address translation. This includes the time to determine if the address translation for a given virtual page number is present in the TLB and, if so, to retrieve it.

  • Impact: A shorter $t_{TLB}$ can significantly improve the overall memory access time, especially for systems with high TLB hit rates. It represents the overhead added to every memory access operation due to the TLB lookup process.

$t_{memory}$ (Memory Access Time)

  • Definition: The time required to access a location in physical memory. This is the time taken to read from or write to a physical memory address once the address has been translated from virtual to physical.

  • Impact: The base time for all memory operations. Efficient memory hardware and caching strategies aim to minimize $t_{memory}$ to improve system performance.

$t_{fault}$ (Page Fault Service Time)

  • Definition: The total time required to handle a page fault, including detecting the fault, reading the needed page from secondary storage (like a hard disk or SSD) into physical memory, and updating the page table to reflect the new page's location in physical memory.

  • Impact: Since accessing secondary storage is orders of magnitude slower than accessing physical memory, $t_{fault}$ is substantially higher than the other times. It's a critical performance factor in systems with frequent page faults.

$h_{TLB}$ (TLB Hit Rate)

  • Definition: The probability that a given virtual-to-physical address translation is found in the TLB. It's a measure of how often the TLB lookup process successfully finds the needed translation without having to access the page table.

  • Impact: A higher $h_{TLB}$ means that more memory accesses can be completed quickly, without the additional latency of accessing the page table. Optimizing $h_{TLB}$ is crucial for minimizing effective memory access times.

$p_{fault}$ (Page Fault Rate)

  • Definition: The probability that accessing a page in memory will result in a page fault, indicating that the page is not currently loaded into physical memory and must be fetched from secondary storage.

  • Impact: Like $t_{fault}$, $p_{fault}$ significantly affects system performance. A lower $p_{fault}$ reduces the likelihood of incurring the high cost of servicing page faults, thus improving overall system efficiency.

The derived formula for the effective memory access time in a computing system that utilizes both a Translation Lookaside Buffer (TLB) and virtual memory, including the time to access the TLB, is as follows:

$$T_{effective} = h_{TLB} \cdot p_{fault} \cdot t_{TLB} + h_{TLB} \cdot p_{fault} \cdot t_{memory} - h_{TLB} \cdot t_{memory} + p_{fault} \cdot t_{fault} + t_{TLB} + 2 \cdot t_{memory}$$

The formula for the effective memory access time in a computing system that includes both a Translation Lookaside Buffer (TLB) and virtual memory combines several individual contributors, each with its specific impact on overall memory access efficiency. Here's a brief description of each:

$$h_{TLB} \cdot p_{fault} \cdot t_{TLB} + h_{TLB} \cdot p_{fault} \cdot t_{memory}$$

This term accounts for the scenario where a TLB hit occurs but the page is not resident in physical memory, leading to a page fault. After the fault is serviced, the access is retried, which costs one extra TLB access and one extra memory access; weighted by $h_{TLB} \cdot p_{fault}$, this reflects the cost of resolving a page fault even when the address translation is already cached in the TLB.

$$- h_{TLB} \cdot t_{memory}$$

This subtracted term accounts for the memory access saved on a TLB hit: the base term below charges every access for two memory accesses (page table plus data), but on a TLB hit (probability $h_{TLB}$) the page table lookup is skipped, so one memory access is credited back.

$$p_{fault} \cdot t_{fault}$$

This term represents the time taken to service a page fault, which includes the time to read the required page from disk into memory. The page fault rate indicates how frequently such faults occur, making this term critical in environments with higher rates of page faults.

$$t_{TLB} + 2 \cdot t_{memory}$$

The base time for a memory access operation that includes accessing the TLB and the memory. This term covers the scenarios of TLB misses, where an additional memory access is required to read the page table after the TLB lookup, and TLB hits without page faults, ensuring that all possible paths to memory access are accounted for.

Overall Impact

  • TLB Access Time: Affects every memory access since the TLB is always checked first. Faster TLB access reduces the overall memory access time.

  • Memory Access Time: Fundamental to all operations, whether accessing data directly or reading from the page table during a TLB miss. Minimizing it is crucial for performance.

  • Page Fault Service Time: Significant only when page faults occur but can drastically increase the effective memory access time due to the slow nature of disk access compared to memory.

  • TLB Hit Rate and Page Fault Rate: Probabilistic factors that determine how often the system experiences the best-case scenario (a TLB hit without a page fault) versus more time-consuming scenarios (TLB misses and page faults).

Understanding the individual contributions to the effective memory access time helps in identifying potential bottlenecks and areas for optimization in the memory management system.
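
Plugging illustrative numbers into the formula makes the weights visible; the timings below (10 ns TLB, 100 ns memory, 8 ms fault service) and the rates are assumptions, not measurements:

# Evaluate the effective-access-time formula with illustrative values.
t_tlb   = 10            # ns, TLB access time
t_mem   = 100           # ns, one physical memory access
t_fault = 8_000_000     # ns, page fault service time (disk-bound)
h       = 0.98          # TLB hit rate
p       = 1e-5          # page fault rate

t_eff = (h * p * t_tlb + h * p * t_mem - h * t_mem
         + p * t_fault + t_tlb + 2 * t_mem)
print(round(t_eff, 1))  # -> 192.0 ns: dominated by the memory terms
                        #    plus the rare but expensive fault term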

Here is a quick recap of what actually happens behind the scenes in demand paging.

1. Instruction Execution & Virtual Address Generation

The CPU executes an instruction that requires access to a memory location. During this process, it generates a virtual address for the required data.

2. Deriving Page Number and Offset

The virtual address is divided into two parts:

  • Page Number (PN): Identifies which virtual page contains the data.

  • Offset: Specifies the exact location of the data within the page.

3. TLB Lookup

The CPU uses the Page Number to perform a lookup in the Translation Lookaside Buffer (TLB), a cache that holds a small subset of page table entries for rapid translation of virtual addresses to physical addresses.

If TLB Hit:

  • If the lookup is successful (a TLB hit), the TLB returns the corresponding Frame Number.

  • The CPU then combines this Frame Number with the Offset to form the complete physical address.

  • The physical address is used to access (read or store) the required data in physical memory.

If TLB Miss:

  • If the lookup fails (a TLB miss), the system needs to consult the page table in physical memory to find the mapping.

4. Page Table Lookup

The Page Number is used to find the corresponding entry in the page table, which is stored in physical memory.

5. Validity and Mapping Check

The page table entry (PTE) indicates whether the page is valid and mapped to physical memory.

  • If the page is valid and mapped, the process continues to access the physical memory.

  • If the page is not valid or not mapped (indicating the page is not currently in physical memory), a page fault is raised.

6. Handling Page Faults

When a page fault occurs, the operating system takes several steps:

  • Check Memory Allocation: The OS first checks if the memory the process is trying to access has been allocated. If not, this access is invalid, leading to a segmentation fault, and the process is terminated.

  • Page In: If the memory is allocated but not currently in physical memory, the OS locates the required page in the swap space on disk.

  • Eviction (if necessary): If physical memory is full, the OS selects a frame to evict. The selection is based on a page replacement algorithm (e.g., Least Recently Used).

  • Disk Read: The OS reads the needed page from disk into the selected frame in physical memory.

  • Page Table Update: The page table is updated with the new Frame Number for the just-loaded page, marking it as valid and mapped.

7. Restarting the Instruction

After the required page is brought into physical memory and the page table is updated, the instruction that caused the page fault is restarted.

8. TLB Update on Page Table Hit

If there was a TLB miss but the page was found in the page table (and no page fault occurred), the translation is added to the TLB. This ensures that subsequent accesses to the same page benefit from the faster TLB lookup.

Page replacement policies are crucial in operating systems that use demand paging to manage physical memory more efficiently. When a page fault occurs because a requested page is not in memory, and there is no free space to bring the new page in, the system must decide which existing page to evict. Different policies have different criteria for choosing this page. Here, we'll discuss three key policies: FIFO (First-In, First-Out), LRU (Least Recently Used), and Optimal Page Replacement, and why these policies are needed in the context of demand paging.

FIFO (First-In, First-Out)

FIFO is one of the simplest page replacement policies. It evicts the oldest page in memory, on the assumption that pages loaded earliest are the least likely to still be needed.

Example:

Page Frame: | 1 | 2 | 3 |
Initially:  | - | - | - |

Sequence of page requests: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

1. Load 1: | 1 | - | - |
2. Load 2: | 1 | 2 | - |
3. Load 3: | 1 | 2 | 3 |
4. Request 4, evict 1: | 4 | 2 | 3 |
5. Request 1, evict 2: | 4 | 1 | 3 |
...

LRU (Least Recently Used)

LRU page replacement evicts the page that has not been used for the longest period of time, under the assumption that pages used recently will likely be used again soon.

Example:

Page Frame: | 1 | 2 | 3 |
Initially:  | - | - | - |

Sequence of page requests: 1, 2, 3, 2, 4, 3, 2, 1, 4, 2

1. Load 1: | 1 | - | - |
2. Load 2: | 1 | 2 | - |
3. Load 3: | 1 | 2 | 3 |
4. Access 2 (no change, update usage): | 1 | 2 | 3 |
5. Request 4, evict 1 (oldest in terms of last use): | 4 | 2 | 3 |
...

Optimal Page Replacement

The Optimal Page Replacement algorithm evicts the page that will not be used for the longest period in the future. It is theoretically the best method for minimizing page faults but is not feasible in practice because it requires future knowledge of the reference string.

Example:

Page Frame: | 1 | 2 | 3 |
Initially:  | - | - | - |

Sequence of page requests: 1, 2, 3, 4, 2, 1, 5, 1, 2, 3, 4

1. Load 1: | 1 | - | - |
2. Load 2: | 1 | 2 | - |
3. Load 3: | 1 | 2 | 3 |
4. Request 4, evict 3 (not used until much later): | 1 | 2 | 4 |
...
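
The FIFO and LRU traces above can be replayed with a short simulator; the reference strings are the ones from the examples, and the printed fault counts follow from stepping through them by hand:

from collections import deque

def fifo_faults(refs, frames=3):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())   # evict the oldest page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames=3):
    mem, faults = [], 0                  # list ordered from LRU to MRU
    for p in refs:
        if p in mem:
            mem.remove(p)                # hit: will re-append as MRU
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the least recently used
        mem.append(p)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]))  # -> 9
print(lru_faults([1, 2, 3, 2, 4, 3, 2, 1, 4, 2]))         # -> 6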

Why These Policies are Needed

In the context of demand paging, where physical memory is limited, page replacement policies are essential for deciding which pages to keep in memory and which to swap out. Efficient page replacement can significantly affect a system's performance by reducing page faults and ensuring that the most important pages are kept in memory.

Belady's Anomaly

Belady's anomaly refers to the counterintuitive situation where, for some page replacement algorithms like FIFO, increasing the number of page frames can lead to an increase in the number of page faults. This phenomenon highlights the non-optimal nature of some replacement strategies.

Why Optimal is Only a Benchmark

The Optimal Page Replacement policy serves as a benchmark because it produces the lowest possible page fault rate. However, it's not feasible in real-world systems since it requires perfect knowledge of future requests, which is impossible to predict accurately. Thus, while it's used as a theoretical ideal to measure the efficiency of other page replacement policies, systems must rely on implementable algorithms like FIFO and LRU, which make decisions based on past and current information.

To illustrate a page table scenario with 32-bit addresses and 4 KB pages, let's first outline the basic parameters:

  • Address space size: With 32-bit addresses, we have $2^{32}$ possible addresses.

  • Page size: 4 KB ($2^{12}$ bytes).

  • Total number of pages: $2^{32} / 2^{12} = 2^{20}$ pages.

Memory used by the process: 1 MB, which equals 256 pages since

$$1\ \text{MB} = 2^{20}\ \text{bytes}, \qquad 2^{20} / 2^{12} = 2^{8} = 256$$

Each entry in the page table maps a page in the virtual address space to a page in physical memory. If a process uses only 1 MB of memory, only 256 of these entries will point to actual physical memory pages. The rest of the entries in the page table will be marked as not mapped or invalid.

Each PTE can either point to a physical page (if the virtual page is used by the process) or be marked as not mapped. In our case, only 256 PTEs would be mapped to actual physical memory, and the rest would not be used.

Now, let's calculate the percentage of the page table actually used for mapping versus how much is wasted:

  • Total PTEs: $2^{20}$.

  • Used PTEs: 256.

  • Unused PTEs: $2^{20} - 256$.

Percentage used:

$$\text{Used %} = \left( \frac{256}{2^{20}} \right) \times 100$$

Percentage wasted:

$$\text{Wasted %} = 100 - \text{Used %}$$

In this scenario, only about 0.024% of the page table entries are actually used for mapping, while approximately 99.976% of the space allocated for the page table is wasted. This highlights a significant inefficiency in using a single, large page table for a process that utilizes a small fraction of the addressable memory space.

Hashed page tables use a hash table to manage the mapping between virtual page numbers (VPNs) and physical frame numbers (PFNs). This approach can be particularly efficient when the address space is sparsely populated, as is common in many operating systems. Let's break down how hashed page tables work, along with their advantages and drawbacks.

In a hashed page table, each entry consists of a virtual page number, a pointer to the physical frame (if it exists), and possibly a pointer to the next entry in case of a hash collision. The virtual page number is hashed, and this hash value is used as an index into the page table.

    Hash Function
        |
        v
+---------------+       +-------------+
| Hashed Index  | ----> |  VPN | PFN  | ---> Collision List
+---------------+       +-------------+
                             |
                             v
                       Physical Memory

When a virtual address is used, the system (see the sketch after this list):

  1. Extracts the VPN from the virtual address.

  2. Applies a hash function to the VPN to get a hashed index.

  3. Searches the linked list at that index for an entry with the matching VPN.

  4. If found, the corresponding PFN is used to access physical memory.
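
A minimal chained hash table illustrates the lookup; the bucket count and the VPN/PFN values are hypothetical:

# Toy hashed page table: each bucket holds a collision list of
# (VPN, PFN) pairs, searched linearly on lookup.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, pfn):
    buckets[hash(vpn) % NUM_BUCKETS].append((vpn, pfn))

def lookup(vpn):
    for entry_vpn, pfn in buckets[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:             # walk the chain for a match
            return pfn
    return None                          # unmapped: would page-fault

insert(0x12345, 42)                      # hypothetical VPN -> PFN 42
assert lookup(0x12345) == 42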

Advantages

  • Space Efficiency: Hashed page tables can be more space-efficient than traditional multi-level page tables for sparse address spaces because they only need entries for mapped pages.

  • Scalability: They can efficiently map large address spaces without dedicating large, contiguous blocks of memory for page tables.

Drawbacks

  • Collision Handling: Hash collisions require additional handling, typically through linked lists, which can increase memory access times due to the need to traverse these lists.

  • Variable Access Times: Access times can vary depending on the length of the collision list, leading to less predictable performance compared to traditional page tables.

  • Complexity: Implementation and management of hashed page tables are more complex, especially with regards to handling collisions and resizing the table.

Virtual Address Space
+---------------------------------+
|      Sparse Pages Mapped        |
+---------------------------------+
                 |
                 | Virtual Page Number (VPN)
                 v
           Hash Function
                 |
                 | Hashed Index
                 v
         +-------------------+        +-------------------+
Hashed ->| Index | Head Ptr  | -----> | VPN | PFN | Next |----> ...
Page     +-------------------+        +-------------------+       ...
Table          ...                        |      ^
                                 +--------+      |
                                 v               |
                            Physical Memory     ...

This diagram represents how a hashed page table maps sparse virtual pages to physical frames using hashing, with collision handling through linked lists. Each index in the table points to the head of a list of entries that hash to the same index, allowing efficient search within a potentially large address space.

Inverted page tables represent a different approach to page mapping, focusing on reducing the memory required for the page table itself, especially in systems with large physical memory. Instead of having an entry for every virtual page in the process's address space, an inverted page table has one entry for each frame of physical memory. This approach significantly reduces the size of the page table but requires a different method for finding the physical memory associated with a given virtual address.

In an inverted page table, each entry corresponds to a physical frame and contains information about which virtual page (if any) is stored in that frame. This information typically includes the virtual page number (VPN) and the process identifier (PID) that owns the page, allowing the same frame to be shared among different processes securely.

Physical Memory
+------+-------+-------+
|Frame |  VPN  |  PID  |
+------+-------+-------+
|  0   | VPN_0 | PID_A |
|  1   | VPN_1 | PID_A |
|  2   | VPN_2 | PID_B |
| ...  |  ...  |  ...  |
|  N   | VPN_N | PID_X |
+------+-------+-------+

When the system needs to translate a virtual address to a physical one (see the sketch after this list):

  1. It takes the virtual address and the current process identifier.

  2. It then searches the inverted page table for an entry with a matching VPN and PID.

  3. If found, the index of that entry in the table directly corresponds to the physical frame number.
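
A toy version of this search, with a small hypothetical table whose list index doubles as the frame number:

# Toy inverted page table: one (PID, VPN) entry per physical frame.
inverted = [
    ("A", 0),    # frame 0 holds VPN 0 of process A
    ("A", 1),    # frame 1 holds VPN 1 of process A
    ("B", 2),    # frame 2 holds VPN 2 of process B
]

def translate(pid, vpn):
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):
            return frame                 # the entry's index is the frame
    return None                          # not resident: page fault

assert translate("B", 2) == 2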

Advantages

  • Compact: The table size is proportional to the size of physical memory, not the potential virtual address space, making it more scalable for systems with large physical memory.

  • Shared Pages: It can easily handle shared pages between processes by including process identifiers in the table.

Drawbacks

  • Search Time: Searching for the right entry can be time-consuming because the table does not inherently index by VPN and PID. This is often mitigated by additional hardware or software structures, like hash tables.

  • Increased Complexity: Managing inverted page tables can be more complex, especially with the need for additional mechanisms to speed up address translation.

    Virtual Address Space                    Physical Memory
+-----------------------------+          +------------------------------+
|  Process A     | Process B  |          | Frame |  VPN   |     PID     |
| +------------+ | +--------+ |          +------------------------------+
| | VPN_0 VPN_1| | | VPN_2..| |          |   0   | VPN_0  |  Process A  |
| +------------+ | +--------+ |          |   1   | VPN_1  |  Process A  |
+-----------------------------+          |   2   | VPN_2  |  Process B  |
              |                          |  ...  |  ...   |     ...     |
              | VPN                      |   N   | VPN_N  |  Process X  |
              |                          +------------------------------+
            Lookup
              |
              v
       +----------------+
       | Inverted Page  | -----> Finds frame if VPN and PID match
       |     Table      |
       +----------------+

This shows how an inverted page table links each physical frame to a virtual page and its owning process. The lookup process requires scanning the table for a matching VPN and PID, highlighting the trade-off between table compactness and the complexity of address translation.

Hierarchical paging, often referred to as multi-level paging, involves breaking the page table itself into multiple levels, reducing the need to allocate a large contiguous block of memory for a page table. This method is particularly useful for efficiently managing large address spaces, such as those found in 32-bit or 64-bit systems.

In hierarchical paging, the virtual address is divided into several parts: each part indexes into a different level of the page table. The last part of the address is used as the offset within the page. For a simple two-level paging system, the virtual address could be divided as follows (assuming a 32-bit address space for illustration):

  • The highest-order bits index into the first level page table (Directory).

  • The next set of bits indexes into the second level page table.

  • The lowest-order bits are used as the offset within the page.

Virtual Address
+---------------------------------------------+
| Dir Index | Table Index |    Page Offset    |
+---------------------------------------------+
      |                            |
      | Dir Index                  | Table Index
      v                            v
+---------------------------+       +---------------------------+
| First Level Page Table    | --->  | Second Level Page Table   | ---> Physical Address
| (Page Directory)          | (base)| (Page Table Entries)      |
+---------------------------+       +---------------------------+
                                      |         |         |
                                      v         v         v
                                +------+------+------+
                                | Page | Page | Page | ... Physical Memory
                                +------+------+------+
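
A two-level lookup can be sketched with nested dictionaries; the 10/10/12 bit split and the table contents below are illustrative assumptions, and only the second-level tables that are actually needed exist:

OFFSET_BITS, TABLE_BITS = 12, 10          # 4 KB pages, 1024-entry tables

# Directory index 1 points at a second-level table that maps
# table index 3 to (hypothetical) physical frame 0x4B.
directory = {1: {3: 0x4B}}

def translate(vaddr):
    dir_idx = vaddr >> (OFFSET_BITS + TABLE_BITS)
    tbl_idx = (vaddr >> OFFSET_BITS) & ((1 << TABLE_BITS) - 1)
    offset  = vaddr & ((1 << OFFSET_BITS) - 1)
    frame = directory[dir_idx][tbl_idx]   # one lookup per level
    return (frame << OFFSET_BITS) | offset

vaddr = (1 << 22) | (3 << 12) | 0x2A      # dir 1, table 3, offset 0x2A
assert translate(vaddr) == (0x4B << 12) | 0x2A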

Advantages of Hierarchical Paging

  • Space Efficiency: Only the parts of the page table that are actually used need to be allocated. This is particularly beneficial for sparse address spaces, where a process uses a small portion of the available address space.

  • Scalability: It allows for efficient use of large address spaces without requiring a single, massive page table.

  • Flexibility: By adjusting the number of levels and the number of entries at each level, the system can be tailored to different sizes of physical and virtual address spaces.

How It Saves Space

In a single-level page table system, the entire page table must be allocated even if only a small portion of the address space is used. In contrast, hierarchical paging only requires allocation of page table parts that map the actually used virtual address space. This can significantly reduce the memory required for page tables, especially in systems with large virtual address spaces but relatively small physical memory or sparse usage patterns.

Drawbacks

  • Increased Access Time: Accessing a physical address requires multiple memory accesses, one for each level of the page table, potentially increasing the time it takes to translate a virtual address to a physical address.

  • Complexity: Managing multiple levels of page tables adds complexity to the memory management subsystem of the operating system.

Hierarchical paging offers a compromise between space efficiency and access time, making it a widely adopted method in modern computer systems for managing the mapping between virtual and physical memory.

Page sharing between processes is a common technique used to optimize memory usage, especially for common code or data, such as shared libraries or the text segment of executables. When two (or more) processes share pages, their page tables contain entries that point to the same frames in physical memory. This can significantly reduce the overall memory footprint of these processes.

Consider two processes, Process A and Process B, which need to share a common set of pages (for example, a shared library). The operating system sets up their page tables such that certain entries in both tables point to the same physical frames. Here's a simplified illustration:

Process A Page Table             Process B Page Table
+--------------+-----+           +--------------+-----+
| Virtual Page | PFN |           | Virtual Page | PFN |
+--------------+-----+           +--------------+-----+
|      1       |  5  |--+     +--|      3       |  5  |
|      2       |  6  |--+     +--|      4       |  6  |
+--------------+-----+  |     |  +--------------+-----+
                        |     |
                        v     v
               Physical Memory Frames
               +-------------------------+
               | Frame 5 | shared data   |
               +-------------------------+
               | Frame 6 | shared data   |
               +-------------------------+

Detailed Explanation

  • PFN (Page Frame Number): Refers to the index of a frame in physical memory.

  • Virtual Page: An index used by a process to refer to its own address space. It gets translated to a PFN through the page table.

In the diagram above, both Process A and Process B have entries in their respective page tables that map their virtual pages to the same PFNs in physical memory (e.g., Process A's virtual page 1 maps to PFN 5, and Process B's virtual page 3 also maps to PFN 5). This setup means that both processes can access the data in these shared frames as if the data were part of their own virtual address space, even though the physical memory is shared.
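
In miniature, sharing is just two page tables whose entries name the same frames; the frame contents below are placeholders:

# Two per-process page tables pointing at the same physical frames.
frames = {5: b"shared library code", 6: b"shared data"}   # hypothetical
pt_a = {1: 5, 2: 6}     # Process A: virtual pages 1, 2 -> frames 5, 6
pt_b = {3: 5, 4: 6}     # Process B: virtual pages 3, 4 -> same frames

# Both processes reach the very same bytes through different virtual pages.
assert frames[pt_a[1]] is frames[pt_b[3]]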

Advantages of Page Sharing

  • Memory Efficiency: By sharing common code or data, the system uses less physical memory for these components, leaving more memory available for other uses.

  • Quick Inter-process Communication (IPC): Shared memory can serve as an efficient means for IPC, as changes made by one process in the shared memory are immediately visible to the other process(es) sharing that memory.

Considerations

  • Security and Isolation: The operating system must ensure that shared pages do not compromise the security and isolation between processes. Typically, shared pages are read-only (like shared libraries). If writable shared pages are needed, careful synchronization and access control are required.

  • Coherency: If any shared data can be changed, the system must ensure that all processes have a coherent view of that data. This often requires additional mechanisms, like locks or semaphores, to manage access to shared pages.

Page sharing is a powerful technique in memory management that allows for both memory usage optimization and efficient inter-process communication.

Segmentation is a memory management scheme that supports the logical view of memory as a collection of variable-sized segments. Each segment represents a logical unit such as a function, an array, or a module of a program. Segmentation allows programs to be more easily structured and accessed according to how they are logically organized.

Components of Segmentation

  • Segment Table: This table stores information about each segment in a process's address space. It includes the base address of the segment in physical memory and the length of the segment.

  • Base Address: The starting physical address of a segment in memory.

  • Limit (or Length): The size of the segment, which helps in checking for out-of-bounds accesses.

  • Segment Number (or ID): A unique identifier for each segment, used for addressing.

Address Translation in Segmentation

In segmentation, a logical address consists of two parts: a segment number and an offset. The segment number identifies which segment the address refers to, while the offset specifies the location within the segment.

Here’s how address translation works in segmentation (a code sketch follows the detailed explanation below):

Logical Address
+------------------------+
| Segment Number | Offset|
+------------------------+

          |
          | Segment Number
          v
+------------------------+      +------------------------+
|    Segment Table       | ---> | Base Address |  Limit  |
+------------------------+      +------------------------+
                                       |
                                       | Base Address + Offset
                                       v
                               +------------------------+
                               |    Physical Memory     |
                               +------------------------+

Detailed Explanation

  1. Segment Number: The logical address's segment part is used to index into the segment table and find the corresponding segment entry.

  2. Base Address: Once the segment is identified, its base address is retrieved from the segment table.

  3. Offset: The offset is then added to the base address to calculate the physical memory address. This address must be within the segment's bounds, as defined by the limit value in the segment table entry.

  4. Limit Check: If the offset is not within the segment's bounds (that is, offset >= limit), the access falls outside the segment, which results in an error.
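
A compact sketch of this translation, using a hypothetical two-entry segment table and the limit check from step 4:

class SegmentationError(Exception):
    pass

# Hypothetical segment table: segment number -> (base, limit).
segment_table = {
    0: (0x1000, 0x400),   # e.g. code: base 0x1000, 1 KB long
    1: (0x8000, 0x100),   # e.g. data: base 0x8000, 256 bytes long
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                      # step 4: limit check
        raise SegmentationError(f"offset {offset:#x} out of bounds")
    return base + offset                     # step 3: base + offset

assert translate(1, 0x20) == 0x8020
# translate(1, 0x200) would raise SegmentationError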

Advantages of Segmentation

  • Logical Organization: Segmentation reflects the logical organization of a program, making it easier for developers to manage their code and data.

  • Protection and Sharing: Different segments can have different protection levels (e.g., read-only, executable), and segments can be shared between processes.

  • Flexibility: The size of segments can vary, allowing for efficient use of memory by allocating exactly what is needed.

Considerations

  • Fragmentation: Unlike paging, segmentation can lead to external fragmentation, as free memory blocks may become scattered.

  • Segment Table Size: The segment table itself can grow large if a process has many segments, though this is typically less of an issue than the size of a page table in a paging system.

Segmentation and paging are both memory management schemes used by operating systems to manage applications' memory requests and the physical memory of a computer. While both methods aim to facilitate memory use and protection, they differ fundamentally in their approaches and underlying principles.

Quick Comparison: Segmentation versus Paging

Segmentation

Concept: Segmentation views memory as a collection of variable-sized segments, each representing a logical unit of the application, such as functions, arrays, or program modules. Segments vary in size, reflecting the structure of the application.

Addressing: A logical address in segmentation consists of two parts: a segment number and an offset. The segment number identifies which segment the address refers to, and the offset specifies the location within that segment.

Memory Management: The operating system maintains a segment table for each process, with each entry containing the base address of a segment in physical memory and the segment's length. The OS uses the segment table to translate logical addresses into physical addresses and to enforce memory protection.

Advantages:

  • Reflects the logical structure of applications.

  • Can provide a high degree of protection and sharing at the granularity of segments.

  • Flexible, since segments can vary in size according to the needs of the application.

Disadvantages:

  • Can lead to external fragmentation, as free memory space gets fragmented over time.

  • Handling variable-sized segments can be more complex and less efficient than fixed-sized units.

Paging

Concept: Paging divides the computer's memory into fixed-size units called pages (in the virtual space) and frames (in the physical space). Paging abstracts the memory to give the appearance of a large, continuous address space.

Addressing: A logical (virtual) address in paging is divided into a page number and an offset. The page number is used to index into a page table to find the corresponding frame number in physical memory, to which the offset is then added.

Memory Management: The operating system maintains a page table for each process, which maps virtual pages to physical frames. This table is used to translate virtual addresses to physical addresses and to control access to memory.

Advantages:

  • Eliminates external fragmentation by using fixed-size pages and frames.

  • Simplifies memory management by abstracting the memory as a uniform array of storage.

  • Supports virtual memory, allowing processes to use more memory than physically available.

Disadvantages:

  • Can suffer from internal fragmentation, where the last page of a process may not be fully utilized.

  • The overhead of managing page tables, especially in systems with large virtual address spaces.

Key Differences

  • Granularity: Segmentation is variable-sized, matching the logical sections of a program, whereas paging uses fixed-sized pages.

  • Memory Management Approach: Segmentation manages memory based on logical constructs of a program, while paging abstracts memory into equal-sized blocks, disregarding the program's logical structure.

  • Fragmentation: Segmentation can lead to external fragmentation. Paging avoids external fragmentation but can suffer from internal fragmentation.

  • Address Translation: Segmentation requires a segment number and offset, with each segment having a different size. Paging uses a uniform page size, simplifying address translation but requiring additional structures (like page tables) for mapping.

  • Purpose: Segmentation aims to closely align memory management with the program's logical structure, enhancing protection and sharing. Paging focuses on efficiently utilizing physical memory and supporting virtual memory, making it suitable for systems with limited memory and those requiring isolation between processes.

Segmentation and paging are two fundamental memory management schemes used in operating systems to map logical addresses generated by programs to physical addresses in memory. Each method has its advantages and challenges, particularly in how they handle memory allocation and fragmentation. Understanding external and internal fragmentation is key to appreciating the differences between paging and segmentation.

External Fragmentation occurs when there is enough total memory space to satisfy a request but the available spaces are not contiguous; thus, the system cannot allocate a block of memory due to fragmentation of free space. This problem is more prevalent in systems that use variable-sized allocation units, such as segmentation.

  • Segmentation: Segmentation naturally leads to external fragmentation because programs are loaded into variable-sized segments in memory. Over time, as segments are loaded and unloaded, the memory space gets fragmented into small, non-contiguous free blocks, making it difficult to find contiguous space large enough for new segments, even when there is enough total free memory.

Internal Fragmentation occurs when allocated memory may exceed the requested memory, leading to wasted space within allocated regions. This phenomenon is more common in systems that allocate memory in fixed-sized blocks or pages, regardless of the exact amount of requested memory.

  • Paging: Paging suffers from internal fragmentation because memory is allocated in fixed-size blocks called pages. If the information stored in a page does not completely fill the page's capacity, the remaining space within the page is wasted since it cannot be used by other data or instructions. However, since all pages are the same size, paging does not suffer from external fragmentation; free memory is always in whole pages that can be used to satisfy future allocations.

Comparison in the Context of Fragmentation

  • Segmentation: Reflects the logical organization of programs into segments like code, data, and stack, with each segment growing or shrinking as required. This flexibility matches the natural structure of programs but can lead to external fragmentation as memory is consumed and released. Segmentation tries to minimize internal fragmentation by allocating exactly what is needed for a logical segment, but it cannot entirely avoid it, especially when segment sizes do not align well with the underlying physical memory architecture.

  • Paging: Simplifies memory management by ignoring the logical structure of programs, dividing virtual memory into fixed-size blocks (pages) and physical memory into blocks of the same size (frames). Paging effectively eliminates external fragmentation by ensuring that any free frame can satisfy a request for a new page. However, it can lead to internal fragmentation when a process's memory requirement is not an exact multiple of the page size.

Ways to Tackle External Fragmentation

  1. Compaction: Moving allocated segments to consolidate free memory into a single contiguous block. This is often costly in processing time and requires that segments be relocatable at run time; a sketch follows this list.

  2. Dynamic Relocation: Variable-sized schemes such as segmentation can mitigate (though not eliminate) external fragmentation when segments are dynamically relocatable, because relocatable segments can be moved to merge adjacent free blocks.

  3. Memory Pooling: Dividing memory into pools based on the size of allocations can help reduce external fragmentation but may not be suitable for all applications.
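
As a minimal sketch of compaction (item 1 above), the fragment below slides the allocated blocks of a simulated memory array toward address zero, leaving one contiguous free region. The block table and memmove-based relocation are illustrative assumptions; a real system must also update every reference into the moved regions, for example through base registers.

    #include <string.h>

    #define MEM_SIZE 1024

    typedef struct { size_t off, len; int used; } block_t;

    static unsigned char memory[MEM_SIZE];

    /* Slide every allocated block toward address 0. Assumes blocks[]
       is ordered by offset. Returns the start of the free region.    */
    static size_t compact(block_t *blocks, size_t n) {
        size_t next = 0;                      /* next free byte      */
        for (size_t i = 0; i < n; i++) {
            if (!blocks[i].used)
                continue;
            if (blocks[i].off != next) {
                memmove(&memory[next], &memory[blocks[i].off], blocks[i].len);
                blocks[i].off = next;         /* record new location */
            }
            next += blocks[i].len;
        }
        return next;      /* everything from here to MEM_SIZE is free */
    }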

Ways to Tackle Internal Fragmentation

  1. Slack Allocation: Deliberately grant slightly more memory than requested so that small increases in a process's needs can be absorbed without a new allocation; this accepts a small, bounded amount of internal fragmentation in exchange for fewer reallocations.

  2. Buddy System: Allocates memory by repeatedly halving a block until reaching the smallest power-of-two size that satisfies the request. Rounding up to a power of two still causes internal fragmentation, but the buddy system bounds the waste to less than half of each block and makes freed "buddies" cheap to merge back together; a sketch follows this list.

  3. Page Size Selection: Choosing an appropriate page size that balances the overhead of managing many small pages with the waste of internal fragmentation in larger pages. Some systems use multiple page sizes to optimize for different use cases.
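
The buddy system's block sizing (item 2 above) can be illustrated in a few lines of C; the minimum block size and the request size are assumptions for the example.

    #include <stdio.h>

    /* Smallest power-of-two block, at least min_block bytes,
       that can hold a request of the given size.              */
    static unsigned buddy_block_size(unsigned request, unsigned min_block) {
        unsigned size = min_block;
        while (size < request)
            size *= 2;        /* block sizes are powers of two */
        return size;
    }

    int main(void) {
        unsigned size = buddy_block_size(3000, 64);   /* -> 4096 */
        printf("block: %u, internal fragmentation: %u\n",
               size, size - 3000);                    /* 4096, 1096 */
        return 0;
    }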

When allocating memory in systems that use dynamic memory allocation, such as in operating systems and their memory management routines, several strategies can be applied to decide how to allocate blocks of memory to processes or for various memory requests. The three common strategies are Best Fit, Worst Fit, and First Fit. Each strategy has its methodology for selecting a free block of memory from the pool of available memory.

Best Fit

  • How It Works: This strategy searches all available free memory blocks and allocates the smallest block that is large enough to fulfill the request. The goal is to find the "best fit" for the current request to minimize wasted space in the allocated block (minimizing internal fragmentation).

  • Allocation Process:

    1. Search the list of free memory blocks and find all blocks that are large enough to satisfy the memory request.

    2. From these blocks, choose the smallest one.

    3. If the selected block is significantly larger than the request, split it into two parts: one that satisfies the request and another that remains free.

    4. Mark the allocated block as in use and update the free memory list.

  • Pros and Cons: The main advantage of best fit is that it minimizes wasted space within each allocation. However, the small remainders left over from each split accumulate into many tiny, unusable fragments over time (external fragmentation), and every allocation must scan the entire free list.

Worst Fit

  • How It Works: The worst fit method allocates memory by selecting the largest available block of free memory. The rationale behind this strategy is to leave as large a portion as possible free for future allocations, reducing the likelihood of not being able to satisfy a large request later.

  • Allocation Process:

    1. Search through the list of free blocks and select the largest block.

    2. If the block is much larger than needed, split it, allocate the required amount, and leave the rest free.

    3. Update the status of the allocated memory and the list of free blocks accordingly.

  • Pros and Cons: Worst fit keeps the leftover piece of each split as large as possible, on the theory that a large remainder is more likely to be useful later. In practice, however, it consumes the largest blocks first, so a subsequent request for a large contiguous region may fail even when plenty of total memory remains free; simulations generally show it utilizes memory worse than first fit or best fit.

First Fit

  • How It Works: This strategy allocates the first block of memory that is large enough to satisfy the request. The list of free memory blocks is searched sequentially until an adequately sized block is found.

  • Allocation Process:

    1. Traverse the free memory list from the beginning and select the first block that is large enough to fulfill the request.

    2. If necessary, split the block into allocated and remaining free portions.

    3. Adjust the memory allocation and free lists to reflect the allocation.

  • Pros and Cons: First fit is generally faster than best fit and worst fit because it stops at the first adequate block instead of scanning the whole free list, offering a good balance between allocation time and fragmentation. Its drawback is that repeated allocations tend to splinter the front of the free list into small fragments, which slows later searches and can degrade utilization.

The choice of strategy depends on the specific requirements and constraints of the application or system. For example, real-time systems might prefer first fit due to its speed, whereas systems with a lot of dynamic memory allocation and deallocation might choose best fit to minimize wasted space. Implementing these strategies effectively requires maintaining a well-organized list of free memory blocks, possibly with additional structures to speed up searches for the best or worst fit.
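
The sketch below implements all three policies over a simple array-based free list. The fixed-size table and the split logic are simplifying assumptions: a real allocator would maintain a linked free list, coalesce adjacent blocks on free, and track which blocks are in use.

    #include <stdio.h>

    typedef struct { unsigned off, len; } free_block;

    typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } policy_t;

    /* Return the index of the chosen free block, or -1 if none fits. */
    static int choose(const free_block *fl, int n, unsigned req, policy_t p) {
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (fl[i].len < req)
                continue;                     /* too small            */
            if (p == FIRST_FIT)
                return i;                     /* stop at first match  */
            if (pick < 0 ||
                (p == BEST_FIT  && fl[i].len < fl[pick].len) ||
                (p == WORST_FIT && fl[i].len > fl[pick].len))
                pick = i;
        }
        return pick;
    }

    /* Carve req bytes from the chosen block; the remainder stays free.
       (A block shrunk to length 0 is simply never chosen again.)      */
    static long alloc_block(free_block *fl, int n, unsigned req, policy_t p) {
        int i = choose(fl, n, req, p);
        if (i < 0)
            return -1;                        /* allocation fails     */
        unsigned addr = fl[i].off;
        fl[i].off += req;
        fl[i].len -= req;
        return (long)addr;
    }

    int main(void) {
        free_block fl[] = { {0, 100}, {200, 500}, {800, 150} };
        printf("first fit: %ld\n", alloc_block(fl, 3, 120, FIRST_FIT)); /* 200 */
        printf("best fit:  %ld\n", alloc_block(fl, 3, 120, BEST_FIT));  /* 800 */
        return 0;
    }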

In memory management systems, especially those dealing with paging and segmentation, certain hardware registers play critical roles in the address translation process and memory protection. Here's a detailed look at each of these registers:

PTBR (Page Table Base Register)

  • Purpose: Holds the base address of the page table in memory. With single-level paging it points directly to the page table; with hierarchical (multi-level) paging it points to the top-level table; with an inverted page table it points to the single system-wide table.

  • Function: The CPU uses the address in the PTBR to access the page table during the virtual-to-physical address translation process. By knowing where the page table starts, the system can calculate the location of any page table entry based on the virtual address it's translating.

PTLR (Page Table Length Register)

  • Purpose: Specifies the size of the page table or the maximum virtual page number (VPN) that can be translated using the current page table. It effectively limits the range of valid virtual addresses.

  • Function: It is used to check whether a virtual address is valid before attempting translation. If the page number part of a virtual address is greater than or equal to the value in the PTLR, it indicates an illegal or out-of-bounds access, leading to a segmentation fault or similar error.
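
Putting the two registers together, the sketch below shows how hardware might locate and validate a page-table entry; the register stand-ins and entry layout are illustrative assumptions. The PTLR bound is checked first, and only then is the entry's address computed from the PTBR.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { unsigned valid : 1, frame : 20; } pte_t;

    /* Illustrative stand-ins for the hardware registers. */
    static pte_t   *PTBR;   /* base address of the page table */
    static uint32_t PTLR;   /* number of entries in the table */

    /* Fetch the entry for a virtual page number, or NULL when the
       address is out of range (where hardware would raise a trap). */
    static const pte_t *lookup_pte(uint32_t vpn) {
        if (vpn >= PTLR)
            return NULL;         /* out-of-bounds virtual address */
        return &PTBR[vpn];       /* PTBR + vpn * sizeof(pte_t)    */
    }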

STBR (Segment Table Base Register)

  • Purpose: Points to the base address of the segment table in memory. The segment table is crucial in segmentation-based memory management systems, where memory is divided into variable-sized segments.

  • Function: The CPU uses the address in the STBR to access the segment table when translating logical segment addresses into physical addresses. Each entry in the segment table contains the physical base address of a segment and its size, allowing the system to check if an access is valid and to compute the physical memory address for segment-based accesses.

STLR (Segment Table Length Register)

  • Purpose: Indicates the number of segments defined in the segment table or the maximum segment number that can be legally accessed. It sets the upper limit on segment numbers, serving a role similar to the PTLR but for segmentation.

  • Function: Before translating a segment-based logical address, the system checks if the segment number is less than the value in the STLR. If the segment number in a logical address is greater than or equal to the STLR's value, the access is considered illegal, leading to an error.
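
Analogously for segmentation, a minimal sketch (again with illustrative stand-ins for the registers and table layout) performs the STLR bound check, then the per-segment limit check, before forming the physical address:

    #include <stdint.h>

    typedef struct { uint32_t base, limit; } segment_t;

    /* Illustrative stand-ins for the hardware registers. */
    static segment_t *STBR;   /* base address of the segment table */
    static uint32_t   STLR;   /* number of segments in the table   */

    /* Translate (segment number, offset); returns 0 on an illegal access. */
    static int translate_seg(uint32_t seg, uint32_t off, uint32_t *paddr) {
        if (seg >= STLR)                 /* non-existent segment: trap */
            return 0;
        if (off >= STBR[seg].limit)      /* offset past segment's end  */
            return 0;
        *paddr = STBR[seg].base + off;   /* physical = base + offset   */
        return 1;
    }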

Handling of Illegal Accesses

  • Page Number ≥ PTLR Value: If the page number part of a virtual address is greater than or equal to the value in the PTLR, the address lies outside the range covered by the page table. The hardware raises a trap to the operating system (distinct from an ordinary page fault, which concerns a valid page that is merely not resident), and the OS takes appropriate action, typically terminating the offending process.

  • Segment Number ≥ STLR Value: Similarly, if the segment number part of a logical address is greater than or equal to the STLR's value, the address refers to a non-existent segment. The hardware again traps to the operating system, which usually surfaces the violation as a segmentation fault or terminates the process.