Virtual Memory Management Part - I
Amrita School of Engineering, Bangalore
Ms. Harika Pudugosula
Lecturer
Department of Electronics & Communication Engineering
• Background
• Demand Paging
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
Introduction
• Virtual memory is a technique that allows the execution of processes that are not
completely in memory
• One major advantage of this scheme is that programs can be larger than physical
memory
• Virtual memory abstracts main memory into an extremely large, uniform array of
storage, separating logical memory as viewed by the user from physical memory
• This technique frees programmers from the concerns of memory-storage limitations
• Virtual memory also allows processes to share files easily and to implement shared
memory
• In addition, it provides an efficient mechanism for process creation
• Virtual memory is not easy to implement, however, and may substantially decrease
performance if it is used carelessly
Objectives
• To describe the benefits of a virtual memory system
• To explain the concepts of demand paging, page-replacement algorithms, and
allocation of page frames
• To discuss the principle of the working-set model
Background
• Program code needs to be in memory to execute, but the entire code is rarely used
at the same time
• Examples: error-handling code, unusual routines, large data structures (an
assembler/compiler symbol table may have room for 3,000 symbols, although the
average program uses fewer than 200)
• Entire program code not needed at same time in physical memory
• Virtual memory is a technique that allows the execution of processes that are not
completely in memory
• Benefits of executing a program that is not completely in main memory
• Program no longer constrained by limits of physical memory (programs can be
larger than physical memory)
• Each program takes less memory while running -> more programs run at the
same time
• Increased CPU utilization and throughput with no increase in response time
or turnaround time
• Less I/O needed to load or swap programs into memory -> each user program
runs faster
• Virtual memory involves the separation of logical
memory as perceived by users from physical
memory
• This separation allows an extremely large virtual
memory to be provided for programmers when only
a smaller physical memory is available
• Virtual memory makes the task of programming
much easier, because the programmer no longer
needs to worry about the amount of physical
memory available
• The virtual address space of a process refers to the
logical (or virtual) view of how a process is stored in
memory
• This view is that a process begins at a certain logical
address—say, address 0—and exists in contiguous
memory
• In addition to separating logical memory from physical memory, virtual memory
allows files and memory to be shared by two or more processes through page
sharing
• This leads to the following benefits -
• System libraries can be shared by several processes through mapping of the
shared object into a virtual address space. A library is mapped read-only into the
space of each process that is linked with it
• Virtual memory allows one process to create a region of memory that it can
share with another process
• Pages can be shared during process creation with the fork() system call, thus
speeding up process creation
Demand Paging
• Consider how an executable program might be loaded from disk into memory
• One option is to load the entire program in physical memory at program execution
time
• May not initially need the entire program in memory
• With demand-paged virtual memory, pages are loaded only when they are
demanded during program execution
• Pages that are never accessed are thus never loaded into physical memory
• Processes reside in secondary memory (usually a disk); when we want to execute a
process, we swap it into memory
• A lazy swapper never swaps a page into memory unless that page will be needed
• In the context of a demand-paging system, use of the term “swapper” is technically
incorrect
• A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process
Figure. Transfer of a paged memory to contiguous disk space
Basic Concepts
• The pager brings into memory only those pages it expects will be used. Thus, it
avoids reading into memory pages that will not be used anyway, decreasing the
swap time and the amount of physical memory needed
• Some form of hardware support is needed to distinguish between the pages that
are in memory and the pages that are on the disk
• With each page-table entry a valid–invalid bit is associated
(v --> in memory, i.e. memory resident; i --> not in memory)
• When this bit is set to “valid,” the associated page is both legal and in memory
• If the bit is set to “invalid,” the page either is not valid (that is, not in the logical
address space of the process) or is valid but is currently on the disk
• The page-table entry for a page that is brought into memory is set as usual, but the
page-table entry for a page that is not currently in memory is either simply marked
invalid or contains the address of the page on disk
• Initially, the valid–invalid bit is set to i on all entries
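The valid–invalid bit check can be sketched as a tiny address-translation routine. The names (`PageTableEntry`, `PageFault`), the 4 KB page size, and the frame numbers are illustrative, not taken from any real MMU:

```python
# Sketch of a page-table lookup with a valid-invalid bit.
# All names and numbers here are invented for illustration.

PAGE_SIZE = 4096

class PageFault(Exception):
    """Raised when the referenced page is not memory resident."""

class PageTableEntry:
    def __init__(self, frame=None, valid=False):
        self.frame = frame      # physical frame number (if resident)
        self.valid = valid      # v --> in memory, i --> not in memory

def translate(page_table, virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    entry = page_table[page]
    if not entry.valid:         # bit is 'i': trap to the operating system
        raise PageFault(page)
    return entry.frame * PAGE_SIZE + offset

# Initially every entry is marked invalid ('i')
table = [PageTableEntry() for _ in range(8)]
table[0] = PageTableEntry(frame=5, valid=True)

print(translate(table, 100))    # page 0 is resident: 5 * 4096 + 100 = 20580
```

Accessing any address on pages 1–7 would raise `PageFault`, modeling the trap described above.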
Figure. Page table when some pages are not in main memory
• During MMU address translation, if the valid–invalid bit in the page-table entry is
i --> page fault
• What happens if the process tries to access a page that was not brought into
memory? A page fault will occur
• Page fault - occurs when the CPU tries to access a page that is not present in main
memory
• The procedure for handling this page fault is straightforward - (Steps to handle page
fault)
1. We check an internal table (usually kept with the process control block)
for this process to determine whether the reference was a valid or an invalid
memory access
2. If the reference was invalid, we terminate the process. If it was valid but we
have not yet brought in that page, we now page it in
3. We find a free frame (by taking one from the free-frame list, for example)
4. We schedule a disk operation to read the desired page into the newly allocated
frame
5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory
6. We restart the instruction that was interrupted by the trap. The process can now
access the page as though it had always been in memory
• After this page is brought into memory, the process continues to execute, faulting as
necessary until every page that it needs is in memory
• At that point, it can execute with no more faults. This scheme is pure demand
paging - never bring a page into memory until it is required
• Some programs could access several new pages of memory with each instruction
execution, possibly causing multiple page faults per instruction - a situation that
would result in unacceptable system performance
• Programs tend to have locality of reference, which results in reasonable
performance from demand paging
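The fault-handling steps above can be sketched as a toy simulation. The free-frame list, backing store, and page numbers are invented for illustration; a real kernel keeps this state in the PCB and the hardware page table:

```python
# Toy simulation of demand-paged fault handling.
# backing_store, free_frames, and the table layout are hypothetical.

backing_store = {0: "code", 1: "data", 2: "stack"}   # pages on disk
free_frames = [10, 11, 12]                           # free-frame list
page_table = {}                                      # page -> frame (valid entries)
fault_count = 0

def access(page):
    """Return the frame holding `page`, paging it in on a fault."""
    global fault_count
    if page in page_table:                  # valid bit set: already resident
        return page_table[page]
    # --- page fault (trap to the OS) ---
    fault_count += 1
    if page not in backing_store:           # step 2: invalid reference
        raise MemoryError(f"invalid access: page {page}")
    frame = free_frames.pop()               # step 3: find a free frame
    _ = backing_store[page]                 # step 4: read the page from disk
    page_table[page] = frame                # step 5: mark the entry valid
    return frame                            # step 6: restart the access

access(0); access(1); access(0)             # second access to page 0 hits
print(fault_count)                          # 2 faults: pure demand paging
```

Each page faults exactly once and is then found in memory, mirroring the "faulting as necessary until every page it needs is in memory" behavior.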
Figure. Steps in handling a page fault
• The hardware to support demand paging is the same as the hardware for paging
and swapping -
• Page table - This table has the ability to mark an entry invalid through a
valid–invalid bit or a special value of protection bits
• Secondary memory - This memory holds those pages that are not present in
main memory. The secondary memory is usually a high-speed disk. It is known
as the swap device, and the section of disk used for this purpose is known as
swap space
• A crucial requirement for demand paging is the ability to restart any instruction
after a page fault
• Because we save the state (registers, condition code, instruction counter) of the
interrupted process when the page fault occurs, we must be able to restart the
process in exactly the same place and state, except that the desired page is now in
memory and is accessible
• Consider a three-address instruction such as ADD the content of A to B, placing the
result in C. These are the steps to execute this instruction
1. Fetch and decode the instruction (ADD).
2. Fetch A.
3. Fetch B.
4. Add A and B.
5. Store the sum in C.
• If we fault when we try to store in C, we will have to get the desired page, bring it in,
correct the page table, and restart the instruction
• The restart will require fetching the instruction again, decoding it again, fetching the
two operands again, and then adding again
• The repetition is necessary only when a page fault occurs
Performance of Demand Paging
• Demand paging can significantly affect the performance of a computer system
• Let’s compute the effective access time for a demand-paged memory
• For most computer systems, the memory-access time, denoted ma, ranges from 10
to 200 nanoseconds
• As long as we have no page faults, the effective access time is equal to the
memory access time
• If a page fault occurs, we must first read the relevant page from disk and then
access the desired word
• Let p be the probability of a page fault (0 ≤ p ≤ 1)
• We would expect p to be close to zero—that is, we would expect to have only a
few page faults
• The effective access time is then
effective access time = (1 − p) × ma + p × page fault time
• To compute the effective access time, we must know how much time is needed to
service a page fault
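As a quick sketch, the formula maps directly onto a small helper. Times are in nanoseconds, and the values passed in below are examples, not fixed constants:

```python
def effective_access_time(p, ma_ns, fault_ns):
    """EAT = (1 - p) * ma + p * page-fault service time, all in ns."""
    return (1 - p) * ma_ns + p * fault_ns

# With no page faults (p = 0), the EAT equals the memory-access time:
print(effective_access_time(0.0, 200, 8_000_000))   # prints 200.0
```

As p grows, the disk service time quickly dominates, which is why p must stay very small.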
• A page fault causes the following sequence to occur -
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page
on the disk
5. Issue a read from the disk to a free frame -
a. Wait in a queue for this device until the read request is serviced
b. Wait for the device seek and/or latency time
c. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user (CPU scheduling, optional)
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user (if step 6 is executed)
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show that the desired page is now in
memory
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume
the interrupted instruction
• Three major activities
• Service the interrupt – careful coding means just several hundred instructions
needed
• Read the page – lots of time
• Restart the process – again just a small amount of time
• Page-fault rate: 0 ≤ p ≤ 1
• if p = 0, no page faults
• if p = 1, every reference is a fault
• Effective Access Time (EAT)
EAT = (1 − p) × memory access time + p × (page-fault overhead + swap page out +
swap page in)
• With an average page-fault service time of 8 milliseconds and a memory access
time of 200 nanoseconds, the effective access time in nanoseconds is
effective access time = (1 − p) × 200 + p × (8 milliseconds)
= (1 − p) × 200 + p × 8,000,000
= 200 − 200p + 8,000,000 × p
= 200 + 7,999,800 × p
• If one access out of 1,000 causes a page fault (i.e., p = 1/1,000 = 0.001), the
effective access time is 8.2 microseconds
• The computer will be slowed down by a factor of 40 because of demand paging
• The effective access time is directly proportional to the page-fault rate
• If we want performance degradation to be less than 10 percent, we need to keep
the probability of page faults at the following level -
220 > 200 + 7,999,800 × p,
20 > 7,999,800 × p,
p < 0.0000025
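The arithmetic on this slide can be checked directly; this sketch just replays the slide's own numbers:

```python
ma = 200                      # memory-access time, ns
fault = 8_000_000             # 8 ms page-fault service time, in ns

p = 1 / 1000                  # one access in 1,000 faults
eat = (1 - p) * ma + p * fault
print(eat)                    # about 8199.8 ns, i.e. roughly 8.2 microseconds
print(eat / ma)               # slowdown relative to a plain memory access

# For less than 10% degradation we need EAT < 220 ns:
p_max = (220 - ma) / (fault - ma)
print(p_max)                  # about 2.5e-06, i.e. p < 0.0000025
```

The slowdown of roughly a factor of 40 and the p < 0.0000025 bound both fall out of the same two constants.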
• Demand-paging optimizations
• Handling and overall use of swap space -
• Disk I/O to swap space is generally faster than I/O to the file system
• It is faster because swap space is allocated in much larger blocks, and file
lookups and indirect allocation methods are not used
References
1. Silberschatz, Galvin, and Gagne, “Operating System Concepts,” Ninth Edition,
John Wiley and Sons, 2012.
Thank you
