VIRTUAL MEMORY MANAGEMENT
 Virtual memory is a technique that allows for the execution of partially loaded processes.
 When a program is executed, not all of its code and data need to be loaded into physical
memory (RAM) at once.
 Instead, portions of the program are loaded into memory as needed, and other parts may
be temporarily stored on disk until they are required.
 Advantages:
 Programs are not restricted by the available physical memory. They can use a
larger virtual memory space, allowing more extensive programs to run.
 Since each program uses less physical memory, multiple programs can run
simultaneously, enhancing throughput and CPU utilization.
 Virtual memory is the separation of the user's logical memory from physical memory.
This separation enables a large virtual memory to be available even with limited
physical memory, providing more flexibility and efficiency in memory usage.
 Logical memory separation also allows for files and memory to be shared among
different processes through page sharing, promoting better resource utilization.
operating system notes about virtual memory 4.pptx
 Virtual memory is implemented using Demand Paging.
 Virtual address space: every process has its own virtual address space, which can grow
as the stack or heap grows in size.
DEMAND PAGING
 A demand-paging system is similar to a paging system with swapping, where processes
reside in secondary memory (usually a disk).
 When we want to execute a process, we swap it into main memory.
 Rather than swapping the entire process into memory, however, we use a lazy swapper.
 A lazy swapper never swaps a page into memory unless that page will be needed.
 Since we are now viewing a process as a sequence of pages, rather than as one large
contiguous address space, use of the term swapper is technically incorrect.
 A swapper manipulates entire processes, whereas a pager is concerned with the individual
pages of a process.
 We thus use pager, rather than swapper, in connection with demand paging.
Basic concept
 Instead of swapping the whole process, the pager swaps only the necessary pages into
memory.
 Thus it avoids reading unused pages and decreases the swap time and amount of
physical memory needed.
 The valid–invalid bit scheme can be used to distinguish between the pages that are on
the disk and those that are in memory.
 With each page table entry a valid–invalid bit is associated
 (v ⇒ in-memory, i⇒not-in-memory)
 During address translation, if valid–invalid bit in page table entry is i ⇒ page fault.
 If the bit is valid, then the page is both legal and in memory.
 If the bit is invalid, then the page either is not valid (not part of the logical address
space) or is valid but currently on the disk.
Page Fault
 If a page is needed that was not originally loaded up, then a page fault trap is generated.
 Steps in Handling a Page Fault
1. The memory address requested is first checked, to make sure it was a valid memory
request.
2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is not
present in memory, it must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk.
5. After the page is loaded to memory, the process's page table is updated with the new frame
number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning.
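The steps above can be sketched as a small simulation. Everything here (the `backing_store` dict standing in for the disk, the `free_frames` list, the page-table layout) is a hypothetical illustration, not a real OS interface:

```python
# Illustrative simulation of the page-fault handling steps above.
backing_store = {0: "code", 1: "data", 2: "stack"}  # pages on "disk"
free_frames = [10, 11, 12]                          # free physical frames
page_table = {p: {"frame": None, "valid": False} for p in backing_store}

def access(page):
    """Return the frame for `page`, handling a page fault if needed."""
    if page not in page_table:                # steps 1-2: invalid reference
        raise MemoryError("invalid page reference: %d" % page)
    entry = page_table[page]
    if not entry["valid"]:                    # page fault: not in memory
        frame = free_frames.pop(0)            # step 3: take a free frame
        _ = backing_store[page]               # step 4: "disk read" of the page
        entry["frame"], entry["valid"] = frame, True  # step 5: update table
        # step 6: the faulting instruction would now be restarted
    return entry["frame"]

print(access(1))   # first access faults and loads page 1 into frame 10
print(access(1))   # second access hits: same frame, no fault
```

Step 2's "terminate the process" branch is modeled by the `MemoryError`; a real handler would also block the faulting process while the disk read is in flight.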
COPY-ON-WRITE
 The technique initially allows the parent and the child to share the same pages.
 These pages are marked as copy-on-write pages; if either process writes to a shared
page, a copy of the shared page is created.
 E.g., if a child process tries to modify a page containing portions of the stack, the OS
recognizes it as a copy-on-write page, creates a copy of this page, and maps it into
the address space of the child process.
 So the child process modifies its copied page and not the page belonging to the parent.
The new pages are obtained from the pool of free pages.
 The previous contents of such pages are erased before they are handed out. This
is called zero-fill-on-demand.
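A toy model of copy-on-write, assuming a single shared stack page; the frame numbers, the `shared` set, and the zero-fill step are illustrative only:

```python
# Toy copy-on-write model: parent and child share frame 0 until one writes.
frames = {0: b"stack-data"}        # physical frames, by frame number
parent = {"stack": 0}              # page -> frame mapping for the parent
child = dict(parent)               # after fork(), the child shares the frame
shared = {0}                       # frames currently marked copy-on-write
next_frame = 1                     # next frame in the free-page pool

def write(mapping, page, data):
    """Write `data` to `page`; copy the frame first if it is shared."""
    global next_frame
    frame = mapping[page]
    if frame in shared:
        new, next_frame = next_frame, next_frame + 1
        frames[new] = bytes(len(frames[frame]))   # zero-fill-on-demand
        mapping[page] = new                       # remap only this process
        frame = new
    frames[frame] = data

write(child, "stack", b"child-data")
print(parent["stack"], child["stack"])  # parent keeps frame 0; child got a copy
```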
PAGE REPLACEMENT
 The page replacement policy deals with the selection of a page in memory to be
replaced by a new page that must be brought in. Suppose a page fault occurs while a
user process is executing.
 The hardware traps to the operating system, which checks its internal table to see
that this is a page fault and not an illegal memory access.
 The operating system determines where the desired page resides on the disk,
but then finds that there are no free frames on the free-frame list.
 When all the frames are in use and a new page must be brought in to
satisfy the page fault, the replacement policy is concerned with selecting a page
currently in memory to be replaced.
 The page to be removed should be the page least likely to be referenced in
the future.
Working of Page Replacement Algorithm
1. Find the location of the desired page on the disk.
2. Find a free frame: if there is a free frame, use it. Otherwise, use a
replacement algorithm to select a victim.
 Write the victim page to the disk.
 Change the page and frame tables accordingly.
3. Read the desired page into the free frame; change the page and frame tables.
4. Restart the user process.
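The steps above can be sketched as a driver loop; the victim-selection policy is left abstract (FIFO here, purely as a placeholder):

```python
from collections import deque

def run(refs, nframes):
    """Simulate the page-replacement steps; return (faults, disk_writes)."""
    resident, fifo, dirty = set(), deque(), set()
    faults = writes = 0
    for page, is_write in refs:
        if page not in resident:                  # step 1: locate page on disk
            faults += 1
            if len(resident) == nframes:          # step 2: no free frame
                victim = fifo.popleft()           # placeholder FIFO policy
                if victim in dirty:               # write victim out if dirty
                    writes += 1
                    dirty.discard(victim)
                resident.discard(victim)          # update page/frame tables
            resident.add(page)                    # step 3: read page in
            fifo.append(page)
        if is_write:                              # step 4: process continues
            dirty.add(page)
    return faults, writes

print(run([(1, False), (2, True), (3, False), (4, False)], 2))
```

Only the dirty page 2 costs a disk write on eviction, which is the point of the modify bit discussed next.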
Victim Page
 The page that is swapped out of physical memory is called the victim page.
 Each page or frame may have a dirty (modify) bit associated with it in the
hardware.
 The modify bit for a page is set by the hardware whenever any word or
byte in the page is written into, indicating that the page has been modified.
 When we select a page for replacement, we check its modify bit. If the bit is
set, the page has been modified since it was read from the disk.
 If the bit is not set, the page has not been modified since it was read into
memory.
 Therefore, if the copy of the page on disk has not been overwritten, we can avoid writing
the memory page back to the disk: it is already there. Some pages (e.g., pages of
binary code) cannot be modified at all.
Modify bit/ Dirty bit :
 Each page/frame has a modify bit associated with it.
 If the page has not been modified (e.g., it is read-only), one can discard
such a page without writing it onto the disk. The modify bit of such a
page is 0.
 Modify bit is set to 1, if the page has been modified. Such
pages must be written to the disk.
 Modify bit is used to reduce overhead of page transfers – only
modified pages are written to disk
PAGE REPLACEMENT ALGORITHMS
 Evaluate algorithm by running it on a particular string of memory references
(reference string) and computing the number of page faults on that string
 In all our examples, the reference string is
 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
FIFO replacement algorithm
 This is the simplest page replacement algorithm.
 A FIFO replacement algorithm associates with each page the time when that page was
brought into memory.
 When a Page is to be replaced the oldest one is selected.
 We replace the page at the head of the queue. When a page is brought into memory,
we insert it at the tail of the queue.
 In the following example, a reference string is given and there are 3 free frames. There
are 20 page requests, which result in 15 page faults.
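A minimal FIFO fault counter makes the example concrete. The notes do not reproduce the reference string itself, so the classic 20-request textbook string is assumed below; with 3 frames it does yield 15 faults:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                          # hit
        faults += 1
        if len(queue) == nframes:             # no free frame: evict oldest
            resident.discard(queue.popleft())
        queue.append(page)                    # insert at the tail
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
```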
Belady’s Anomaly
 For some page replacement algorithms, the page-fault rate may increase as the number of allocated
frames increases. The FIFO replacement algorithm may face this problem.
 more frames ⇒ more page faults
 To illustrate the problems that are possible with a FIFO page-replacement algorithm, we
consider the following reference string:
 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 Notice that the number of faults for four frames (ten) is greater than the number of faults for
three frames (nine)! This most unexpected result is known as Belady's anomaly:
 for some page-replacement algorithms, the page-fault rate may increase as the number of
allocated frames increases.
 We would expect that giving more memory to a process would improve its performance.
 In some early research, investigators noticed that this assumption was not always true.
Belady's anomaly was discovered as a result.
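The anomaly is easy to reproduce with a few lines of FIFO simulation on the reference string above:

```python
def fifo_faults(refs, nframes):
    """Count FIFO page faults; `queue` holds resident pages, oldest first."""
    queue, faults = [], 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == nframes:
                queue.pop(0)          # evict the page at the head of the queue
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```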
Optimal Algorithm
 The optimal page replacement algorithm never suffers from Belady's anomaly.
 The optimal page replacement algorithm has the lowest page-fault rate of all algorithms.
 An optimal page replacement algorithm exists and has been called OPT.
 The rule is simple: “Replace the page that will not be used for the longest period of time.”
 Example: consider the reference string shown in the accompanying figure.
 The first three references cause faults that fill the three empty frames.
 The reference to page 2 replaces page 7, because 7 will not be used until reference 18,
while page 0 will be used at 5 and page 1 at 14.
 With only 9 page faults, optimal replacement is much better than FIFO, which had 15
faults. This algorithm is difficult to implement, because it requires future knowledge of
the reference string.
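OPT can be simulated offline, since the whole reference string is known in advance. Using the same assumed textbook string as in the FIFO example:

```python
def opt_faults(refs, nframes):
    """Optimal replacement: evict the page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            rest = refs[i + 1:]
            return rest.index(p) if p in rest else float("inf")
        frames.remove(max(frames, key=next_use))  # farthest future use
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # 9, versus 15 for FIFO
```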
Least Recently Used (LRU) Algorithm
 The LRU (Least Recently Used) algorithm predicts that the page that has not
been used for the longest time is the one that will not be used again in the near
future.
 Some view LRU as analogous to OPT, but here we look backwards in time
instead of forwards.
 LRU replacement associates with each page the time of that page's last use.
 When a page must be replaced, LRU chooses the page that has not been used for the
longest period of time.
 We can think of this strategy as the optimal page-replacement algorithm looking
backward in time, rather than forward.
 The Least Recently Used (LRU) page replacement algorithm is a commonly used
method in operating systems to manage memory efficiently. It replaces the least
recently used page when new pages need to be brought into memory.
 Implementing LRU efficiently can indeed require hardware assistance, especially in
systems where the number of pages or frames is large.
 Two common implementations of the LRU algorithm are based on counters and
stacks:
 Counters: In this implementation, each page/frame has an associated counter that
keeps track of when it was last accessed. Every time a page is accessed, its counter is
updated to reflect the current time. When a new page needs to be brought into memory,
the algorithm selects the page with the smallest counter value, indicating that it was
accessed the longest time ago, and replaces it with the new page.
 Stacks: In this implementation, a stack data structure is used to maintain the order of
pages based on their last access time. Whenever a page is accessed, it is moved to the
top of the stack. When a new page needs to be brought into memory, the algorithm
selects the page at the bottom of the stack, which represents the page that was accessed
least recently, and replaces it with the new page.
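The stack implementation maps naturally onto Python's `OrderedDict` (a sketch for clarity; real kernels approximate this in hardware rather than updating a stack on every access):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU via an ordered dict as the 'stack': most recent entry at the end."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)       # accessed: move to top of stack
            continue
        faults += 1
        if len(stack) == nframes:
            stack.popitem(last=False)     # evict bottom = least recently used
        stack[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12: between OPT (9) and FIFO (15)
```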
LRU-Approximation Page Replacement
 Many systems offer some degree of hardware support, enough to approximate LRU.
 In particular, many systems provide a reference bit for every entry in a page table, which is
set anytime that page is accessed. Initially all bits are set to zero, and they can also all be
cleared at any time. One bit distinguishes pages that have been accessed since the last clear
from those that have not been accessed.
 Additional-Reference-Bits Algorithm
 An 8-bit history byte is stored for each page in a table in memory, alongside its
reference bit.
 At regular intervals (say, every 100 milliseconds), a timer interrupt transfers control to
the operating system. The operating system shifts the reference bit for each page into the
high-order bit of its 8-bit byte, shifting the other bits right by 1 bit and discarding the
low-order bit.
 These 8-bit shift registers contain the history of page use for the last eight time periods.
 If the shift register contains 00000000, then the page has not been used for eight time
periods.
 A page with a history register value of 11000100 has been used more recently than one
with a value of 01110111.
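The shifting step can be sketched with two hypothetical pages and three timer ticks standing in for the 100 ms interrupts:

```python
# Aging sketch: each page keeps an 8-bit history byte; on every timer tick
# the current reference bit is shifted into the high-order bit.
history = {"A": 0, "B": 0}

def timer_tick(referenced):
    """Shift each page's reference bit into its 8-bit history register."""
    for page in history:
        bit = 0x80 if page in referenced else 0
        history[page] = bit | (history[page] >> 1)  # discard low-order bit

timer_tick({"A"})        # only A referenced this period
timer_tick({"A", "B"})   # both referenced
timer_tick({"B"})        # only B
print(format(history["A"], "08b"))  # 01100000: used 2 and 3 ticks ago
print(format(history["B"], "08b"))  # 11000000: used in the last two ticks
```

The page whose history byte is smallest as an unsigned number is the LRU-approximation victim; here A (01100000) would be replaced before B (11000000).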
Second-Chance (Clock) Page Replacement Algorithm
 The second chance algorithm is a FIFO replacement algorithm, except the reference
bit is used to give pages a second chance at staying in the page table.
 When a page must be replaced, the page table is scanned in a FIFO (circular
queue) manner.
 If a page is found with its reference bit as ‘0’, then that page is selected as the
next victim.
 If the reference bit value is ‘1’, then the page is given a second chance and its
reference bit is cleared (set to ‘0’).
 Thus, a page that is given a second chance will not be replaced until all other pages
have been replaced (or given second chances). In addition, if a page is used often
enough, its reference bit will be set again before it comes up for replacement.
 This algorithm is also known as the clock algorithm.
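The victim-selection scan can be sketched as follows; the `(page, reference_bit)` tuples stand in for page-table entries on the circular queue:

```python
def second_chance_victim(pages):
    """Scan a circular list of (page, ref_bit) entries: return the first page
    whose reference bit is 0, clearing bits of pages given a second chance."""
    i = 0
    while True:
        page, ref = pages[i]
        if ref == 0:
            return page                  # victim found
        pages[i] = (page, 0)             # second chance: clear reference bit
        i = (i + 1) % len(pages)         # advance the clock hand

frames = [("A", 1), ("B", 1), ("C", 0), ("D", 1)]
print(second_chance_victim(frames))  # C: first page whose bit is already 0
print(frames)                        # A and B had their bits cleared
```

If every bit is 1, the first pass clears them all and the scan degenerates to plain FIFO on the second pass.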
Enhanced Second-Chance Algorithm
 The enhanced second chance algorithm looks at the reference bit and the modify bit
(dirty bit) as an ordered pair, and classifies pages into one of four classes:
1. ( 0, 0 ) - Neither recently used nor modified.
2. ( 0, 1 ) - Not recently used, but modified.
3. ( 1, 0 ) - Recently used, but clean.
4. ( 1, 1 ) - Recently used and modified.
 This algorithm searches the page table in a circular fashion, looking for the first page it
can find in the lowest-numbered class, i.e., it first makes a pass looking for a (0, 0)
page, and if it cannot find one, it makes another pass looking for a (0, 1) page, etc.
 The main difference between this algorithm and the previous one is the preference for
replacing clean pages if possible.
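The four-class scan can be sketched as below (simplified: a real implementation scans circularly from the clock hand and may clear reference bits between passes):

```python
def enhanced_victim(pages):
    """pages: list of (name, ref_bit, dirty_bit) entries. Return the first
    page found in the lowest-numbered class (0,0) < (0,1) < (1,0) < (1,1)."""
    for target in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for name, ref, dirty in pages:
            if (ref, dirty) == target:
                return name

frames = [("A", 1, 1), ("B", 0, 1), ("C", 1, 0)]
print(enhanced_victim(frames))  # B: not recently used, even though dirty
```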
Count Based Page Replacement
 There are many other algorithms that can be used for page replacement; for example, we
can keep a counter of the number of references that have been made to each page.
a) LFU (least frequently used):
 This algorithm replaces the page with the smallest count. The reasoning behind this
selection is that an actively used page should have a large reference count.
 This algorithm suffers from the situation in which a page is used heavily during
the initial phase of a process but never used again. Since it was used heavily, it
has a large count and remains in memory even though it is no longer needed.
b) MFU (most frequently used):
 Based on the argument that the page with the smallest count was probably just
brought in and has yet to be used.
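A small LFU sketch on a made-up reference string illustrates the drawback described above: page 1 is touched heavily early on and then survives on its stale count:

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """LFU: on a fault with full frames, evict the resident page with the
    smallest reference count (ties broken arbitrarily here)."""
    counts, resident, faults = Counter(), set(), 0
    for page in refs:
        counts[page] += 1
        if page in resident:
            continue
        faults += 1
        if len(resident) == nframes:
            victim = min(resident, key=lambda p: counts[p])
            resident.discard(victim)
        resident.add(page)
    return faults

# Page 1 is used heavily at the start, then never again -- yet on the last
# fault it is page 3 (count 1), not page 1 (count 3), that gets evicted.
print(lfu_faults([1, 1, 1, 2, 3, 2, 4], 3))  # 4
```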
ALLOCATION OF FRAMES
 1. Equal allocation: if there are m frames available and n processes to share
them, each process gets m / n frames, and the leftovers are kept in a free-
frame buffer pool.
 2. Proportional allocation: allocate the frames proportionally, depending
on the size of the process. If the size of process Pi is Si, and S is the sum of
the sizes of all processes in the system, then the allocation for process Pi is
ai = m × Si / S, where m is the number of free frames available in the system.
 In local replacement, each process is allocated a fixed number of physical frames.
When a page fault occurs, the operating system replaces a page only among the pages
that belong to the process causing the fault.
 In global replacement, any page in the system can be selected as the victim for
replacement, regardless of which process owns the page.
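The proportional formula above can be sketched as follows; the two process sizes (10 and 127 pages) and m = 62 frames are assumed example numbers:

```python
def proportional_allocation(sizes, m):
    """a_i = m * S_i / S, rounded down; leftover frames go to a free pool."""
    total = sum(sizes.values())
    alloc = {p: m * s // total for p, s in sizes.items()}
    leftover = m - sum(alloc.values())
    return alloc, leftover

sizes = {"P1": 10, "P2": 127}   # hypothetical process sizes, in pages
alloc, free_pool = proportional_allocation(sizes, 62)
print(alloc, free_pool)          # {'P1': 4, 'P2': 57} 1
```

Flooring each share means a frame or two may be left over; keeping them in a free-frame pool mirrors the equal-allocation scheme.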
THRASHING
 If the number of frames allocated to a low-priority process falls below the minimum number
required by the computer architecture, then we suspend that process's execution.
 A process is thrashing if it is spending more time in paging than executing.
 If a process does not have enough frames, it will quickly page-fault. To handle the fault
it must replace some page; but since all its pages are in active use, it replaces a page
that will be needed again right away. Consequently, it quickly faults again and again.
 The process continues to fault, replacing pages that it must then fault back in. This
high paging activity is called thrashing: the phenomenon of excessively moving pages back
and forth between memory and secondary storage.
Cause of Thrashing
 Thrashing results in severe performance problems.
 The operating system monitors CPU utilization. If utilization is low, the
degree of multiprogramming is increased by introducing new processes to the system.
 A global page-replacement algorithm replaces pages with no regard to the
process to which they belong, so a faulting process takes frames from other
processes, which then begin to fault in turn. If the degree of multiprogramming is
increased further, thrashing sets in and CPU utilization drops sharply.
 At this point, to increase CPU utilization and stop thrashing, we must
decrease the degree of multiprogramming.
 We can limit the effects of thrashing by using a local replacement algorithm.
To prevent thrashing, we must provide a process with as many frames as it needs.

More Related Content

PPT
Chapter 9 - Virtual Memory
PPT
Operating System
PPTX
Demand paging
PPT
PPTX
Page replacement
PPTX
virtual memory Operating system
PPTX
Virtual memory management in Operating System
PPT
Ch10 OS
 
Chapter 9 - Virtual Memory
Operating System
Demand paging
Page replacement
virtual memory Operating system
Virtual memory management in Operating System
Ch10 OS
 

Similar to operating system notes about virtual memory 4.pptx (20)

PPT
PPTX
Virtual Memory in Operating System
PDF
Virtual Memory.pdf
PPTX
Virtual Memory Management
PDF
Virtual Memory Management Part - II.pdf
PPT
Virtual memory
PDF
Distributed Operating System_3
PPT
Virtual Memory sjkdhikejv vsdkjnksnv vkjhfvk
PPT
Page Replacement
PDF
381 ccs chapter7_updated(1)
PPTX
Operating Systems -Module-5 Presentation
PPTX
operating system virtual memory and logical memory
PPT
Explain about replacement algorithms from these slides
PPT
Replacement.ppt operating system in BCA cu
PDF
Sucet os module_4_notes
DOCX
Virtual memory
PPTX
3_page_replacement_algorithms computer system and architecture
PPT
Page replacement
Virtual Memory in Operating System
Virtual Memory.pdf
Virtual Memory Management
Virtual Memory Management Part - II.pdf
Virtual memory
Distributed Operating System_3
Virtual Memory sjkdhikejv vsdkjnksnv vkjhfvk
Page Replacement
381 ccs chapter7_updated(1)
Operating Systems -Module-5 Presentation
operating system virtual memory and logical memory
Explain about replacement algorithms from these slides
Replacement.ppt operating system in BCA cu
Sucet os module_4_notes
Virtual memory
3_page_replacement_algorithms computer system and architecture
Page replacement
Ad

More from panditestmail (7)

PPTX
Introduction to Agile Technology part 1.pptx
PPTX
Overview AI and Cloud Security part 1.pptx
PPTX
Operating system concepts overview .pptx
PPTX
Operating system fundamentals - OS .pptx
PPTX
operating system notes for file managment.pptx
PPTX
operating system notes about deadlock 3.pptx
PPTX
Blue and Yellow Illustrative Benefits of Learning English Presentation.pptx
Introduction to Agile Technology part 1.pptx
Overview AI and Cloud Security part 1.pptx
Operating system concepts overview .pptx
Operating system fundamentals - OS .pptx
operating system notes for file managment.pptx
operating system notes about deadlock 3.pptx
Blue and Yellow Illustrative Benefits of Learning English Presentation.pptx
Ad

Recently uploaded (20)

DOCX
search engine optimization ppt fir known well about this
PDF
Getting started with AI Agents and Multi-Agent Systems
PDF
A contest of sentiment analysis: k-nearest neighbor versus neural network
PDF
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
PPTX
Modernising the Digital Integration Hub
PDF
Getting Started with Data Integration: FME Form 101
PDF
Univ-Connecticut-ChatGPT-Presentaion.pdf
PDF
WOOl fibre morphology and structure.pdf for textiles
PDF
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
PDF
A Late Bloomer's Guide to GenAI: Ethics, Bias, and Effective Prompting - Boha...
PDF
DP Operators-handbook-extract for the Mautical Institute
PPTX
Benefits of Physical activity for teenagers.pptx
PDF
Hybrid model detection and classification of lung cancer
PPT
What is a Computer? Input Devices /output devices
PDF
Unlock new opportunities with location data.pdf
PDF
Hybrid horned lizard optimization algorithm-aquila optimizer for DC motor
PPTX
O2C Customer Invoices to Receipt V15A.pptx
PPTX
The various Industrial Revolutions .pptx
PPT
Module 1.ppt Iot fundamentals and Architecture
PDF
August Patch Tuesday
search engine optimization ppt fir known well about this
Getting started with AI Agents and Multi-Agent Systems
A contest of sentiment analysis: k-nearest neighbor versus neural network
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
Modernising the Digital Integration Hub
Getting Started with Data Integration: FME Form 101
Univ-Connecticut-ChatGPT-Presentaion.pdf
WOOl fibre morphology and structure.pdf for textiles
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
A Late Bloomer's Guide to GenAI: Ethics, Bias, and Effective Prompting - Boha...
DP Operators-handbook-extract for the Mautical Institute
Benefits of Physical activity for teenagers.pptx
Hybrid model detection and classification of lung cancer
What is a Computer? Input Devices /output devices
Unlock new opportunities with location data.pdf
Hybrid horned lizard optimization algorithm-aquila optimizer for DC motor
O2C Customer Invoices to Receipt V15A.pptx
The various Industrial Revolutions .pptx
Module 1.ppt Iot fundamentals and Architecture
August Patch Tuesday

operating system notes about virtual memory 4.pptx

  • 2. VIRTUAL MEMORY MANAGEMENT  Virtual memory is a technique that allows for the execution of partially loaded process.  When a program is executed, not all of its code and data need to be loaded into physical memory (RAM) at once.  Instead, portions of the program are loaded into memory as needed, and other parts may be temporarily stored on disk until they are required.  Advantages:  Programs are not restricted by the available physical memory. They can use a larger virtual memory space, allowing more extensive programs to run.  Since each program uses less physical memory, multiple programs can run simultaneously, enhancing throughput and CPU utilization..  Virtual memory is the separation of users logical memory from physical memory. This separation enables a large virtual memory to be available even with limited physical memory, providing more flexibility and efficiency in memory usage.  Logical memory separation also allows for files and memory to be shared among different processes through page sharing, promoting better resource utilization.
  • 4.  Virtual memory is implemented using Demand Paging.  Virtual address space: Every process has a virtual address space i.e used as the stack or heap grows in size.
  • 5. DEMAND PAGING  A demand-paging system is similar to a paging system with swapping where processes reside in secondary memory (usually a disk).  When we want to execute a process, we swap it into main memory.  Rather than swapping the entire process into memory, however, we use a lazy swapper  A lazy swapper never swaps a page into memory unless that page will be needed.  Since we are now viewing a process as a sequence of pages, rather than as one large contiguous address space, use of the term swapper is technically incorrect.  A swapper manipulates entire processes, whereas a is concerned with the individual pages of a process.  We thus use pager, rather than swapper, in connection with demand paging.
  • 7. Basic concept  Instead of swapping the whole process the pager swaps only the necessary pages in to memory.  Thus it avoids reading unused pages and decreases the swap time and amount of physical memory needed.  The valid-invalid bit scheme can be used to distinguish between the pages that are on the disk and that are in memory.  With each page table entry a valid–invalid bit is associated  (v ⇒ in-memory, i⇒not-in-memory)  During address translation, if valid–invalid bit in page table entry is i ⇒ page fault.  If the bit is valid then the page is both legal and is in memory.  If the bit is invalid then either page is not valid or is valid but is currently on the disk.
  • 9. Page Fault  If a page is needed that was not originally loaded up, then a page fault trap is generated.  Steps in Handling a Page Fault 1. The memory address requested is first checked, to make sure it was a valid memory request. 2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is not present in memory, it must be paged in. 3. A free frame is located, possibly from a free-frame list. 4. A disk operation is scheduled to bring in the necessary page from disk. 5. After the page is loaded to memory, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference. 6. The instruction that caused the page fault must now be restarted from the beginning.
  • 11. COPY-ON-WRITE  Technique initially allows the parent and the child to share the same pages.  These pages are marked as copy on- write pages i.e., if either process writes to a shared page, a copy of shared page is created.  Eg:-If a child process try to modify a page containing portions of the stack; the OS recognizes them as a copy-on-write page and create a copy of this page and maps it on to the address space of the child process.  So the child process will modify its copied page and not the page belonging to parent. The new pages are obtained from the pool of free pages.  The previous contents of pages are erased before getting them into main memory. This is called Zero – on fill demand.
  • 14. PAGE REPLACEMENT  Page replacement policy deals with the solution of pages in memory to be replaced by a new page that must be brought in. When a user process is executing a page fault occurs.  The hardware traps to the operating system, which checks the internal table to see that this is a page fault and not an illegal memory access.  The operating system determines where the derived page is residing on the disk, and this finds that there are no free frames on the list of free frames.  When all the frames are in main memory, it is necessary to bring a new page to satisfy the page fault, replacement policy is concerned with selecting a page currently in memory to be replaced.  The page i,e to be removed should be the page i,e least likely to be referenced in future.
  • 16. Working of Page Replacement Algorithm 1. Find the location of derived page on the disk. 2. Find a free frame x If there is a free frame, use it. Otherwise, use a replacement algorithm to select a victim.  Write the victim page to the disk.  Change the page and frame tables accordingly. 3. Read the desired page into the free frame; change the page and frame tables. 4. Restart the user process.
  • 17. Victim Page  The page that is supported out of physical memory is called victim page.  Each page or frame may have a dirty (modify) bit associated with the hardware.  The modify bit for a page is set by the hardware whenever any word or byte in the page is written into, indicating that the page has been modified.  When we select the page for replacement, we check its modify bit. If the bit is set, then the page is modified since it was read from the disk.  If the bit was not set, the page has not been modified since it was read into memory.  Therefore, if the copy of the page has not been modified we can avoid writing the memory page to the disk, if it is already there. Sum pages cannot be modified.
  • 18. Modify bit/ Dirty bit :  Each page/frame has a modify bit associated with it.  If the page is not modified (read-only) then one can discard such page without writing it onto the disk. Modify bit of such page is set to 0.  Modify bit is set to 1, if the page has been modified. Such pages must be written to the disk.  Modify bit is used to reduce overhead of page transfers – only modified pages are written to disk
  • 19. PAGE REPLACEMENT ALGORITHMS  Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string  In all our examples, the reference string is  1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 FIFO replacement algorithm  This is the simplest page replacement algorithm.  A FIFO replacement algorithm associates each page the time when that page was brought into memory.  When a Page is to be replaced the oldest one is selected.  We replace the queue at the head of the queue. When a page is brought into memory, we insert it at the tail of the queue.  In the following example, a reference string is given and there are 3 free frames. There are 20 page requests, which results in 15 page faults
  • 21. Belady’s Anomaly  For some page replacement algorithm, the page fault may increase as the number of allocated frames increases. FIFO replacement algorithm may face this problem.  more frames more page faults ⇒  To illustrate the problems that are possible with a FIFO page-replacement algorithm, we consider the following reference string:  1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5  Notice that the number of faults for four frames (ten) is greater than the number of faults for three frames (nine)! This most unexpected result is known as Beladys Anomaly .  for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.  We would expect that giving more memory to a process would improve its performance.  In some early research, investigators noticed that this assumption was not always true. Belady's anomaly was discovered as a result.
  • 22. Optimal Algorithm  Optimal page replacement algorithm is mainly to solve the problem of Belady’s Anomaly.  Optimal page replacement algorithm has the lowest page fault rate of all algorithms.  An optimal page replacement algorithm exists and has been called OPT.  The working is simple “Replace the page that will not be used for the longest period of time”  Example: consider the following reference string  The first three references cause faults that fill the three empty frames.  The references to page 2 replaces page 7, because 7 will not be used until reference 18. x The page 0 will be used at 5 and page 1 at 14.  With only 9 page faults, optimal replacement is much better than a FIFO, which had 15 faults. This algorithm is difficult to implement because it requires future knowledge of reference strings.  Replace page that will not be used for longest period of time
  • 24. Least Recently Used (LRU) Algorithm  The LRU (Least Recently Used) algorithm, predicts that the page that has not been used in the longest time is the one that will not be used again in the near future.  Some view LRU as analogous to OPT, but here we look backwards in time instead of forwards.  LRU replacement associates with each page the time of that page's last use.  When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.  We can think of this strategy as the optimal page-replacement algorithm looking backward in time, rather than forward.
  • 26.  The Least Recently Used (LRU) page replacement algorithm is a commonly used method in operating systems to manage memory efficiently. It replaces the least recently used page when new pages need to be brought into memory.  Implementing LRU efficiently can indeed require hardware assistance, especially in systems where the number of pages or frames is large.  Two common implementations of the LRU algorithm are based on counters and stacks:  Counters: In this implementation, each page/frame has an associated counter that keeps track of when it was last accessed. Every time a page is accessed, its counter is updated to reflect the current time. When a new page needs to be brought into memory, the algorithm selects the page with the smallest counter value, indicating that it was accessed the longest time ago, and replaces it with the new page.  Stacks: In this implementation, a stack data structure is used to maintain the order of pages based on their last access time. Whenever a page is accessed, it is moved to the top of the stack. When a new page needs to be brought into memory, the algorithm selects the page at the bottom of the stack, which represents the page that was accessed least recently, and replaces it with the new page.
LRU-Approximation Page Replacement
 Many systems offer some degree of hardware support, enough to approximate LRU.
 In particular, many systems provide a reference bit for every entry in the page table, which is set any time that page is accessed. Initially all bits are zero, and they can all be cleared at any time. One bit distinguishes pages that have been accessed since the last clear from those that have not.
 Additional-Reference-Bits Algorithm
 An 8-bit byte (shift register) is stored for each page in a table in memory.
 At regular intervals (say, every 100 milliseconds), a timer interrupt transfers control to the operating system. The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting the other bits right by one and discarding the low-order bit.
 These 8-bit shift registers contain the history of page use for the last eight time periods.
 If the shift register contains 00000000, the page has not been used for eight time periods.
 A page with a history register value of 11000100 has been used more recently than one with a value of 01110111.
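The shift-register update performed at each timer interrupt can be sketched as follows (a hypothetical `age_pages` helper; the data-structure choices are ours):

```python
def age_pages(history, referenced):
    """Shift each page's 8-bit history register at a timer interrupt.

    history:    dict mapping page -> 8-bit use history (int)
    referenced: set of pages whose reference bit was set since
                the last interrupt
    """
    for page in history:
        ref_bit = 0x80 if page in referenced else 0x00
        # The current reference bit enters as the high-order bit;
        # the low-order bit of the old history is discarded.
        history[page] = ref_bit | (history[page] >> 1)
    return history
```

Interpreting the registers as unsigned integers, the page with the smallest value is the LRU approximation: after aging with only page B referenced, A's register 01110111 becomes 00111011 and B's 10001000 becomes 11000100, so B compares as more recently used, matching the slide's example.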
Second-Chance (Clock) Page Replacement Algorithm
 The second-chance algorithm is a FIFO replacement algorithm, except that the reference bit is used to give pages a second chance at staying in memory.
 When a page must be replaced, the page table is scanned in FIFO (circular queue) order.
 If a page is found with its reference bit set to 0, that page is selected as the next victim.
 If the reference bit is 1, the page is given a second chance and its reference bit is cleared (set to 0).
 Thus, a page that is given a second chance will not be replaced until all other pages have been replaced (or given second chances). In addition, if a page is used often, it keeps setting its reference bit.
 Because the scan can be pictured as a hand sweeping around a circular queue, this algorithm is also known as the clock algorithm.
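One sweep of the clock hand can be sketched as below (an illustrative `second_chance_victim` helper; in a real kernel the reference bits live in page-table entries, not a Python list):

```python
def second_chance_victim(ref_bits, hand):
    """Advance the clock hand until a frame with reference bit 0 is found.

    ref_bits: list of reference bits, one per frame (circular queue)
    hand:     current clock-hand position
    Returns (victim_frame, next_hand_position).
    """
    n = len(ref_bits)
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % n   # victim found; hand moves past it
        ref_bits[hand] = 0                # second chance: clear the bit
        hand = (hand + 1) % n
```

If every frame's bit is 1, the hand clears them all and comes back around, which degenerates into plain FIFO, exactly as the description implies.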
Enhanced Second-Chance Algorithm
 The enhanced second-chance algorithm looks at the reference bit and the modify bit (dirty bit) as an ordered pair, and classifies pages into one of four classes:
1. (0, 0) - neither recently used nor modified.
2. (0, 1) - not recently used, but modified.
3. (1, 0) - recently used, but clean.
4. (1, 1) - recently used and modified.
 The algorithm searches the page table in circular fashion, looking for the first page in the lowest-numbered nonempty class: it first makes a pass looking for a (0, 0) page, and if it cannot find one, it makes another pass looking for a (0, 1) page, and so on.
 The main difference between this algorithm and the previous one is the preference for replacing clean pages when possible, since a modified page must be written back to disk before its frame can be reused.
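The class ordering can be sketched as a simplified pass (a hypothetical `enhanced_victim` helper; the real algorithm scans circularly from the clock hand and may clear reference bits between passes, details this sketch omits):

```python
def enhanced_victim(frames):
    """Pick the first frame in the lowest-numbered nonempty class.

    frames: list of (reference_bit, modify_bit) pairs, one per frame.
    """
    # Classes in preference order: clean & unused first, dirty & used last.
    for target in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for i, bits in enumerate(frames):
            if bits == target:
                return i
```

Note that a (0, 1) page is preferred over a (1, 0) page: recency of use outranks cleanliness in this classification, even though evicting the dirty page will cost a disk write.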
Count-Based Page Replacement
 There are many other algorithms that can be used for page replacement; for example, we can keep a counter of the number of references that have been made to each page.
a) LFU (Least Frequently Used): replaces the page with the smallest reference count. The reasoning is that an actively used page should have a large reference count.
 This algorithm suffers when a page is used heavily during the initial phase of a process but never again: because it was used heavily, it has a large count and remains in memory even though it is no longer needed.
b) MFU (Most Frequently Used): replaces the page with the largest reference count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
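The two victim-selection rules differ only in the direction of the comparison, which the following sketch makes explicit (illustrative helpers; counter bookkeeping is omitted):

```python
def lfu_victim(counts):
    """LFU: evict the page with the smallest reference count."""
    return min(counts, key=counts.get)

def mfu_victim(counts):
    """MFU: evict the page with the largest reference count, on the
    theory that low-count pages were probably just brought in."""
    return max(counts, key=counts.get)
```

Given counts {A: 9, B: 1, C: 4}, LFU evicts B while MFU evicts A, illustrating how the same statistic supports two opposite policies.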
ALLOCATION OF FRAMES
 1. Equal allocation - If there are m frames available and n processes to share them, each process gets m / n frames, and the leftovers are kept in a free-frame buffer pool.
 2. Proportional allocation - Allocate the frames in proportion to the size of each process: if the size of process Pi is Si, and S is the sum of the sizes of all processes in the system, then the allocation for process Pi is ai = m * Si / S, where m is the number of free frames available in the system.
 In local replacement, each process is allocated a fixed number of physical frames. When a page fault occurs, the operating system replaces a page only among the pages that belong to the faulting process.
 In global replacement, any page in the system can be selected as the victim, regardless of which process owns it.
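The proportional-allocation formula ai = m * Si / S can be computed directly (a hypothetical helper, rounding each share down to whole frames):

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process sizes: a_i = m * S_i / S."""
    total = sum(sizes)                    # S = sum of all process sizes
    return [m * s // total for s in sizes]
```

For two processes of 10 and 127 pages sharing 62 frames, the shares come out to about 4 and 57 frames; any frames lost to the floor division would go to the free-frame pool.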
THRASHING
 If the number of frames allocated to a low-priority process falls below the minimum required by the computer architecture, we must suspend that process's execution.
 A process is thrashing if it is spending more time paging than executing.
 If a process does not have enough frames, it quickly page-faults. To service the fault it must replace some page; but since all its pages are in active use, it replaces a page it will need again right away, and consequently it faults again and again.
 The process continues to fault, replacing pages that it must then immediately bring back in. This high paging activity, i.e. the excessive movement of pages back and forth between memory and secondary storage, is called thrashing.
Cause of Thrashing
 Thrashing results in severe performance problems.
 The operating system monitors CPU utilization; if it is low, the OS increases the degree of multiprogramming by introducing new processes into the system.
 Suppose a global page-replacement algorithm is used: it replaces pages without regard to the process to which they belong. If one process needs more frames, it starts faulting and takes frames away from other processes, which then fault in turn. CPU utilization drops, so the scheduler increases the degree of multiprogramming still further; thrashing sets in, and CPU utilization drops sharply.
 At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming.
 We can limit the effects of thrashing by using a local replacement algorithm. To prevent thrashing, we must provide each process with as many frames as it needs.