Principles of Virtual Memory: Virtual Memory, Paging, Segmentation
Overview: Virtual Memory, Paging, Virtual Memory and Linux
1. Virtual Memory 1.1 Why Virtual Memory (VM)? 1.2 What is VM? 1.3 The Mapping Process 1.4 VM: Features 1.5 VM: Advantages 1.6 VM: Disadvantages 1.7 VM: Implementation
1.1 Why Virtual Memory (VM)? Shortage of memory: a process may be too big for physical memory, and there may be more active processes than physical memory can hold, so efficient memory management is needed. Requirements of multiprogramming: an efficient protection scheme and a simple way of sharing.
1.2 What is VM? A program references virtual addresses (e.g. Mov AX, 0xA0F4); the mapping unit (MMU) uses a table (one per process) to translate each virtual address in a „piece“ of virtual memory to the corresponding physical address in a „piece“ of physical memory (here, virtual 0xA0F4 maps to physical 0xC0F4). Note: it does not matter at which physical address a „piece“ of VM is placed, since the corresponding addresses are mapped by the mapping unit.
1.3 The Mapping Process Usually every process has its own mapping table, i.e. its own virtual address space (assumed from now on). The MMU checks each virtual address using the mapping table: if the „piece“ is in physical memory, the address is translated; otherwise a memory access fault occurs, the OS brings the „piece“ in from HDD and adjusts the mapping table. Not every „piece“ of VM has to be present in PM: „pieces“ may be loaded from HDD as they are referenced, and rarely used „pieces“ may be discarded or written out to disk (swapping).
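The mapping process above can be sketched as follows. This is a minimal illustration (the piece size, table layout, and names are assumptions, not a real MMU interface): a per-process table maps virtual „piece“ numbers to physical ones, and a missing entry models the memory access fault the OS must service.

```python
# Minimal sketch of the mapping process: a per-process mapping table
# translates virtual "piece" numbers to physical ones; a missing entry
# models a memory access fault.
PIECE_SIZE = 0x1000  # assume 4 KiB "pieces"

def translate(mapping_table, vaddr):
    piece, offset = divmod(vaddr, PIECE_SIZE)
    if piece not in mapping_table:          # piece not in physical memory
        raise KeyError(f"access fault: piece {piece:#x} not present")
    return mapping_table[piece] * PIECE_SIZE + offset

# Example matching the slide: virtual piece 0xA sits in physical piece
# 0xC, so virtual 0xA0F4 translates to physical 0xC0F4.
table = {0xA: 0xC}
assert translate(table, 0xA0F4) == 0xC0F4
```

On a real fault the OS would load the „piece“ from disk, add the entry, and restart the access; here the KeyError simply marks where that would happen.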
1.4 VM: Features Swapping On lack of memory there is no need to swap out a complete process: find a rarely used „piece“, adjust the mapping table, and if the „piece“ was modified, write it out to disk; otherwise just discard it and save its HDD location. Danger of thrashing: a „piece“ just swapped out is immediately requested again, the system swaps in/out all the time, and no real work is done. Thus the „piece“ for swap-out has to be chosen carefully: keep track of „piece“ usage („age of piece“); hopefully a „piece“ used frequently lately will be used again in the near future (principle of locality!).
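The „age of piece“ bookkeeping above can be sketched with a least-recently-used order plus a dirty flag (a simplification for illustration; real systems approximate LRU with referenced bits rather than maintaining an exact order):

```python
# Sketch of "age of piece" replacement: evict the least recently used
# piece, and report whether it was modified (dirty) and so must be
# written to disk rather than just discarded.
from collections import OrderedDict

class PieceTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pieces = OrderedDict()   # piece -> dirty flag, in LRU order

    def touch(self, piece, write=False):
        evicted = None
        if piece in self.pieces:
            self.pieces.move_to_end(piece)       # now most recently used
        else:
            if len(self.pieces) >= self.capacity:
                victim, dirty = self.pieces.popitem(last=False)
                evicted = (victim, dirty)        # dirty -> write out first
            self.pieces[piece] = False
        if write:
            self.pieces[piece] = True
        return evicted

pm = PieceTable(capacity=2)
pm.touch(1); pm.touch(2, write=True)
assert pm.touch(3) == (1, False)    # piece 1 least recently used, clean
assert pm.touch(1) == (2, True)     # piece 2 evicted, must be written out
```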
1.4 VM: Features Protection Each process has its own virtual address space; processes are invisible to each other, so a process cannot access another process's memory. The MMU checks protection bits on memory access (during address mapping): „pieces“ can be protected from being written to, being executed, or even being read. The system can distinguish different protection levels (user / kernel mode). Write protection can be used to implement copy on write (see Sharing).
1.4 VM: Features Sharing „Pieces“ of different processes are mapped to one single „piece“ of physical memory. This allows sharing of code (saves memory), e.g. libraries. Copy on write: a „piece“ may be used by several processes until one writes to it (then that process gets its own copy). Sharing also simplifies interprocess communication (IPC) via shared memory.
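Copy on write can be sketched with a reference count per shared frame (all names and the frame model here are illustrative, not a real OS API): the frame stays shared until the first write, which gives the writer a private copy.

```python
# Sketch of copy on write: two processes share one frame; the first
# write triggers a private copy for the writer, leaving the other
# process's view unchanged.
frames = {0: bytearray(b"shared")}       # frame id -> contents
refcount = {0: 2}                        # two processes map frame 0
page_map = {"P1": 0, "P2": 0}            # process -> frame (write-protected)

def write(proc, data):
    frame = page_map[proc]
    if refcount[frame] > 1:              # shared: make a private copy
        refcount[frame] -= 1
        new = max(frames) + 1
        frames[new] = bytearray(frames[frame])
        refcount[new] = 1
        page_map[proc] = new
        frame = new
    frames[frame][:len(data)] = data

write("P1", b"mine!!")
assert bytes(frames[page_map["P1"]]) == b"mine!!"
assert bytes(frames[page_map["P2"]]) == b"shared"   # P2 unaffected
```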
1.5 VM: Advantages (1) VM supports: Swapping (rarely used „pieces“ can be discarded or swapped out; a „piece“ can be swapped back in to any free piece of physical memory large enough, since the mapping unit translates addresses). Protection. Sharing (common data or code may be shared to save memory). A process need not be in memory as a whole: no need for complicated overlay techniques (the OS does the job), the process may even be larger than all of physical memory, and data/code can be read from disk as needed.
1.5 VM: Advantages (2) Code can be placed anywhere in physical memory without relocation (addresses are mapped!). Increased CPU utilization: more processes can be held in memory (in part), hence more processes in ready state (consider: 80% HDD I/O wait time is not uncommon).
1.6 VM: Disadvantages Memory requirements (mapping tables). Longer memory access times (mapping table lookup); can be improved using a TLB.
1.7 VM: Implementation VM may be implemented using paging, segmentation (not covered here), or a combination of both (not covered here).
2. Paging 2.1 What is Paging? 2.2 Paging: Implementation 2.3 Paging: Features 2.4 Paging: Advantages 2.5 Paging: Disadvantages 2.6 Summary: Conversion of a Virtual Address
Valid-Invalid Bit With each page table entry a valid–invalid bit is associated (v means in-memory, i means not-in-memory). Initially the valid–invalid bit is set to i on all entries. Each page table entry thus holds a frame # plus the valid–invalid bit (a snapshot might read v v v v i i i ...).
2.1 What is Paging? Virtual memory is divided into equal-size pages (Page 0 ... Page 7); physical memory is divided into equal-size page frames (Frame 0 ... Frame 3). A page table (one per process, one entry per page, maintained by the OS) maps each valid (v) page to its page frame.
2.2 Paging: Implementation Typical page table entry: page frame #, plus bits for read (r), write (w), execute (x), valid (v), referenced (re), modified (m), shared (s), caching disabled (c), super-page (su), process id (pid), guard / extended guard data (g / gd), and other.
2.2 Paging: Implementation Single-level Page Tables One table per process, one entry per page: the virtual address splits into page # and offset; the page # indexes the page table (located via the Page Table Base Register, PTBR, entry address page # * L, where L is the size of an entry), and the resulting frame # is combined with the offset to form the physical address (e.g. page 0x2, offset 0x14 maps to frame 0x8, offset 0x14). Problem: page tables can get very large, e.g. a 32-bit address space with 4 KB pages means 2^20 entries per process, i.e. 4 MB at 4 B per entry; 64 bits would mean a 16777216 GB page table!!!!
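The single-level translation on this slide can be sketched directly: with 4 KiB pages the low 12 bits are the offset and the remaining bits are the page number that indexes the table.

```python
# Sketch of single-level translation for the numbers on the slide:
# a 32-bit address with 4 KiB pages splits into a 20-bit page number
# (hence 2^20 page-table entries) and a 12-bit offset.
OFFSET_BITS = 12                     # 4 KiB pages

def split(vaddr):
    return vaddr >> OFFSET_BITS, vaddr & ((1 << OFFSET_BITS) - 1)

def translate(page_table, vaddr):
    page, offset = split(vaddr)
    frame = page_table[page]         # one entry per page, indexed directly
    return (frame << OFFSET_BITS) | offset

# 2^20 entries * 4 bytes per entry = 4 MiB of page table per process
assert 2**20 * 4 == 4 * 2**20
table = {0x2: 0x8}                   # page 0x2 -> frame 0x8, as on the slide
assert translate(table, (0x2 << 12) | 0x14) == (0x8 << 12) | 0x14
```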
2.2 Paging: Implementation Multilevel Page Tables The virtual address is split into several page # fields (Page #1, Page #2, Page #3) plus an offset; each field indexes one level (page directory, page middle directory, page table) until the page frame # is found. Not all tables need be present (entries with v=0 mark absent subtrees), which saves memory, and each table's size can be restricted to one page. An oversized super-page can be mapped at a higher level, short-circuiting the rest of the walk.
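A three-level walk can be sketched as below. The bit widths are chosen only for illustration (tiny 2-bit indices so the example tables stay small); absent subtrees are represented by None, which is how multilevel tables save memory.

```python
# Sketch of a 3-level walk (directory -> middle directory -> table).
DIR_BITS, MID_BITS, TAB_BITS, OFF_BITS = 2, 2, 2, 12

def indices(vaddr):
    off = vaddr & ((1 << OFF_BITS) - 1)
    v = vaddr >> OFF_BITS
    tab = v & ((1 << TAB_BITS) - 1); v >>= TAB_BITS
    mid = v & ((1 << MID_BITS) - 1); v >>= MID_BITS
    return v & ((1 << DIR_BITS) - 1), mid, tab, off

def walk(directory, vaddr):
    d, m, t, off = indices(vaddr)
    middle = directory[d]            # absent subtree -> None (saves memory)
    if middle is None or middle[m] is None or middle[m][t] is None:
        raise LookupError("page fault")
    return (middle[m][t] << OFF_BITS) | off

# Only the one path that is actually used needs to exist:
directory = [None, [[None, 0x8, None, None], None, None, None], None, None]
vaddr = (1 << 16) | (0 << 14) | (1 << 12) | 0x14   # dir 1, mid 0, table 1
assert walk(directory, vaddr) == (0x8 << 12) | 0x14
```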
Page Fault The first reference to a page that is not in memory will trap to the operating system: a page fault. The operating system looks at another table to decide: an invalid reference means abort; otherwise the page is just not in memory, so: get an empty frame, swap the page into the frame, reset the tables, set the valid bit to v, and restart the instruction that caused the page fault.
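The steps above can be sketched as code (names and the backing-store model are illustrative only):

```python
# Sketch of the page-fault steps: abort on an invalid reference,
# otherwise bring the page in from backing store and retry the access.
frame_contents = {}

def access(page_table, valid_pages, backing_store, free_frames, page):
    entry = page_table.get(page)
    if entry is not None:
        return entry                          # valid bit v: in memory
    if page not in valid_pages:
        raise MemoryError("invalid reference -> abort")
    frame = free_frames.pop()                 # get an empty frame
    frame_contents[frame] = backing_store[page]   # swap page into frame
    page_table[page] = frame                  # reset table, set bit to v
    # the recursive call models restarting the faulting instruction:
    return access(page_table, valid_pages, backing_store, free_frames, page)

pt = {}
assert access(pt, {7}, {7: "data"}, [3], 7) == 3
assert pt == {7: 3}                           # second access: no fault
```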
Steps in Handling a Page Fault
2.3 Paging: Features Prepaging When a process requests consecutive pages (or just one), the OS loads the following pages into memory as well, expecting they will also be needed. This saves time when large contiguous structures are used (e.g. huge arrays), but wastes memory and time in case the pages are not needed.
2.3 Paging: Features Demand Paging On process startup only the first page is loaded into physical memory; pages are then loaded as referenced. This saves memory, but may cause frequent page faults until the process has its working set in physical memory. The OS may adjust its policy (demand / prepaging) depending on available free physical memory and on process types and history.
2.3 Paging: Features Simplified Swapping In a paging VM system, when a process requires 3 frames, the system simply swaps out the 3 least-used pages. In a non-paging VM system, swapping out the 3 least-used „pieces“ will not work unless the freed „piece“ is large enough; the swap algorithm must try to create free pieces as big as possible (costly!).
2.4 Paging: Advantages Allocating memory is easy and cheap: any free page is fine, and the OS can take the first one out of the list it keeps. Paging eliminates external fragmentation: data (page frames) can be scattered all over PM, since pages are mapped appropriately anyway. It allows demand paging and prepaging, and makes swapping more efficient: no need for considerations about fragmentation, just swap out the page least likely to be used.
2.5 Paging: Disadvantages Longer memory access times (page table lookup); can be improved using a TLB, guarded page tables, or inverted page tables. Memory requirements (one entry per VM page); improve using multilevel page tables and variable page sizes (super-pages), guarded page tables, or a Page Table Length Register (PTLR) to limit virtual memory size. Internal fragmentation; yet only an average of about ½ page per contiguous address range.
Translation Lookaside Buffer Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry and one to fetch the data. To overcome this problem, a high-speed cache for page table entries is set up, called the TLB (Translation Lookaside Buffer).
Translation Lookaside Buffer The TLB contains the page table entries that have been most recently used, and functions the same way as a memory cache.
Translation Lookaside Buffer Given a virtual address, the processor examines the TLB. If the page table entry is present (a hit), the frame number is retrieved and the real address is formed. If the page table entry is not found in the TLB (a miss), the page number is used to index the process page table. The system then first checks whether the page is already in main memory; if not, a page fault is issued. Finally, the TLB is updated to include the new page entry.
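The TLB sequence above can be sketched as follows (a plain dict stands in for the associative hardware; no replacement policy is modeled):

```python
# Sketch of the TLB lookup sequence: check the TLB first; on a miss,
# fall back to the page table and cache the entry for next time.
tlb = {}                 # small, fast: page -> frame
page_table = {0x2: 0x8}  # slow path: one extra memory access to consult
stats = {"hit": 0, "miss": 0}

def lookup(page):
    if page in tlb:                      # TLB hit: no page-table access
        stats["hit"] += 1
        return tlb[page]
    stats["miss"] += 1                   # TLB miss: index the page table
    frame = page_table[page]             # (would page-fault if not present)
    tlb[page] = frame                    # update TLB with the new entry
    return frame

assert lookup(0x2) == 0x8 and stats == {"hit": 0, "miss": 1}
assert lookup(0x2) == 0x8 and stats == {"hit": 1, "miss": 1}
```

The second lookup hits the TLB and skips the page table entirely, which is exactly the memory access the TLB exists to save.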
2.6 Summary: Conversion of a Virtual Address Hardware: the virtual address is looked up in the TLB. On a hit, access rights are checked: if OK, the physical address is formed; if not, a protection fault is raised. On a miss, the page table is consulted: if the page is in memory, the TLB is updated and translation proceeds; if not, a page fault is raised to the OS. OS, on a protection fault: if the reference is not legal, an exception is delivered to the process; if it is a copy-on-write fault, the page is copied and the page table updated. OS, on a page fault: if the reference is not legal, an exception is delivered to the process; otherwise, if memory is full, a page is swapped out first; an HDD read request brings the page in from disk and the process is put into the blocking state; when the HDD I/O-complete interrupt arrives, the OS updates the page table and puts the process back into the ready state.
5. Virtual Memory and Linux 5.1 Why VM under Linux? 5.2 The Linux VM System 5.3 The Linux Protection Scheme 5.4 The Linux Paging System
5.1 Why VM under Linux? Linux is a multitasking, multiuser OS. It requires: protection; the ability to ensure pseudo-parallel execution (even if the cumulated process size is greater than physical memory); and efficient IPC methods (sharing). A good solution: virtual memory.
5.2 The Linux VM System The kernel runs in physical addressing mode and maintains the VM system, which is basically a paging system. Some remainders of a CoSP scheme are present: process memory is segmented into kernel/user memory, and a process in user mode (see 5.3) may not access kernel memory. V2.0 defined code and data segments for each of kernel and user mode; V2.2 still defines those segments, but for the complete virtual address space.
5.3 The Linux Protection Scheme Linux uses two modes: kernel and user mode. It makes no use of the elaborate protection scheme x86 processors provide (it only uses ring 0 (kernel) and ring 3 (user)). Programs are all started in user mode; when a program needs to use system resources, it must make a system call (via software interrupt), whereupon kernel code is executed on behalf of the process. Kernel processes permanently run in kernel mode.
5.4 The Linux Paging System Linux employs architecture-independent memory management code and uses a 3-level paging system. On Intel x86 systems with only 2-level paging, each entry in the page directory is treated as a page middle directory with only one entry. 4 MB pages are used on some Intel systems (e.g. for graphics memory). Linux uses valid, protection, referenced, and modified bits, and employs copy on write and demand paging.
Windows Paging Policy Demand paging without prepaging; a certain number of free page frames is maintained. On a 32-bit machine each process has 4 GB of virtual address space. Backing store: disk space is not assigned to a page until it is paged out. Windows uses working sets (per process): a working set consists of the pages mapped into memory that can be accessed without a page fault, and has a min/max size range that changes over time. If a page fault occurs and the working set is below min, the page is simply added; if it is above max, a page is evicted from the working set before the new page is added; if too many page faults occur, the size of the working set is increased. When evicting pages, large processes that have been idle for a long time are chosen before small active processes, and the foreground process is considered last.
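The min/max working-set rule can be sketched as below. This is only an illustration of the rule as stated on the slide: the min/max values and the FIFO eviction choice are assumptions, not Windows' actual policy.

```python
# Sketch of the working-set fault rule: below max (including below min)
# the set just grows; at max, one page is evicted before the new page
# is added. FIFO eviction is used purely for illustration.
WS_MIN, WS_MAX = 2, 4            # illustrative values only

def on_page_fault(working_set, new_page):
    if len(working_set) >= WS_MAX:       # above max: evict one page first
        working_set.pop(0)
    working_set.append(new_page)         # below max: the set simply grows
    return working_set

ws = []
for page in [1, 2, 3, 4, 5]:
    on_page_fault(ws, page)
assert ws == [2, 3, 4, 5]        # capped at WS_MAX; page 1 was evicted
```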
Virtual-address Space
Shared Library Using Virtual Memory
Caches If you were to implement a system using the above theoretical model, it would work, but not particularly efficiently. Both operating system and processor designers try hard to extract more performance from the system. Apart from making the processors, memory and so on faster, the best approach is to maintain caches of useful information and data that make some operations faster. Linux uses a number of memory-management-related caches:
Buffer Cache: contains data buffers that are used by the block device drivers. Page Cache: used to speed up access to images and data on disk. Swap Cache: only modified (or dirty) pages are saved in the swap file. Hardware Caches.
Linux Page Tables Linux assumes that there are three levels of page tables
