Amrita School of Engineering, Bangalore
Ms. Harika Pudugosula
Lecturer
Department of Electronics & Communication Engineering
• Background
• Swapping
• Contiguous Memory Allocation
• Segmentation
• Paging
• Structure of the Page Table
2
Structure of the Page Table
3
• Memory structures for paging can get huge using straightforward methods
• Consider a 32-bit logical address space, as on modern computers
• Page size of 4 KB (2^12 bytes)
• Page table would have about 1 million entries (2^32 / 2^12 = 2^20), checked in the sketch below
• If each entry is 4 bytes -> 4 MB of physical memory for the page table alone
• That amount of memory used to cost a lot
• Don't want to allocate it contiguously in main memory
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
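As a quick check of the arithmetic above, the following minimal C sketch recomputes the numbers; the 32-bit address space, 4 KB page size, and 4-byte entries are the assumptions stated on this slide.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Assumptions from the slide: 32-bit logical addresses,
       4 KB (2^12-byte) pages, 4-byte page-table entries. */
    uint64_t logical_space = 1ULL << 32;               /* 2^32 bytes       */
    uint64_t page_size     = 1ULL << 12;               /* 4 KB             */
    uint64_t entry_size    = 4;                        /* bytes per entry  */

    uint64_t entries     = logical_space / page_size;  /* 2^20, ~1 million */
    uint64_t table_bytes = entries * entry_size;       /* 4 MB             */

    printf("page-table entries: %llu\n", (unsigned long long)entries);
    printf("page-table size   : %llu MB\n",
           (unsigned long long)(table_bytes >> 20));
    return 0;
}
```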
Hierarchical Paging
4
• Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of physical address space for the page table alone
• Clearly, we would not want to allocate the page table contiguously in main memory
• One simple solution to this problem is to divide the page table into smaller pieces
• One way is to use a two-level paging algorithm, in which the page table itself is also
paged
Hierarchical Paging
5
Figure. A two-level page-table scheme
• A logical address (on a 32-bit machine with a 1 KB page size) is divided into:
• a page number consisting of 22 bits
• a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided into:
• a 12-bit page number
• a 10-bit page offset
• Thus, a logical address is as follows, where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table (a bit-level sketch of this split follows)
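A minimal C sketch of this bit split, assuming the 12/10/10 layout described above; the function and field names (split_address, p1, p2, d) are illustrative, not part of any particular system.

```c
#include <stdint.h>

/* 32-bit logical address, 1 KB pages:
   | p1 (12 bits) | p2 (10 bits) | d (10 bits) |  */
static inline void split_address(uint32_t addr,
                                 uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *d  = addr & 0x3FF;          /* low 10 bits: offset within the page      */
    *p2 = (addr >> 10) & 0x3FF;  /* next 10 bits: index into an inner table  */
    *p1 = (addr >> 20) & 0xFFF;  /* high 12 bits: index into the outer table */
}
```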
Hierarchical Paging
6
Figure. Address translation for a two-level 32-bit paging architecture
• Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table (sketched below)
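A hedged sketch of the forward-mapped walk the figure shows, assuming the outer and inner tables are plain in-memory arrays of frame numbers; this is illustrative, not a real MMU interface.

```c
#include <stdint.h>

/* outer_table has 2^12 entries; each points to an inner table of
   2^10 frame numbers (the 12/10/10 split above, 1 KB pages). */
uint32_t translate_two_level(uint32_t **outer_table, uint32_t addr)
{
    uint32_t p1 = (addr >> 20) & 0xFFF;   /* index into the outer page table */
    uint32_t p2 = (addr >> 10) & 0x3FF;   /* index into the inner page table */
    uint32_t d  = addr & 0x3FF;           /* offset within the page          */

    uint32_t *inner = outer_table[p1];    /* first extra memory access       */
    uint32_t frame  = inner[p2];          /* second extra memory access      */
    return (frame << 10) | d;             /* frame number + offset           */
}
```

The two table reads are the extra memory references a two-level scheme adds to every translation unless the TLB hits.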
Hierarchical Paging
7
• Even a two-level paging scheme is not sufficient for a 64-bit logical address space
• If the page size is 4 KB (2^12 bytes)
• Then a single-level page table would have 2^52 entries
• With a two-level scheme, the inner page tables could each hold 2^10 4-byte entries
• The address would look like
• The outer page table then has 2^42 entries, or 2^44 bytes (the arithmetic is checked below)
• One solution is to add a 2nd outer page table
• But in the following example the 2nd outer page table is still 2^34 bytes in size
• And possibly 4 memory accesses are needed to reach one physical memory location
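A small sketch that rechecks the sizes quoted above, assuming a 64-bit address, a 12-bit offset, a 10-bit inner index, and 4-byte entries.

```c
#include <stdio.h>

int main(void) {
    /* Assumptions from the slide: 64-bit logical addresses, 4 KB pages
       (12-bit offset), 10-bit inner index, 4-byte entries. */
    int addr_bits = 64, offset_bits = 12, inner_bits = 10, entry_shift = 2;

    int page_number_bits = addr_bits - offset_bits;        /* 52 */
    int outer_bits       = page_number_bits - inner_bits;  /* 42 */

    printf("single-level page table : 2^%d entries\n", page_number_bits);
    printf("outer page table        : 2^%d entries = 2^%d bytes\n",
           outer_bits, outer_bits + entry_shift);
    return 0;
}
```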
Hashed Page Tables
8
• Common for address spaces larger than 32 bits
• The virtual page number is hashed into a page table
• This page table contains a chain of elements hashing to the same location
• Each element contains (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element
• Virtual page numbers are compared along this chain, searching for a match
• If a match is found, the corresponding physical frame is extracted (a sketch of this lookup follows the list)
• Variation for 64-bit addresses is clustered page tables
• Similar to hashed but each entry refers to several pages (such as 16) rather
than 1
• Especially useful for sparse address spaces (where memory references are non-contiguous and scattered)
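A minimal sketch of the chained lookup described above; the structure, bucket count, and function names are illustrative assumptions, not a specific kernel's API.

```c
#include <stdint.h>
#include <stddef.h>

/* One element in a hash chain: (1) virtual page number,
   (2) mapped page frame, (3) pointer to the next element. */
struct hpt_entry {
    uint64_t vpn;
    uint64_t frame;
    struct hpt_entry *next;
};

#define HPT_BUCKETS 1024u   /* illustrative table size */

/* Returns the frame mapped to vpn, or -1 if it is not present. */
int64_t hpt_lookup(struct hpt_entry *table[HPT_BUCKETS], uint64_t vpn)
{
    struct hpt_entry *e = table[vpn % HPT_BUCKETS];  /* hash the VPN */
    while (e != NULL) {
        if (e->vpn == vpn)          /* compare along the chain */
            return (int64_t)e->frame;
        e = e->next;
    }
    return -1;                      /* miss: handled elsewhere */
}
```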
Hashed Page Tables
9
Figure. Hashed Page table
Inverted Page Tables
10
• Rather than each process having a page table and keeping track of all possible
logical pages, track all physical pages
• One entry for each real page of memory
• Entry consists of the virtual address of the page stored in that real memory location,
with information about the process that owns that page
• Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs
• Use a hash table to limit the search to one, or at most a few, page-table entries (a sketch of such a lookup follows the list)
• TLB can accelerate access
• But how to implement shared memory?
• Since there is one entry per physical page, only one mapping of a virtual address to the shared physical address is possible
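A minimal sketch of an inverted-table lookup keyed by process id and virtual page number; a real system would front this linear search with the hash table mentioned above. All names and sizes here are illustrative.

```c
#include <stdint.h>

/* One entry per physical frame: which process and virtual page
   currently occupy it. */
struct ipt_entry {
    int      pid;
    uint64_t vpn;
};

#define NUM_FRAMES 4096u   /* illustrative physical memory size */

/* Returns the frame number whose entry matches (pid, vpn), or -1.
   This search is the cost noted above; hashing (pid, vpn) limits it
   to one or a few entries. */
int ipt_lookup(const struct ipt_entry table[NUM_FRAMES],
               int pid, uint64_t vpn)
{
    for (unsigned f = 0; f < NUM_FRAMES; f++)
        if (table[f].pid == pid && table[f].vpn == vpn)
            return (int)f;
    return -1;
}
```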
Inverted Page Tables
11
Figure. Inverted Page table
Oracle SPARC Solaris
12
• Consider a modern, 64-bit operating system example, tightly integrated with hardware: Solaris running on the SPARC CPU
• Goals are efficiency and low overhead
• Based on hashing, but more complex
• Two hash tables
• One for the kernel and one for all user processes
• Each maps memory addresses from virtual to physical memory
• Each entry represents a contiguous area of mapped virtual memory
• More efficient than having a separate hash-table entry for each page
• Each entry has a base address and a span indicating the number of pages the entry represents (illustrated in the sketch below)
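An illustrative sketch of the kind of entry described above, holding a base address and a span; the field names and the 8 KB page size are assumptions, not actual Solaris data structures.

```c
#include <stdint.h>

/* One hash-table entry covering a contiguous run of mapped pages.
   Field names and the 8 KB page size are illustrative assumptions. */
struct mapping_entry {
    uint64_t vbase;   /* starting virtual address of the region   */
    uint64_t pbase;   /* corresponding starting physical address  */
    uint32_t span;    /* number of pages the entry represents     */
};

#define PAGE_SHIFT 13u   /* assumed 8 KB pages */

/* If the entry covers vaddr, write the physical address and return 1. */
int entry_translate(const struct mapping_entry *e,
                    uint64_t vaddr, uint64_t *paddr)
{
    uint64_t region_size = (uint64_t)e->span << PAGE_SHIFT;
    if (vaddr >= e->vbase && vaddr - e->vbase < region_size) {
        *paddr = e->pbase + (vaddr - e->vbase);
        return 1;
    }
    return 0;
}
```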
Oracle SPARC Solaris
13
• The TLB holds translation table entries (TTEs) for fast hardware lookups
• A cache of TTEs resides in a translation storage buffer (TSB)
• Includes an entry per recently accessed page
• A virtual address reference causes a TLB search
• On a TLB miss, the hardware walks the in-memory TSB looking for the TTE corresponding to the address
• If a match is found, the CPU copies the TSB entry into the TLB and translation completes
• If no match is found, the kernel is interrupted to search the hash table
• The kernel then creates a TTE from the appropriate hash table and stores it in the TSB; the interrupt handler returns control to the MMU, which completes the address translation (the overall flow is sketched below)
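A high-level sketch of the lookup order described on this slide (TLB, then TSB, then the kernel's hash table); every type and function here is an illustrative placeholder stub, not SPARC or Solaris code.

```c
#include <stdint.h>
#include <stdbool.h>

/* Placeholder stubs for the three lookup levels named on the slide;
   the first two happen in hardware on a real SPARC. Each stub simply
   reports a miss here. */
static bool tlb_lookup(uint64_t va, uint64_t *pa)  { (void)va; (void)pa; return false; }
static bool tsb_lookup(uint64_t va, uint64_t *pa)  { (void)va; (void)pa; return false; }
static bool hash_lookup(uint64_t va, uint64_t *pa) { (void)va; (void)pa; return false; }
static void tsb_insert(uint64_t va, uint64_t pa)   { (void)va; (void)pa; }

/* The lookup order described above: TLB, then TSB, then kernel hash table. */
bool translate(uint64_t va, uint64_t *pa)
{
    if (tlb_lookup(va, pa))  return true;   /* TLB hit                        */
    if (tsb_lookup(va, pa))  return true;   /* TSB hit: TTE copied into TLB   */
    if (hash_lookup(va, pa)) {              /* kernel searches its hash table */
        tsb_insert(va, *pa);                /* new TTE stored in the TSB      */
        return true;                        /* MMU completes the translation  */
    }
    return false;                           /* unmapped: page fault handling  */
}
```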
References
14
1. Silberschatz and Galvin, “Operating System Concepts,” Ninth Edition, John Wiley and Sons, 2012.
15
Thank you
