Chapter 5 Large and Fast: Exploiting Memory Hierarchy
Multilevel Cache Considerations
- Primary (L1) cache: focus on minimal hit time
- L2 cache: focus on low miss rate to avoid main memory access
  - Hit time has less overall impact
- Results
  - L1 cache usually smaller than a single-level cache
  - L1 block size smaller than L2 block size
  - (The AMAT sketch below shows how these choices combine)
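To make the trade-off concrete, here is a minimal sketch in C of the average memory access time (AMAT) for a two-level hierarchy; every latency and miss rate below is an illustrative assumption, not a figure from the slides.

```c
#include <stdio.h>

/* AMAT for a two-level cache. All numbers are illustrative
   assumptions, not figures from the slides. */
int main(void) {
    double l1_hit       = 1.0;   /* cycles: L1 optimized for hit time  */
    double l1_miss_rate = 0.05;  /* L1 kept small and fast             */
    double l2_hit       = 10.0;  /* cycles: L2 optimized for miss rate */
    double l2_miss_rate = 0.10;  /* fraction of L1 misses missing L2   */
    double mem_penalty  = 200.0; /* cycles to main memory              */

    /* AMAT = L1 hit + L1 miss rate x (L2 hit + L2 miss rate x memory penalty) */
    double amat = l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_penalty);
    printf("AMAT = %.2f cycles\n", amat);  /* 1 + 0.05 * (10 + 0.10 * 200) = 2.50 */
    return 0;
}
```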
Interactions with Advanced CPUs
- Out-of-order CPUs can execute instructions during a cache miss
  - Pending store stays in load/store unit
  - Dependent instructions wait in reservation stations
  - Independent instructions continue
- Effect of a miss depends on program data flow
  - Much harder to analyse
  - Use system simulation
Interactions with Software
- Misses depend on memory access patterns
  - Algorithm behavior
  - Compiler optimization for memory access
§5.4 Virtual Memory

Virtual Memory
- Use main memory as a “cache” for secondary (disk) storage
  - Managed jointly by CPU hardware and the operating system (OS)
- Programs share main memory
  - Each gets a private virtual address space holding its frequently used code and data
  - Protected from other programs
- CPU and OS translate virtual addresses to physical addresses
  - VM “block” is called a page
  - VM translation “miss” is called a page fault
Address Translation
- Fixed-size pages (e.g., 4KB)
- (Address split sketched below)
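A sketch of that split for the 4KB page size named above, written in C; the 32-bit address width matches the deck's later cache example and is otherwise an assumption.

```c
#include <stdint.h>
#include <stdio.h>

/* With 4KB (2^12-byte) pages, the low 12 bits of an address are the
   page offset and the high bits are the virtual page number (VPN).
   The offset is unchanged by translation; only the VPN is mapped. */
#define PAGE_SHIFT  12
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)   /* 0xFFF */

int main(void) {
    uint32_t va     = 0x00403A7Cu;             /* example virtual address */
    uint32_t vpn    = va >> PAGE_SHIFT;
    uint32_t offset = va & OFFSET_MASK;
    printf("VA 0x%08X -> VPN 0x%05X, offset 0x%03X\n", va, vpn, offset);
    return 0;
}
```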
Page Fault Penalty
- On page fault, the page must be fetched from disk
  - Takes millions of clock cycles
  - Handled by OS code
- Try to minimize page fault rate
  - Fully associative placement
  - Smart replacement algorithms
Page Tables
- Store placement information
  - Array of page table entries (PTEs), indexed by virtual page number
  - Page table register in CPU points to page table in physical memory
- If page is present in memory
  - PTE stores the physical page number
  - Plus other status bits (referenced, dirty, …)
- If page is not present
  - PTE can refer to location in swap space on disk
- (A lookup sketch follows below)
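A minimal single-level lookup sketch in C, continuing the 4KB-page assumption; the PTE layout (one valid bit plus a physical page number) is simplified relative to real hardware.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Simplified PTE: real entries also carry referenced, dirty, and
   protection bits, or a swap-space location when not present. */
typedef struct {
    bool     valid;  /* page present in physical memory? */
    uint32_t ppn;    /* physical page number if valid    */
} pte_t;

/* Translate va, or report a page fault. page_table is the array the
   page table register points at, indexed by virtual page number. */
bool translate(const pte_t *page_table, uint32_t va, uint32_t *pa) {
    uint32_t vpn = va >> PAGE_SHIFT;
    pte_t pte = page_table[vpn];
    if (!pte.valid)
        return false;  /* page fault: the OS takes over */
    *pa = (pte.ppn << PAGE_SHIFT) | (va & ((1u << PAGE_SHIFT) - 1));
    return true;
}
```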
Translation Using a Page Table
Mapping Pages to Storage
Replacement and Writes
- To reduce page fault rate, prefer least-recently used (LRU) replacement
  - Reference bit (aka use bit) in PTE set to 1 on access to page
  - Periodically cleared to 0 by OS
  - A page with reference bit = 0 has not been used recently (see the sweep sketch below)
- Disk writes take millions of cycles
  - Written a block at a time, not individual locations
  - Write-through is impractical; use write-back
  - Dirty bit in PTE set when page is written
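A sketch in C of the reference-bit mechanism described above; the sweep policy and victim preference are assumptions about one reasonable OS implementation, not details from the slides.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified PTE holding just the status bits this slide names. */
typedef struct {
    bool valid;
    bool referenced;  /* set to 1 by hardware on each access to the page */
    bool dirty;       /* set to 1 by hardware when the page is written   */
} pte_t;

/* Periodic OS sweep: clear all reference bits. A page whose bit is
   still 0 at replacement time has not been used since the last sweep. */
void sweep_reference_bits(pte_t *pt, size_t n) {
    for (size_t i = 0; i < n; i++)
        pt[i].referenced = false;
}

/* Approximate LRU: prefer a not-recently-used page, and among those a
   clean one, which avoids a multi-million-cycle write-back to disk. */
size_t choose_victim(const pte_t *pt, size_t n) {
    size_t fallback = 0;  /* arbitrary victim if every page was used */
    for (size_t i = 0; i < n; i++) {
        if (!pt[i].valid) continue;
        if (!pt[i].referenced && !pt[i].dirty) return i;  /* best case */
        if (!pt[i].referenced) fallback = i;              /* unused but dirty */
    }
    return fallback;
}
```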
Fast Translation Using a TLB
- Address translation would appear to require extra memory references
  - One to access the PTE, then the actual memory access
- But access to page tables has good locality
  - So use a fast cache of PTEs within the CPU
  - Called a Translation Look-aside Buffer (TLB)
  - Typical: 16–512 PTEs, 0.5–1 cycle for a hit, 10–100 cycles for a miss, 0.01%–1% miss rate
  - Misses could be handled by hardware or software
- (Lookup sketch below)
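A tiny fully associative TLB sketch in C; the entry count is picked from the low end of the slide's typical range, and real TLBs also hold protection and dirty bits. The search is parallel in hardware, sequential here.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16  /* low end of the slide's 16-512 range */

typedef struct {
    bool     valid;
    uint32_t vpn;  /* tag: virtual page number   */
    uint32_t ppn;  /* data: physical page number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* A hit returns the cached translation without the extra memory
   reference to the page table; a miss triggers a refill (hardware
   or software, as described on the next slides). */
bool tlb_lookup(uint32_t vpn, uint32_t *ppn) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *ppn = tlb[i].ppn;
            return true;   /* TLB hit */
        }
    }
    return false;          /* TLB miss */
}
```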
TLB Misses
- If page is in memory
  - Load the PTE from memory and retry
  - Could be handled in hardware
    - Can get complex for more complicated page table structures
  - Or in software
    - Raise a special exception, with optimized handler
- If page is not in memory (page fault)
  - OS handles fetching the page and updating the page table
  - Then restart the faulting instruction
TLB Miss Handler
- A TLB miss indicates either
  - Page present, but PTE not in TLB, or
  - Page not present
- Must recognize TLB miss before destination register is overwritten
  - Raise exception
- Handler copies PTE from memory to TLB (see the sketch below)
  - Then restarts instruction
  - If page not present, a page fault will occur
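A hedged C sketch of the software path just described; on a machine like MIPS this is a handful of privileged instructions, and the extern helpers here are hypothetical stand-ins for those operations.

```c
#include <stdint.h>

/* Hypothetical low-level hooks, standing in for privileged
   operations (e.g., reading the faulting VPN from a control
   register, writing a TLB entry with a special instruction). */
extern uint32_t faulting_vpn(void);
extern uint32_t read_pte(uint32_t vpn);         /* load PTE from page table */
extern void     tlb_write_random(uint32_t pte); /* install PTE in the TLB   */

/* Runs in the exception handler, entered before the faulting
   instruction's destination register has been overwritten. */
void tlb_miss_handler(void) {
    uint32_t pte = read_pte(faulting_vpn());  /* one memory reference  */
    tlb_write_random(pte);                    /* copy PTE into the TLB */
    /* Returning restarts the faulting instruction. If the PTE's valid
       bit is clear, the retry raises a page fault for the OS. */
}
```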
Page Fault Handler
- Use faulting virtual address to find PTE
- Locate page on disk
- Choose page to replace
  - If dirty, write to disk first
- Read page into memory and update page table
- Make process runnable again
  - Restart from faulting instruction
TLB and Cache Interaction
- If cache tag uses physical address
  - Need to translate before cache lookup
- Alternative: use virtual address tag
  - Complications due to aliasing
    - Different virtual addresses for shared physical address
Memory Protection
- Different tasks can share parts of their virtual address spaces
  - But need to protect against errant access
  - Requires OS assistance
- Hardware support for OS protection
  - Privileged supervisor mode (aka kernel mode)
  - Privileged instructions
  - Page tables and other state information only accessible in supervisor mode
  - System call exception (e.g., syscall in MIPS)
§5.5 A Common Framework for Memory Hierarchies

The Memory Hierarchy (The BIG Picture)
- Common principles apply at all levels of the memory hierarchy
  - Based on notions of caching
- At each level in the hierarchy
  - Block placement
  - Finding a block
  - Replacement on a miss
  - Write policy
Block Placement
- Determined by associativity (see the sketch below)
  - Direct mapped (1-way set associative): one choice for placement
  - n-way set associative: n choices within a set
  - Fully associative: any location
- Higher associativity reduces miss rate
  - Increases complexity, cost, and access time
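A small C sketch of how many candidate locations each policy allows; the 64-block geometry and 4-way organization are assumed examples.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t num_blocks = 64;                 /* assumed cache size in blocks */
    uint32_t ways       = 4;                  /* assumed associativity        */
    uint32_t num_sets   = num_blocks / ways;  /* 16 sets                      */
    uint32_t block_addr = 1234;               /* memory block number          */

    /* Direct mapped (1-way): exactly one candidate location. */
    printf("direct mapped: block %u\n", block_addr % num_blocks);

    /* n-way set associative: one set, any of n ways within it. */
    printf("4-way: set %u, any of %u ways\n", block_addr % num_sets, ways);

    /* Fully associative: any of the 64 blocks may hold it. */
    printf("fully associative: any of %u blocks\n", num_blocks);
    return 0;
}
```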
Finding a Block
- Hardware caches: reduce comparisons to reduce cost
- Virtual memory: full table lookup makes full associativity feasible
  - Benefit in reduced miss rate

| Associativity         | Location method                           | Tag comparisons |
|-----------------------|-------------------------------------------|-----------------|
| Direct mapped         | Index                                     | 1               |
| n-way set associative | Set index, then search entries in the set | n               |
| Fully associative     | Search all entries                        | #entries        |
| Fully associative     | Full lookup table                         | 0               |
Replacement
- Choice of entry to replace on a miss
  - Least recently used (LRU): complex and costly hardware for high associativity
  - Random: close to LRU, easier to implement
- Virtual memory: LRU approximation with hardware support
Write Policy
- Write-through
  - Update both upper and lower levels
  - Simplifies replacement, but may require a write buffer
- Write-back
  - Update upper level only
  - Update lower level when block is replaced
  - Need to keep more state
- Virtual memory
  - Only write-back is feasible, given disk write latency
Sources of Misses
- Compulsory misses (aka cold start misses): first access to a block
- Capacity misses: due to finite cache size
  - A replaced block is later accessed again
- Conflict misses (aka collision misses): in a non-fully associative cache
  - Due to competition for entries in a set
  - Would not occur in a fully associative cache of the same total size
Cache Design Trade-offs

| Design change          | Effect on miss rate        | Negative performance effect |
|------------------------|----------------------------|-----------------------------|
| Increase cache size    | Decrease capacity misses   | May increase access time    |
| Increase associativity | Decrease conflict misses   | May increase access time    |
| Increase block size    | Decrease compulsory misses | Increases miss penalty; for very large blocks, may increase miss rate due to pollution |
§5.6 Virtual Machines

Virtual Machines
- Host computer emulates guest operating system and machine resources
  - Improved isolation of multiple guests
  - Avoids security and reliability problems
  - Aids sharing of resources
- Virtualization has some performance impact
  - Feasible with modern high-performance computers
- Examples
  - IBM VM/370 (1970s technology!)
  - VMWare
  - Microsoft Virtual PC
Virtual Machine Monitor
- Maps virtual resources to physical resources
  - Memory, I/O devices, CPUs
- Guest code runs on native machine in user mode
  - Traps to VMM on privileged instructions and access to protected resources
- Guest OS may be different from host OS
- VMM handles real I/O devices
  - Emulates generic virtual I/O devices for guest
Example: Timer Virtualization
- In native machine, on timer interrupt
  - OS suspends current process, handles interrupt, selects and resumes next process
- With Virtual Machine Monitor
  - VMM suspends current VM, handles interrupt, selects and resumes next VM
- If a VM requires timer interrupts
  - VMM emulates a virtual timer
  - Emulates interrupt for VM when physical timer interrupt occurs
Instruction Set Support
- User and System modes
- Privileged instructions only available in system mode
  - Trap to system if executed in user mode
- All physical resources only accessible using privileged instructions
  - Including page tables, interrupt controls, I/O registers
- Renaissance of virtualization support
  - Current ISAs (e.g., x86) adapting
§5.7 Using a Finite State Machine to Control a Simple Cache

Cache Control
- Example cache characteristics
  - Direct-mapped, write-back, write allocate
  - Block size: 4 words (16 bytes)
  - Cache size: 16 KB (1024 blocks)
  - 32-bit byte addresses
  - Valid bit and dirty bit per block
  - Blocking cache: CPU waits until access is complete
- Address breakdown: Tag = bits 31–14 (18 bits), Index = bits 13–4 (10 bits), Offset = bits 3–0 (4 bits); see the extraction sketch below
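The field extraction for exactly this cache can be written directly from the breakdown above; only the sample address is an arbitrary choice.

```c
#include <stdint.h>
#include <stdio.h>

/* The slide's cache: direct-mapped, 16 KB, 16-byte blocks,
   32-bit byte addresses -> 4-bit offset, 10-bit index, 18-bit tag. */
#define OFFSET_BITS 4
#define INDEX_BITS  10

int main(void) {
    uint32_t addr   = 0x12345678u;  /* arbitrary sample address */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
    printf("tag=0x%05X index=%u offset=%u\n", tag, index, offset);
    return 0;
}
```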
Interface Signals
- CPU to Cache: Read/Write, Valid, Address (32), Write Data (32), Read Data (32), Ready
- Cache to Memory: Read/Write, Valid, Address (32), Write Data (128), Read Data (128), Ready
  - Multiple cycles per access
Finite State Machines
- Use an FSM to sequence control steps
- Set of states, transition on each clock edge
  - State values are binary encoded
  - Current state stored in a register
  - Next state = f_n(current state, current inputs)
- Control output signals = f_o(current state)
- (A next-state sketch follows below)
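A minimal C sketch of the next-state function for the cache controller of §5.7; the four states match the textbook controller (Idle, Compare Tag, Write-Back, Allocate), while the input signals and transition conditions are simplified assumptions.

```c
#include <stdbool.h>

/* States of the simple cache controller. */
typedef enum { IDLE, COMPARE_TAG, WRITE_BACK, ALLOCATE } state_t;

typedef struct {
    bool valid_request;  /* CPU asserts Valid           */
    bool hit;            /* tag match and valid bit set */
    bool dirty;          /* victim block is dirty       */
    bool mem_ready;      /* memory asserts Ready        */
} inputs_t;

/* Next state = f_n(current state, current inputs). Outputs (not
   shown) would be f_o(current state) in a Moore machine. */
state_t next_state(state_t s, inputs_t in) {
    switch (s) {
    case IDLE:        return in.valid_request ? COMPARE_TAG : IDLE;
    case COMPARE_TAG: return in.hit ? IDLE
                           : (in.dirty ? WRITE_BACK : ALLOCATE);
    case WRITE_BACK:  return in.mem_ready ? ALLOCATE : WRITE_BACK;
    case ALLOCATE:    return in.mem_ready ? COMPARE_TAG : ALLOCATE;
    }
    return IDLE;
}
```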
Cache Controller FSM
- Could partition into separate states to reduce clock cycle time
§5.8 Parallelism and Memory Hierarchies: Cache Coherence

Cache Coherence Problem
- Suppose two CPU cores share a physical address space
  - Write-through caches

| Time step | Event               | CPU A’s cache | CPU B’s cache | Memory |
|-----------|---------------------|---------------|---------------|--------|
| 0         |                     |               |               | 0      |
| 1         | CPU A reads X       | 0             |               | 0      |
| 2         | CPU B reads X       | 0             | 0             | 0      |
| 3         | CPU A writes 1 to X | 1             | 0             | 1      |
Coherence Defined
- Informally: reads return most recently written value
- Formally:
  - P writes X; P reads X (no intervening writes) ⇒ read returns written value
  - P1 writes X; P2 reads X (sufficiently later) ⇒ read returns written value
    - c.f. CPU B reading X after step 3 in example
  - P1 writes X, P2 writes X ⇒ all processors see the writes in the same order
    - End up with the same final value for X
Cache Coherence Protocols
- Operations performed by caches in multiprocessors to ensure coherence
  - Migration of data to local caches: reduces bandwidth for shared memory
  - Replication of read-shared data: reduces contention for access
- Snooping protocols
  - Each cache monitors bus reads/writes
- Directory-based protocols
  - Caches and memory record sharing status of blocks in a directory
Invalidating Snooping Protocols
- Cache gets exclusive access to a block when it is to be written
  - Broadcasts an invalidate message on the bus
  - Subsequent read in another cache misses
    - Owning cache supplies updated value

| CPU activity        | Bus activity     | CPU A’s cache | CPU B’s cache | Memory |
|---------------------|------------------|---------------|---------------|--------|
|                     |                  |               |               | 0      |
| CPU A reads X       | Cache miss for X | 0             |               | 0      |
| CPU B reads X       | Cache miss for X | 0             | 0             | 0      |
| CPU A writes 1 to X | Invalidate for X | 1             |               | 0      |
| CPU B reads X       | Cache miss for X | 1             | 1             | 1      |
Memory Consistency
- When are writes seen by other processors?
  - “Seen” means a read returns the written value
  - Can’t be instantaneous
- Assumptions
  - A write completes only when all processors have seen it
  - A processor does not reorder writes with other accesses
- Consequence
  - P writes X then writes Y ⇒ all processors that see new Y also see new X
  - Processors can reorder reads, but not writes
§5.10 Real Stuff: The AMD Opteron X4 and Intel Nehalem

Multilevel On-Chip Caches
- Per core: 32KB L1 I-cache, 32KB L1 D-cache, 512KB L2 cache
- [Die photo: Intel Nehalem 4-core processor]
2-Level TLB Organization

|                    | Intel Nehalem | AMD Opteron X4 |
|--------------------|---------------|----------------|
| Virtual addr       | 48 bits       | 48 bits        |
| Physical addr      | 44 bits       | 48 bits        |
| Page size          | 4KB, 2/4MB    | 4KB, 2/4MB     |
| L1 TLB (per core)  | L1 I-TLB: 128 entries for small pages, 7 per thread (2×) for large pages; L1 D-TLB: 64 entries for small pages, 32 for large pages; both 4-way, LRU replacement | L1 I-TLB: 48 entries; L1 D-TLB: 48 entries; both fully associative, LRU replacement |
| L2 TLB (per core)  | Single L2 TLB: 512 entries, 4-way, LRU replacement | L2 I-TLB: 512 entries; L2 D-TLB: 512 entries; both 4-way, round-robin LRU |
| TLB misses         | Handled in hardware | Handled in hardware |
3-Level Cache Organization

|                             | Intel Nehalem | AMD Opteron X4 |
|-----------------------------|---------------|----------------|
| L1 caches (per core)        | L1 I-cache: 32KB, 64-byte blocks, 4-way, approx LRU replacement, hit time n/a; L1 D-cache: 32KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a | L1 I-cache: 32KB, 64-byte blocks, 2-way, LRU replacement, hit time 3 cycles; L1 D-cache: 32KB, 64-byte blocks, 2-way, LRU replacement, write-back/allocate, hit time 9 cycles |
| L2 unified cache (per core) | 256KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a | 512KB, 64-byte blocks, 16-way, approx LRU replacement, write-back/allocate, hit time n/a |
| L3 unified cache (shared)   | 8MB, 64-byte blocks, 16-way, replacement n/a, write-back/allocate, hit time n/a | 2MB, 64-byte blocks, 32-way, replace block shared by fewest cores, write-back/allocate, hit time 32 cycles |

n/a: data not available
Miss Penalty Reduction
- Return requested word first
  - Then back-fill rest of block
- Non-blocking miss processing
  - Hit under miss: allow hits to proceed
  - Miss under miss: allow multiple outstanding misses
- Hardware prefetch: instructions and data
- Opteron X4: bank-interleaved L1 D-cache
  - Two concurrent accesses per cycle
§5.11 Fallacies and Pitfalls

Pitfalls
- Byte vs. word addressing
  - Example: 32-byte direct-mapped cache, 4-byte blocks
    - Byte 36 maps to block 1
    - Word 36 maps to block 4
- Ignoring memory system effects when writing or generating code
  - Example: iterating over rows vs. columns of arrays
  - Large strides result in poor locality (see the sketch below)
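Both pitfalls can be checked in a few lines of C: the mapping arithmetic uses the slide's 32-byte, 4-byte-block cache, and the 1024x1024 array in the traversal example is an assumed size.

```c
#include <stdio.h>

#define N 1024

static double a[N][N];  /* C stores arrays in row-major order */

int main(void) {
    /* Byte vs. word addressing: 32 bytes / 4-byte blocks = 8 blocks. */
    unsigned byte36         = 36;
    unsigned byte_of_word36 = 36 * 4;  /* word 36 is byte 144 */
    printf("byte 36 -> block %u\n", (byte36 / 4) % 8);          /* 1 */
    printf("word 36 -> block %u\n", (byte_of_word36 / 4) % 8);  /* 4 */

    /* Row-major traversal: stride 1, good spatial locality. */
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-major traversal of the same data: stride of N doubles
       per access, poor locality, typically many more cache misses. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("sum = %f\n", sum);
    return 0;
}
```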
Pitfalls
- In multiprocessor with shared L2 or L3 cache
  - Less associativity than cores results in conflict misses
  - More cores ⇒ need to increase associativity
- Using AMAT to evaluate performance of out-of-order processors
  - Ignores effect of non-blocked accesses
  - Instead, evaluate performance by simulation
Pitfalls
- Extending address range using segments
  - E.g., Intel 80286
  - But a segment is not always big enough
  - Makes address arithmetic complicated
- Implementing a VMM on an ISA not designed for virtualization
  - E.g., non-privileged instructions accessing hardware resources
  - Either extend the ISA, or require guest OS not to use problematic instructions
§5.12 Concluding Remarks

Concluding Remarks
- Fast memories are small, large memories are slow
  - We really want fast, large memories
  - Caching gives this illusion
- Principle of locality
  - Programs use a small part of their memory space frequently
- Memory hierarchy
  - L1 cache → L2 cache → … → DRAM memory → disk
- Memory system design is critical for multiprocessors
