3-D Memory Stacking
3-D Stacked memory can provide large caches at high bandwidth
3D stacking enables a low-latency, high-bandwidth memory system
- E.g. half the latency, 8x the bandwidth [Loh & Hill, MICRO'11]
Stacked DRAM: a few hundred MB, not enough to serve as main memory
A hardware-managed cache is desirable: transparent to software
Source: Loh and Hill MICRO’11
Problems in Architecting Large Caches
Architecting a tag store with both low latency and low storage overhead is challenging
Organizing at cache-line granularity (64 B) reduces wasted space and wasted bandwidth
Problem: a cache of hundreds of MB needs a tag store of tens of MB
E.g. a 256MB DRAM cache needs ~20MB of tag store (5 bytes/line; arithmetic sketched below)
Option 1: SRAM Tags
Fast, But Impractical
(Not enough transistors)
Option 2: Tags in DRAM
Naïve design has 2x latency
(One access each for tag, data)
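As a back-of-the-envelope check of that tag-store estimate (a minimal sketch; it only assumes the 5 bytes/line figure quoted above):

```python
# Tag-store sizing for the example above
# (5 bytes of tag/valid/replacement state per line is the figure from the slide).
cache_size   = 256 * 2**20      # 256 MB DRAM cache
line_size    = 64               # bytes per cache line
tag_per_line = 5                # bytes of tag-store state per line

num_lines = cache_size // line_size        # 4M lines
tag_store = num_lines * tag_per_line       # ~20 MB of tag store
print(f"{num_lines // 2**20}M lines -> {tag_store / 2**20:.0f} MB tag store")
```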
Loh-Hill Cache Design [MICRO'11, Top Picks]
Recent work tries to reduce the latency of the Tags-in-DRAM approach
LH-Cache design similar to traditional set-associative cache
2KB row buffer = 32 cache lines
Speed up cache miss detection:
A MissMap (2MB) in L3 tracks which lines of a page are resident in the DRAM cache
[Figure: 2KB DRAM row holding the tags plus 29 data ways; MissMap held in L3]
Cache organization: 29-way set-associative, one set per 2KB DRAM row
Tag and data kept in the same DRAM row (tag-store and data-store together)
Data access is a guaranteed row-buffer hit (latency ~1.5x instead of 2x)
Cache Optimizations Considered Harmful
Need to revisit DRAM cache structure given widely different constraints
DRAM caches are slow → don't make them slower
Many "seemingly indispensable" and "well-understood" design choices degrade the performance of a DRAM cache:
• Serial tag and data access
• High associativity
• Replacement update
These optimizations are effective only under certain parameters/constraints
The parameters/constraints of a DRAM cache are quite different from those of an SRAM cache
E.g. placing one set in an entire DRAM row → row-buffer hit rate ≈ 0%
Outline
• Introduction & Background
• Insight: Optimize First for Latency
• Proposal: Alloy Cache
• Memory Access Prediction
• Summary
Simple Example: Fast Cache (Typical)
Optimizing for hit-rate (at the expense of hit latency) is effective
Consider a system with a cache: hit latency 0.1, miss latency 1
Base hit rate: 50% (base average latency: 0.55)
Opt-A removes 40% of misses (hit-rate: 70%) but increases hit latency by 40%
[Chart: average latency of Base Cache vs. Opt-A; break-even hit-rate = 52%, Opt-A hit-rate = 70%]
Simple Example: Slow Cache (DRAM)
Consider a system with a cache: hit latency 0.5, miss latency 1
Base hit rate: 50% (base average latency: 0.75)
Opt-A removes 40% of misses (hit-rate: 70%) but increases hit latency by 40%
[Chart: average latency of Base Cache vs. Opt-A; break-even hit-rate = 83%, Opt-A hit-rate = 70%]
Optimizations that increase hit latency start becoming ineffective
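The average-latency and break-even numbers in both examples can be reproduced with a few lines of arithmetic; a minimal sketch (the hit/miss latencies are the normalized values from the slides, and the helper names are illustrative):

```python
def avg_latency(hit_rate, hit_lat, miss_lat):
    """Average access latency of a cache with the given hit rate."""
    return hit_rate * hit_lat + (1 - hit_rate) * miss_lat

def break_even_hit_rate(base_avg, new_hit_lat, miss_lat):
    """Hit rate at which a cache with the new (slower) hit latency
    matches the base average latency."""
    return (miss_lat - base_avg) / (miss_lat - new_hit_lat)

for name, hit_lat in [("Fast cache (SRAM-like)", 0.1), ("Slow cache (DRAM)", 0.5)]:
    base = avg_latency(0.5, hit_lat, 1.0)        # base: 50% hit rate
    opt_hit_lat = 1.4 * hit_lat                  # Opt-A: +40% hit latency
    opt = avg_latency(0.7, opt_hit_lat, 1.0)     # Opt-A: 70% hit rate
    be = break_even_hit_rate(base, opt_hit_lat, 1.0)
    print(f"{name}: base={base:.2f}, Opt-A={opt:.2f}, break-even hit-rate={be:.0%}")
```

For the DRAM-like cache, Opt-A's average latency (0.79) is worse than the base (0.75), which is why the break-even hit-rate jumps to 83%.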
Overview of Different Designs
Our Goal: Outperform SRAM-Tags with a simple and practical design
For DRAM caches, critical to optimize first for latency, then hit-rate
What is the Hit Latency Impact?
Both SRAM-Tag and LH-Cache have much higher latency → ineffective
Consider isolated accesses: X always gives a row-buffer hit, Y needs a row activation
How about Bandwidth?
LH-Cache reduces effective DRAM cache bandwidth by > 4x
Configuration     | Raw Bandwidth | Transfer Size on Hit | Effective Bandwidth
Main Memory       | 1x            | 64B                  | 1x
DRAM$ (SRAM-Tag)  | 8x            | 64B                  | 8x
DRAM$ (LH-Cache)  | 8x            | 256B + 16B           | 1.8x
DRAM$ (IDEAL)     | 8x            | 64B                  | 8x
For each hit, LH-Cache transfers:
• 3 lines of tags (3x64=192 bytes)
• 1 line for data (64 bytes)
• Replacement update (16 bytes)
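The 1.8x entry for LH-Cache follows from the extra bytes moved per hit. A minimal sketch of that arithmetic, assuming effective bandwidth scales as raw bandwidth times the useful fraction of each transfer:

```python
def effective_bw(raw_bw, useful_bytes, transferred_bytes):
    # Fraction of the raw channel bandwidth that delivers useful data.
    return raw_bw * useful_bytes / transferred_bytes

# LH-Cache hit: 3 tag lines (192 B) + 1 data line (64 B) + 16 B replacement update
lh = effective_bw(raw_bw=8, useful_bytes=64, transferred_bytes=192 + 64 + 16)
# SRAM-Tag / IDEAL hit: only the 64 B data line crosses the DRAM-cache channel
ideal = effective_bw(raw_bw=8, useful_bytes=64, transferred_bytes=64)
print(f"LH-Cache: {lh:.1f}x   IDEAL: {ideal:.0f}x")  # ~1.9x vs 8x (slide quotes 1.8x)
```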
Performance Potential
LH-Cache gives 8.7%, SRAM-Tag 24%, latency-optimized design 38%
8-core system with 8MB shared L3 cache at 24 cycles
DRAM Cache: 256MB (shared), latency half that of off-chip memory
[Chart: Speedup over no DRAM$ for LH-Cache, SRAM-Tag, and IDEAL-Latency-Optimized]
De-optimizing for Performance
More benefits from optimizing for hit-latency than for hit-rate
LH-Cache uses LRU/DIP → needs a replacement update, uses bandwidth
LH-Cache can be configured as direct-mapped → more row-buffer hits
Configuration            | Speedup | Hit-Rate | Hit-Latency (cycles)
LH-Cache                 | 8.7%    | 55.2%    | 107
LH-Cache + Random Repl.  | 10.2%   | 51.5%    | 98
LH-Cache (Direct Map)    | 15.2%   | 49.0%    | 82
IDEAL-LO (Direct Map)    | 38.4%   | 48.2%    | 35
Outline
• Introduction & Background
• Insight: Optimize First for Latency
• Proposal: Alloy Cache
• Memory Access Prediction
• Summary
Alloy Cache: Avoid Tag Serialization
Alloy Cache has low latency and uses less bandwidth
No dependent access for tag and data → avoids tag serialization
Consecutive lines in the same DRAM row → high row-buffer hit-rate
No separate "Tag-store" and "Data-store" → tag and data alloyed into one "Tag+Data" unit
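A minimal sketch of an Alloy-Cache-style access: direct-mapped, with each tag stored next to its data line so a single burst returns both. The 8-byte tag width and the structure names are assumptions for illustration, not figures from the talk:

```python
# Hedged sketch of an Alloy-Cache-style access (direct-mapped, tag alloyed with
# data). `dram_cache` is assumed to be indexable by set, returning the one
# (tag, data) unit stored in that set.
LINE_SIZE = 64                                       # bytes of data per line
TAG_SIZE  = 8                                        # assumed bytes of tag/metadata
NUM_SETS  = (256 * 2**20) // (LINE_SIZE + TAG_SIZE)  # ~3.7M sets in a 256MB cache

def alloy_access(dram_cache, line_addr):
    """One DRAM burst streams out the single Tag+Data unit for this set;
    there is no separate tag probe, hence no tag serialization."""
    set_index = line_addr % NUM_SETS
    tag, data = dram_cache[set_index]     # tag and data arrive together
    if tag == line_addr // NUM_SETS:
        return data                       # hit: data already in hand
    return None                           # miss: fetch from off-chip memory
```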
Performance of Alloy Cache
Alloy Cache with good predictor can outperform SRAM-Tag
[Chart: Speedup over no DRAM$ for Alloy Cache, Alloy+MissMap, Alloy+PerfectPred, and SRAM-Tag]
Alloy Cache with no early-miss detection gets 22%, close to SRAM-Tag
Outline
• Introduction & Background
• Insight: Optimize First for Latency
• Proposal: Alloy Cache
• Memory Access Prediction
• Summary
Cache Access Models
Each model has a distinct advantage: lower latency or lower BW usage
Serial Access Model (SAM) and Parallel Access Model (PAM)
SAM: higher miss latency, needs less BW
PAM: lower miss latency, needs more BW
To Wait or Not to Wait?
Using the Dynamic Access Model (DAM), we can get the best of both latency and BW
Dynamic Access Model: best of both SAM and PAM
When a line is likely to be present in the cache, use SAM; else use PAM
[Flow: L3-miss address → Memory Access Predictor (MAP); prediction = cache hit → use SAM; prediction = memory access → use PAM]
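The DAM decision on an L3 miss can be sketched as a simple dispatch on the predictor's output; the object names and methods below are illustrative, not from the talk:

```python
def handle_l3_miss(addr, map_predictor, dram_cache, memory):
    """Dynamic Access Model: consult the MAP, then pick SAM or PAM."""
    if map_predictor.predict_hit(addr):
        # SAM: probe the DRAM cache first; go to memory only on a real miss.
        data = dram_cache.access(addr)
        return data if data is not None else memory.access(addr)
    else:
        # PAM: start the memory access in parallel with the cache probe, so a
        # predicted miss does not pay the serialized cache-probe latency.
        mem_req = memory.access_async(addr)
        data = dram_cache.access(addr)
        return data if data is not None else mem_req.wait()
```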
Memory Access Predictor (MAP)
Proposed MAP designs are simple and low latency
We can use hit rate as a proxy: high hit-rate → use SAM, low hit-rate → use PAM
Accuracy is improved with history-based prediction
1. History-Based Global MAP (MAP-G)
• Single saturating counter per core (3-bit)
• Increment on cache hit, decrement on miss
• MSB indicates SAM or PAM
[Figure: the miss-causing PC indexes a table of 3-bit counters]
2. Instruction Based MAP (MAP-PC)
• A table of saturating counters
• Index the table by the miss-causing PC
• A table of 256 entries is sufficient (96 bytes)
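A minimal sketch of the two predictors, assuming 3-bit saturating counters whose upper half ("MSB set") means "predict cache hit, use SAM"; the table size and counter width come from the slide, the rest is illustrative:

```python
class MapPC:
    """Instruction-based Memory Access Predictor (MAP-PC) sketch."""
    def __init__(self, entries=256, bits=3):
        self.max = (1 << bits) - 1                   # 3-bit saturating counters
        self.table = [self.max // 2 + 1] * entries   # start weakly "predict hit"
        self.entries = entries

    def _index(self, miss_pc):
        return miss_pc % self.entries                # index by miss-causing PC

    def predict_hit(self, miss_pc):
        # MSB set -> predict DRAM-cache hit -> use SAM; otherwise use PAM.
        return self.table[self._index(miss_pc)] >= (self.max + 1) // 2

    def update(self, miss_pc, was_hit):
        i = self._index(miss_pc)
        if was_hit:
            self.table[i] = min(self.table[i] + 1, self.max)
        else:
            self.table[i] = max(self.table[i] - 1, 0)

# MAP-G (global) is the same structure with a single counter per core:
map_g = MapPC(entries=1)
```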
Predictor Performance
Simple Memory Access Predictors obtain almost all potential gains
[Chart: Speedup over no DRAM$ for Alloy+NoPred, Alloy+MAP-Global, Alloy+MAP-PC, and Alloy+PerfectMAP]
Accuracy of MAP-Global: 82% Accuracy of MAP-PC: 94%
Alloy Cache with MAP-PC gets 35%, Perfect MAP gets 36.5%
Hit-Latency versus Hit-Rate
DRAM Cache Hit Latency:
Latency                  | LH-Cache | SRAM-Tag | Alloy Cache
Average latency (cycles) | 107      | 67       | 43
Relative latency         | 2.5x     | 1.5x     | 1.0x

DRAM Cache Hit Rate:
Cache Size | LH-Cache (29-way) | Alloy Cache (1-way) | Hit-Rate Delta
256MB      | 55.2%             | 48.2%               | 7.0%
512MB      | 59.6%             | 55.2%               | 4.4%
1GB        | 62.6%             | 59.1%               | 2.5%

Alloy Cache reduces hit latency greatly at a small loss of hit-rate
Outline
• Introduction & Background
• Insight: Optimize First for Latency
• Proposal: Alloy Cache
• Memory Access Prediction
• Summary
Summary
• DRAM caches are slow, don't make them slower
• Previous research: DRAM cache architected similarly to an SRAM cache
• Insight: optimize the DRAM cache first for latency, then hit-rate
• The latency-optimized Alloy Cache avoids tag serialization
• Memory Access Predictor: simple, low latency, yet highly effective
• Alloy Cache + MAP outperforms SRAM-Tags (35% vs. 24%)
• Calls for new ways to manage DRAM cache space and bandwidth
Questions
Acknowledgement:
Work on “Memory Access Prediction” done while at IBM Research.
(Patent application filed Feb 2010, published Aug 2011)
Potential for Improvement
Design                          | Performance Improvement
Alloy Cache + MAP-PC            | 35.0%
Alloy Cache + Perfect Predictor | 36.6%
IDEAL-LO Cache                  | 38.4%
IDEAL-LO + No Tag Overhead      | 41.0%
Size Analysis
Simple Latency-Optimized design outperforms Impractical SRAM-Tags!
[Chart: Speedup over no DRAM$ vs. cache size (64MB, 128MB, 256MB, 512MB, 1GB) for SRAM-Tags, Alloy Cache + MAP-PC, and LH-Cache + MissMap]
Proposed design provides 1.5x the benefit of SRAM-Tags
(LH-Cache provides about one-third the benefit)
How about Commercial Workloads?
Cache Size | Hit-Rate (1-way) | Hit-Rate (32-way) | Hit-Rate Delta
256MB      | 53.0%            | 60.3%             | 7.3%
512MB      | 58.6%            | 63.6%             | 5.0%
1GB        | 62.1%            | 65.1%             | 3.0%
Data averaged over 7 commercial workloads
Prediction Accuracy of MAP
[Chart: prediction accuracy of MAP-PC]
What about other SPEC benchmarks?
http://research.cs.wisc.edu/multifacet/papers/micro11_missmap_addendum.pdf
LH-Cache Addendum: Revised Results
SAM vs. PAM