Directory Protocols
• Topics: directory-based cache coherence implementations
Flat Memory-Based Directories
[Figure: the directory is kept with main memory at the home node; a block may be cached in Cache 1, Cache 2, …, Cache 64]
Block size = 128 B
Memory in each node = 1 GB
Cache in each node = 1 MB
For 64 nodes and a 64-bit directory entry per memory block,
Directory size = 4 GB
For 64 nodes and a 12-bit directory entry per memory block,
Directory size = 0.75 GB
Flat Memory-Based Directories
[Figure: the directory is kept with the L2 cache at the home node; a block may be cached in L1 Cache 1, L1 Cache 2, …, L1 Cache 64]
Block size = 64 B
L2 cache in each node = 1 MB
L1 Cache in each node = 64 KB
For 64 nodes and a 64-bit directory entry per L2 block,
Directory size = 8 MB
For 64 nodes and a 12-bit directory entry per L2 block,
Directory size = 1.5 MB
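As a quick sanity check of the arithmetic on this slide and the previous one, a small Python sketch (the parameters are the ones stated above; the helper name is ours):

def memory_based_dir_size(nodes, mem_per_node, block_size, dir_bits):
    # One dir_bits-wide directory entry per block of the tracked storage, at every node.
    blocks_per_node = mem_per_node // block_size
    return nodes * blocks_per_node * dir_bits / 8      # bytes

GB, MB = 2**30, 2**20
print(memory_based_dir_size(64, 1 * GB, 128, 64) / GB)   # 4.0  (GB)
print(memory_based_dir_size(64, 1 * GB, 128, 12) / GB)   # 0.75 (GB)
print(memory_based_dir_size(64, 1 * MB, 64, 64) / MB)    # 8.0  (MB)
print(memory_based_dir_size(64, 1 * MB, 64, 12) / MB)    # 1.5  (MB)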
Flat Cache-Based Directories
[Figure: main memory at the home node stores only a head pointer; the sharers (e.g., Cache 7 → Cache 3 → Cache 26) form a linked list]
Block size = 128 B
Memory in each node = 1 GB
Cache in each node = 1 MB
6-bit (head pointer) storage in DRAM for each memory block;
DRAM overhead = 0.375 GB
12-bit (next + previous pointer) storage in SRAM for each cache block;
SRAM overhead = 0.75 MB
Flat Cache-Based Directories
[Figure: the L2 cache at the home node stores only a head pointer; the L1 sharers (e.g., Cache 7 → Cache 3 → Cache 26) form a linked list]
Block size = 64 B
L2 cache in each node = 1 MB
L1 cache in each node = 64 KB
6-bit (head pointer) storage in L2 for each L2 block;
L2 overhead = 0.75 MB
12-bit (next + previous pointer) storage in L1 for each L1 block;
L1 overhead = 96 KB
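The overheads on this slide and the previous one can be checked the same way, assuming the 6-bit field is a head pointer at the home and the 12 bits per cached block are the next and previous sharer pointers (a sketch; the helper name is ours):

def cache_based_dir_overhead(nodes, home_capacity, cache_capacity, block_size):
    # 6-bit head pointer per block at the home, 12 bits (next + prev) per cached block.
    home_blocks = home_capacity // block_size
    cache_blocks = cache_capacity // block_size
    return nodes * home_blocks * 6 / 8, nodes * cache_blocks * 12 / 8   # bytes

GB, MB, KB = 2**30, 2**20, 2**10
dram, sram = cache_based_dir_overhead(64, 1 * GB, 1 * MB, 128)
print(dram / GB, sram / MB)      # 0.375 (GB in DRAM), 0.75 (MB in SRAM)
l2, l1 = cache_based_dir_overhead(64, 1 * MB, 64 * KB, 64)
print(l2 / MB, l1 / KB)          # 0.75 (MB in L2), 96.0 (KB in L1)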
Flat Cache-Based Directories
• The directory at the memory home node only stores a
pointer to the first cached copy – the caches store
pointers to the next and previous sharers (a doubly linked
list)
• Advantages: potentially lower storage, and no bottleneck for
network traffic at the home node
• Disadvantages: invalidations are now serialized (it takes longer to
acquire exclusive access), replacements must update the linked list,
and race conditions while updating the list must be handled
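A minimal Python sketch of the doubly linked sharer list (class and field names are invented for illustration; real hardware keeps these pointers in the cache and memory controllers):

class SharerEntry:
    # Per-cached-block pointer fields in a cache-based (linked-list) directory.
    def __init__(self, node_id):
        self.node_id = node_id
        self.next = None      # the "next sharer" pointer (6 bits in the example above)
        self.prev = None      # the "previous sharer" pointer

class HomeEntry:
    # The home stores only a pointer to the first cached copy.
    def __init__(self):
        self.head = None

    def add_sharer(self, entry):
        # New readers are inserted at the head, so a read only involves the home.
        entry.next, entry.prev = self.head, None
        if self.head is not None:
            self.head.prev = entry
        self.head = entry

    def remove_sharer(self, entry):
        # A replacement must splice itself out of the list, which is why replacements
        # (and races with concurrent list updates) complicate this scheme.
        if entry.prev is not None:
            entry.prev.next = entry.next
        else:
            self.head = entry.next
        if entry.next is not None:
            entry.next.prev = entry.prev

    def invalidate_all(self):
        # Invalidations walk the list one sharer at a time, i.e. they are serialized.
        entry = self.head
        while entry is not None:
            nxt = entry.next
            entry.next = entry.prev = None   # (an invalidate message would be sent here)
            entry = nxt
        self.head = None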
Serializing Writes for Coherence
• Potential problem: writes to the same location may be re-ordered by the
network; the general solution is to not start the next write until
the previous one has completed
• Strategies for buffering writes:
 buffer at home: requires more storage at home node
 buffer at requestors: the request is forwarded to the
previous requestor and a linked list is formed
 NACK and retry: the home node NACKs all requests
until the outstanding request has completed
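As an illustration of the last strategy (NACK and retry), a minimal sketch; send_and_wait, backoff, and start_write are invented placeholders:

# Requestor side: keep reissuing the write until the home accepts it.
def issue_write(home, block_addr):
    while True:
        reply = send_and_wait(home, "read_exclusive", block_addr)
        if reply != "NACK":
            return reply                 # data, invalidation count, etc.
        backoff()                        # the home is still busy with the previous write

# Home side: while a write to this block is outstanding, NACK newcomers.
def home_accept_write(entry, requestor):
    if entry.busy:
        return "NACK"
    entry.busy = True                    # cleared only when the current write completes
    return start_write(entry, requestor)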
SGI Origin 2000
• Flat memory-based directory protocol
• Uses a bit vector directory representation
• Two processors per node, but there is no snooping
protocol within a node – combining multiple processors
in a node reduces cost
[Figure: one Origin node = two processors (P), each with its own L2 cache, connected through the communication assist (CA) to the node's memory/directory (M/D) and to the interconnect]
Protocol States
• The directory entry for each memory block can be in one of seven states
• Three stable states: unowned, shared, exclusive (either
dirty or clean)
• Three busy states indicate that the home has not
completed the previous request for that block
(read, read-excl or upgrade, uncached read)
• Poison state – used for lazy TLB shootdown
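The seven states map naturally onto an enumeration; a minimal sketch (the names follow the slide's terminology, not the Origin's actual encodings):

from enum import Enum, auto

class DirState(Enum):
    UNOWNED = auto()              # stable: no cached copies
    SHARED = auto()               # stable: one or more clean cached copies
    EXCLUSIVE = auto()            # stable: a single owner, dirty or clean
    BUSY_READ = auto()            # home has not finished a previous read
    BUSY_READ_EXCL = auto()       # ... a previous read-exclusive or upgrade
    BUSY_UNCACHED_READ = auto()   # ... a previous uncached read
    POISON = auto()               # used for lazy TLB shootdown during page migration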
Handling Reads
• When the home receives a read request, it looks up
memory (speculative read) and directory in parallel
• Actions taken for each directory state (a combined handler sketch follows the next slide):
 shared or unowned: memory copy is clean, data
is returned to requestor, state is changed to excl if
there are no other sharers
 busy: a NACK is sent to the requestor
 exclusive: the home is not the owner, the request is forwarded
to the owner, and the owner sends data to the requestor and the home
Inner Details of Handling the Read
• The block is in exclusive state – memory may or may not
have a clean copy – it is speculatively read anyway
• The directory state is set to busy-exclusive and the
presence vector is updated
• In addition to forwarding the request to the owner, the memory
copy is speculatively forwarded to the requestor
 Case 1: excl-dirty: the owner sends the block to the requestor
and the home; the speculatively sent data is overwritten
 Case 2: excl-clean: owner sends an ack (without data)
to requestor and home, requestor waits for this ack
before it moves on with speculatively sent data
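Pulling the last two slides together, a minimal sketch of the home node's read handling, reusing the DirState enum above; DirEntry, send, and the message names are invented for illustration and are not the Origin hub's actual interface:

from dataclasses import dataclass, field

@dataclass
class DirEntry:
    state: DirState = DirState.UNOWNED
    sharers: set = field(default_factory=set)
    owner: int | None = None
    pending_requestor: int | None = None

def send(dest, msg_type, **payload):
    # Placeholder for an interconnect message.
    print(f"to node {dest}: {msg_type} {payload}")

def home_handle_read(entry, requestor, mem_data):
    # mem_data is the speculative memory read done in parallel with the directory lookup.
    if entry.state in (DirState.UNOWNED, DirState.SHARED):
        send(requestor, "reply_data", data=mem_data)          # memory copy is clean
        if entry.state == DirState.UNOWNED:
            entry.state, entry.owner = DirState.EXCLUSIVE, requestor   # sole (clean) copy
        entry.sharers.add(requestor)
    elif entry.state == DirState.EXCLUSIVE:
        entry.state = DirState.BUSY_READ                      # busy until the owner responds
        entry.pending_requestor = requestor
        send(requestor, "speculative_reply", data=mem_data)   # may be stale if owner is dirty
        send(entry.owner, "intervention_read", requestor=requestor)
    else:                                                     # one of the busy states
        send(requestor, "NACK")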
Inner Details II
• Why did we send the block speculatively to the requestor
if it does not save traffic or latency?
 the R10K cache controller is programmed not to
respond with data if it has a block in excl-clean state
 when an excl-clean block is replaced from the cache,
the directory need not be updated – hence, directory
cannot rely on the owner to provide data and
speculatively provides data on its own
Handling Write Requests
• The home node must invalidate all sharers, and all
invalidations must be acked (to the requestor); the
requestor is informed of the number of invalidates to expect
• Actions taken for each state (a combined write-handler sketch follows the next slide):
 shared: invalidates are sent, the state is changed to
excl, and the data and the number of sharers are sent to the requestor;
the requestor cannot continue until it receives all acks
(Note: the directory does not maintain a busy state, so
subsequent requests will be forwarded to the new owner
and they must be buffered until the previous write
has completed)
Handling Writes II
• Actions taken for each state:
 unowned: if the request was an upgrade and not a
read-exclusive, is there a problem?
 exclusive: is there a problem if the request was an
upgrade? In case of a read-exclusive: directory is
set to busy, speculative reply is sent to requestor,
invalidate is sent to owner, owner sends data to
requestor (if dirty), and a “transfer of ownership”
message (no data) to home to change out of busy
 busy: the request is NACKed and the requestor
must try again
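A matching sketch for the write path (read-exclusive/upgrade), again covering this slide and the previous one and reusing DirState, DirEntry, and send from the earlier sketches; the message names are invented:

def home_handle_write(entry, requestor, mem_data, is_upgrade=False):
    if entry.state == DirState.SHARED:
        others = entry.sharers - {requestor}
        # Data (not needed for an upgrade) and the invalidate count go to the requestor.
        send(requestor, "reply_data",
             data=None if is_upgrade else mem_data, num_invalidates=len(others))
        for s in others:
            send(s, "invalidate", ack_to=requestor)        # acks go straight to the requestor
        entry.sharers, entry.owner = {requestor}, requestor
        entry.state = DirState.EXCLUSIVE                   # note: no busy state on this path
    elif entry.state == DirState.UNOWNED:
        # An upgrade that finds the block unowned is stale (the requestor's copy was
        # invalidated in the meantime); one option is to NACK it so the requestor
        # reissues a read-exclusive. A read-exclusive is handled directly:
        send(requestor, "reply_data", data=mem_data, num_invalidates=0)
        entry.sharers, entry.owner = {requestor}, requestor
        entry.state = DirState.EXCLUSIVE
    elif entry.state == DirState.EXCLUSIVE:
        entry.state = DirState.BUSY_READ_EXCL              # cleared by the owner's no-data
        entry.pending_requestor = requestor                # "transfer of ownership" message
        send(requestor, "speculative_reply", data=mem_data, num_invalidates=1)
        send(entry.owner, "invalidate_and_forward", requestor=requestor)
    else:                                                  # busy: the requestor must retry
        send(requestor, "NACK")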
Handling Write-Back
• When a dirty block is replaced, a writeback is generated
and the home sends back an ack
• Can the directory state be shared when a writeback is
received by the directory?
• Actions taken for each directory state:
 exclusive: change directory state to unowned and
send an ack
 busy: a request and the writeback have crossed
paths: the writeback changes the directory state to
shared or excl (depending on the busy state),
memory is updated, and the home sends the data to the
requestor; the intervention request is dropped (a sketch follows)
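A sketch of the writeback handling described above, reusing the earlier definitions (memory is modeled as a plain dict here; the message names are invented):

def home_handle_writeback(entry, addr, writer, data, memory):
    if entry.state == DirState.EXCLUSIVE:
        memory[addr] = data
        entry.owner, entry.sharers = None, set()
        entry.state = DirState.UNOWNED
        send(writer, "writeback_ack")
    elif entry.state in (DirState.BUSY_READ, DirState.BUSY_READ_EXCL):
        # A forwarded request and this writeback crossed in the network: the writeback
        # data satisfies the waiting requestor, and the intervention that was sent to
        # the writer is simply dropped when it arrives.
        memory[addr] = data
        send(entry.pending_requestor, "reply_data", data=data)
        if entry.state == DirState.BUSY_READ:
            entry.state = DirState.SHARED
            entry.sharers = {entry.pending_requestor}
        else:
            entry.state = DirState.EXCLUSIVE
            entry.owner = entry.pending_requestor
        send(writer, "writeback_ack")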
Serialization
• Note that the directory serializes writes to a location, but
does not know when a write/read has completed at any
processor
• For example, a read reply may be floating in the network
and may reach the requestor much later – in the meantime,
the directory has already issued a number of invalidates, and
the invalidate's effect is overwritten when the read reply finally
shows up – hence, each node must buffer such requests
until its outstanding requests have completed
Serialization - II
• Assume that a dirty block is being passed from P1 to
another writer P2; the “ownership transfer” message from
P1 to the home takes a long time, P2 receives its data and
carries on, and P2 then does a writeback – the protocol must be
designed to handle this case correctly
 If the writeback is from the node that placed the directory
in the busy state, the writeback is NACKed
 (If, instead, the writeback were allowed to proceed, then at
some later point, if the directory is expecting an
“ownership transfer”, it may mis-interpret the “floating”
message)
Directory Structure
• The system supports either a 16-bit or 64-bit directory
(fixed cost)
• For small systems, the directory works as a full bit
vector representation
• For larger systems, a coarse vector is employed – each
bit represents p/64 nodes
• State is maintained for each node, not each processor –
the communication assist broadcasts requests to both
processors
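A sketch of the coarse-vector mapping (the function name is ours; the 64-bit vector is assumed):

def coarse_vector_bit(node_id, num_nodes, vector_bits=64):
    # With more than vector_bits nodes, each presence bit stands for a
    # group of num_nodes / vector_bits consecutive nodes.
    nodes_per_bit = max(1, num_nodes // vector_bits)
    return node_id // nodes_per_bit

# e.g. with 256 nodes, nodes 0-3 share bit 0, nodes 4-7 share bit 1, and so on;
# on a write, invalidates go to every node of each group whose bit is set, so the
# fixed directory cost is traded for some extra invalidation traffic.
print(coarse_vector_bit(200, 256))   # 50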
Page Migration
• Each page in memory has an array of per-node miss counters to detect
whether the page incurs more misses from a node other than its home
• When a page is moved to a different physical memory
location, the virtual address remains the same, but the
page table and TLBs must be updated
• To reduce the cost of TLB shootdown, the old page sets
its directory state to poisoned – if a process tries to access
this page, the OS intervenes and updates the translation
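A rough sketch of the migration decision and the poison-based lazy shootdown; every field and helper name here (miss_count, copy_page, poison_directory_state, update_translations) is invented for illustration:

def on_remote_miss(page, node, threshold=64):
    page.miss_count[node] += 1
    if node == page.home:
        return
    if page.miss_count[node] > page.miss_count[page.home] + threshold:
        old_frame = page.frame
        page.frame = copy_page(page, to_node=node)     # physical location changes,
        page.home = node                               # virtual address stays the same
        # Rather than shooting down every TLB immediately, poison the old frame's
        # directory state; a later access to a poisoned block traps to the OS,
        # which updates that processor's stale translation on demand.
        poison_directory_state(old_frame)
        update_translations(page.virtual_address, page.frame)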