STUDY OF THE FUNCTIONING OF
CACHE MEMORY AND ITS
LATEST DEVELOPMENTS
WHAT IS CACHE?
A cache memory is a fast and relatively
small memory that stores the most recently
used (MRU) main memory (MM, or working
memory) data.
It is simply a copy of small data segments
residing in the main memory.
It holds identical copies of main memory data.
Its function is to speed up MM
data access.
Memories that consist of circuits capable of retaining their state
as long as power is applied are known as static memories.
Less expensive RAMs can be implemented if simpler cells are
used, but such cells cannot retain their state indefinitely. Hence they
are called dynamic RAMs (DRAMs). The information in a
dynamic memory cell is stored in the form of a charge on a capacitor,
and this charge can be maintained only for tens of milliseconds.
Latency is one of the parameters indicating memory performance.
Latency: it refers to the amount of time it takes to transfer a word
of data to or from the memory.
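As a rough, hedged illustration of why this distinction matters, the sketch below compares the time to move a few words from a fast static RAM against a slower dynamic RAM; both latency figures are assumptions for illustration, not numbers from the slides:

```python
# Minimal sketch: latency as time per word transferred.
# Both latency values are assumed for illustration only.
SRAM_LATENCY_NS = 2    # assumed per-word latency of static RAM (cache)
DRAM_LATENCY_NS = 60   # assumed per-word latency of dynamic RAM (MM)

words = 8  # an 8-word transfer, for illustration
print(f"from SRAM: {words * SRAM_LATENCY_NS} ns")  # 16 ns
print(f"from DRAM: {words * DRAM_LATENCY_NS} ns")  # 480 ns
```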
Functional principles of the cache memory
 On an MM read operation, the cache controller first
checks whether the data is stored in the cache.
In case of a match (a cache hit), the data is quickly and
directly supplied from the cache to the processor,
without involving the MM.
 Otherwise (a cache miss), the data is read from the MM.
 A cache hit is a state in which data requested for
processing by a component or application is found in
the cache memory. It is a faster means of delivering
data to the processor, as the cache already contains
the requested data.
 A cache hit occurs when an application or software
requests data. The CPU first looks for the data in its
closest memory location, which is usually the primary
cache. If the requested data is found in the cache, it is
considered a cache hit.
 A cache hit serves data more quickly, as the data can be
retrieved simply by reading the cache memory.
A cache miss is a state in which the data requested for
processing by a component or application is not found in
the cache memory. It causes execution delays by
requiring the program or application to fetch the data
from other cache levels or from the main memory.
 Each cache miss slows down the overall process, because
after a cache miss the CPU searches the next, larger levels
of the hierarchy (L2, L3, and finally RAM) for that data.
Further, a new entry is created and the data is copied into
the cache before the processor can access it.
 The more cache hits, the better.
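A common way to quantify "the more cache hits, the better" is the average memory access time (AMAT): hit time plus miss rate times miss penalty. The sketch below is a minimal model; the cycle counts are assumptions for illustration, not figures from the slides:

```python
# Minimal sketch: average memory access time (AMAT) model.
# Both timing constants are assumed for illustration only.

HIT_TIME = 1        # cycles for a cache hit (assumed)
MISS_PENALTY = 100  # extra cycles to fetch from main memory (assumed)

def amat(miss_rate: float) -> float:
    """AMAT = hit time + miss rate * miss penalty."""
    return HIT_TIME + miss_rate * MISS_PENALTY

for miss_rate in (0.01, 0.05, 0.20):
    print(f"miss rate {miss_rate:4.0%} -> AMAT {amat(miss_rate):5.1f} cycles")
# A 1% miss rate averages about 2 cycles per access; at 20% it is about
# 21 cycles, even though a hit itself still costs only 1 cycle.
```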
Cache memory operation is based on two
major "principles of locality":
 Temporal locality
 Spatial locality
Temporal locality
Data that has been used recently has a high likelihood of
being used again.
A cache stores only a subset of MM data: the most recently used
(MRU) data. Data read from MM is temporarily stored in the cache.
If the processor requires the same data again, it is supplied by the cache.
Spatial locality
If a data item is referenced, it is very likely that nearby data will be
accessed soon.
Instructions and data are transferred from MM to the cache in
fixed-size blocks (cache blocks), known as cache lines. Cache line
size is typically in the range of 4 to 512 bytes.
Most programs are highly sequential: the next instruction usually
comes from the next memory location. Data is usually
structured, and the items in these structures are normally stored in
contiguous memory locations (strings, arrays, etc.).
A large line size increases the benefit from spatial locality, but it also
increases the number of invalidated data items when a line is replaced.
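A small, hypothetical sketch of how access order interacts with spatial locality; the matrix size and the two traversal orders are assumptions for illustration (in a language with flat arrays, such as C, the effect is far more pronounced than in Python, where lists store pointers):

```python
# Minimal sketch: the same sum computed in two access orders.
# Row-major traversal touches consecutive addresses (good spatial
# locality); column-major traversal jumps between rows (poor locality).
N = 1024
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Consecutive elements of a row sit next to each other, so each
    # cache line brought in is fully used before moving on.
    return sum(value for row in m for value in row)

def sum_column_major(m):
    # Successive accesses land in different rows, so a fetched cache
    # line is barely reused before it is evicted.
    return sum(m[i][j] for j in range(N) for i in range(N))

assert sum_row_major(matrix) == sum_column_major(matrix) == N * N
```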
MAPPING FUNCTIONS
1. Direct mapping
 The simplest way to determine the cache location in which to store a
memory block is the direct mapping technique.
2. Associative mapping
 In this method, a main memory block can be placed into any cache
block position.
 In the example illustrated below, 12 tag bits identify a memory block
when it is resident in the cache.
 The tag bits of an address received from the processor are compared
to the tag bits of each block of the cache to see if the desired block is
present. This is called associative mapping.
3. Set-associative mapping
 It is a combination of direct and associative mapping. The blocks of
the cache are grouped into sets, and the mapping allows a block of the
main memory to reside in any block of a specific set (a small
address-decomposition sketch follows the figures below).
[Figure: DIRECT MAPPING — a 128-block cache, each block with a tag, and a
4096-block main memory; main memory block j can be placed only in cache
block j mod 128 (blocks 0, 128, 256, ... share cache block 0, and so on).]
[Figure: ASSOCIATIVE MAPPING — any block i of the 4096-block main memory
can be placed in any of the 128 cache blocks; each cache block carries a
12-bit tag identifying which memory block it holds.]
[Figure: SET-ASSOCIATIVE MAPPING — the 128-block cache is organized as 64
two-way sets (Set 0 to Set 63); main memory block j can reside in either of
the two tagged blocks of set j mod 64.]
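As referenced above, here is a minimal sketch of how an address splits into tag, index, and word fields under these mappings. The geometry (4096 MM blocks, a 128-block cache, 64 two-way sets) follows the figures, while the 16-words-per-block size and the helper names are assumptions for illustration:

```python
# Minimal sketch: address decomposition for the cache geometry in the
# figures (word-addressed memory; 16 words per block is assumed).
WORDS_PER_BLOCK = 16
CACHE_BLOCKS = 128
SETS = 64  # 128 blocks organized as 64 two-way sets

def direct_mapped(address: int):
    """Direct mapping: MM block j goes only to cache block j mod 128."""
    block, word = divmod(address, WORDS_PER_BLOCK)
    index = block % CACHE_BLOCKS  # 7-bit cache block number
    tag = block // CACHE_BLOCKS   # remaining 5 bits (4096 / 128 = 32)
    return tag, index, word

def set_associative(address: int):
    """2-way set-associative: MM block j may use either block of set j mod 64."""
    block, word = divmod(address, WORDS_PER_BLOCK)
    set_index = block % SETS  # 6-bit set number
    tag = block // SETS       # remaining 6 bits
    return tag, set_index, word

# Fully associative mapping has no index field at all: the entire 12-bit
# block number serves as the tag and is compared against every cache
# block in parallel.
print(direct_mapped(0x3A7C))    # -> (7, 39, 12)
print(set_associative(0x3A7C))  # -> (14, 39, 12)
```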
 The cache memory stores a reasonable number of blocks at a given
time, but this number is small compared with the total number of blocks
available in main memory.
 The correspondence between main memory blocks and the blocks in
cache memory is specified by a mapping function.
 The cache control hardware decides which block should be
removed to create space for the new block that contains the
referenced word. The collection of rules for making this decision is
called the replacement algorithm.
 The cache control circuit determines whether the requested word
currently exists in the cache. If it does, the Read/Write operation
takes place on the appropriate cache location.
 In this case a Read/Write hit occurs.
REPLACEMENT POLICIES
1. First In First Out (FIFO)
Using this algorithm the cache behaves in the same way as a FIFO
queue: it evicts the block that was brought in first, without any regard
to how often or how many times it was accessed before.
2. Last In First Out (LIFO)
Using this algorithm the cache behaves in the exact opposite way to a
FIFO queue: it evicts the block brought in most recently, without any
regard to how often or how many times it was accessed before.
3. Least Recently Used (LRU)
Discards the least recently used items first. This algorithm requires
keeping track of what was used when, which is expensive if one wants
to make sure the algorithm always discards the least recently used item.
General implementations of this technique keep "age bits" for the
cache lines and track the least recently used cache line based on them.
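A minimal LRU sketch in which an ordered dictionary stands in for the hardware's age-bit bookkeeping; the capacity, class name, and interface are assumptions for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal sketch of an LRU replacement policy (illustrative only)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()  # key -> data, least recently used first

    def access(self, key, load_from_memory):
        if key in self.lines:              # cache hit
            self.lines.move_to_end(key)    # mark as most recently used
            return self.lines[key]
        data = load_from_memory(key)       # cache miss: fetch from MM
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False) # evict the least recently used line
        self.lines[key] = data
        return data

# Usage: a 2-line cache accessed in the order 0, 1, 0, 2, 1.
cache = LRUCache(2)
for block in (0, 1, 0, 2, 1):
    cache.access(block, lambda b: f"block-{b} from MM")
print(list(cache.lines))  # [2, 1]: block 0 ended up as the LRU victim
```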
THANK YOU