MODULE 4
MEMORY SYSTEM ORGANIZATION AND
ARCHITECTURE
• Memory is one of the important subsystems in a Computer. It is a volatile storage
system that stores Instructions and Data. Unless the program gets loaded in memory
in executable form, the CPU cannot execute it.
• A memory unit is a collection of storage cells and associated circuits needed to
transfer information in and out of storage.
• The memory stores binary information in groups of bits called words. A group of eight bits is called a byte.
[Block diagram: a memory unit of 2^k words with n bits per word, k address lines, n data input lines, n data output lines, and Read and Write control inputs.]
MEMORY SYSTEM HIERARCHY
1. Registers
These are small, high-speed memory units located in the CPU. They are used to store data and
instructions. Registers have the fastest access time and the smallest storage capacity.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used
data and instructions that have been recently accessed from the main memory. Cache memory is
designed to minimize the time it takes to access data by providing the CPU with quick access to
frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of a
computer system. It has a larger storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently in use by the CPU.
Types of Main Memory
•Static RAM: Static RAM stores the binary information in flip flops and information remains
valid until power is supplied. It has a faster access time and is used in implementing cache
memory.
•Dynamic RAM: It stores the binary information as a charge on the capacitor. It requires
refreshing circuitry to maintain the charge on the capacitors after a few milliseconds. It contains
more memory cells per unit area as compared to SRAM.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile
memory unit that has a larger storage capacity than main memory. It is used to store data and
instructions that are not currently in use by the CPU. Secondary storage has the slowest access
time and is typically the least expensive type of memory in the memory hierarchy.
5. Magnetic Disk
Magnetic Disks are simply circular plates that are fabricated with either a metal or a
plastic or a magnetized material. The Magnetic disks work at a high speed inside the
computer and these are frequently used.
6. Magnetic Tape
Magnetic Tape is simply a magnetic recording device that is covered with a plastic film.
It is generally used for the backup of data. In the case of a magnetic tape, the access time
for a computer is a little slower and therefore, it requires some amount of time for
accessing the strip.
CHARACTERISTICS OF MEMORY HIERARCHY
•Capacity: It is the global volume of information the memory can store. As we move from
top to bottom in the Hierarchy, the capacity increases.
•Access Time: It is the time interval between the read/write request and the availability of the
data. As we move from top to bottom in the Hierarchy, the access time increases.
•Performance: The speed gap between the CPU registers and main memory has widened because of the large difference in access time. This lowers system performance, so enhancement was required. One of the most significant ways to increase system performance is to minimize how far down the memory hierarchy one has to go to manipulate data.
•Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases i.e.
Internal Memory is costlier than External Memory.
MEMORY CAPACITY
• Number of bytes that can be stored
MEMORY CHARACTERISTICS
• Location
– CPU
– Internal (main)
– External (secondary)
• Capacity
– Word size (in Bytes)
– Number of words (Blocks)
• Unit of transfer
– Word
– Block
• Access methods
– Sequential access
– Direct access
– Random access
– Associative access
• Performance
–Access time
–Cycle time
–Transfer rate
• Physical Type
– Semiconductor
– Magnetic surface
– Optical
• Physical Characteristics
– Volatile / Non-Volatile
– Erasable / Non-erasable
MEMORY CHARACTERISTICS
1.Location of memories
• CPU
• Registers – used by CPU as its local memory
• Internal memory
• Main memory
• Cache memory
• External memory
• Peripheral devices – disk, tape – accessible to CPU
via I/O controllers
MEMORY CHARACTERISTICS
2. Capacity & 3. Unit of Transfer
• Word (Internal Memory)
The capacity of the internal memory is typically expressed in terms of
bytes or words
• Word length
For the internal memory, the unit of transfer is equal to the number of data lines into and out of the main memory module (the word length). Common word lengths are 8, 16 and 32 bits.
Total memory = number of words × word length (a quick numeric check follows after this list)
• Block (External Memory)
External memory capacity is expressed in terms of blocks. For external memory, data are often transferred in units much larger than a word, and these units are referred to as blocks.
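As a quick check of the capacity formula above, here is a minimal Python sketch; the 64K-word, 16-bit figures are illustrative assumptions, not values from the slides.

```python
# A quick numeric check of: total memory = number of words x word length.
# The 64K-word, 16-bit figures are illustrative assumptions, not slide values.
num_words = 64 * 1024        # 64K addressable words
word_length_bits = 16        # 16 bits per word

total_bits = num_words * word_length_bits
print(total_bits, "bits =", total_bits // 8 // 1024, "KB")   # 1048576 bits = 128 KB
```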
MEMORY CHARACTERISTICS
3. Access Methods
Sequential Access
• Memory is organized into units of data called records; access must be made in a specific linear sequence.
• The memory is accessed in a predetermined sequence.
• A shared read/write head is used, and it must be moved from its current location to the desired location, passing and rejecting each intermediate record.
• The time to access data depends on the location of the data, so the time to access an arbitrary record is highly variable.
MEMORY CHARACTERISTICS
3. Access Methods
Random Access
• In the random access method, data at any location of the memory can be accessed directly.
• Access to any location is not related to its physical position and is independent of other locations.
• There is a separate access mechanism for each location.
• Storage locations can be accessed in any order; main memory systems are random access.
• Example: semiconductor memories such as RAM and ROM use the random access method.
MEMORY CHARACTERISTICS
3. Access Methods
Direct Access
• The direct access method can be seen as a combination of the sequential and random access methods; it is also referred to as semi-random access.
• Magnetic hard disks contain many rotating storage tracks. Each track has its own read/write head, and the tracks can be accessed randomly, but access within each track is sequential.
• Access is accomplished by direct access to reach a general vicinity plus sequential searching, counting or waiting to reach the final location.
• Example: memory devices such as magnetic hard disks.
[Figure: random access vs. sequential access]
MEMORY CHARACTERISTICS
3. Access Methods
Associative Access
• A word is retrieved based on a portion of its contents rather than its address.
• This enables a comparison of desired bit locations within a word for a specific match.
• Each location has its own addressing mechanism.
• Retrieval time is constant: access time is independent of location and of prior access patterns.
• Example: cache memories.
MEMORY CHARACTERISTICS
4. Performance
Access time
• The time required to read/write data from/into the desired record.
• It depends on the amount of data to be read or written; if the amount of data is uniform for all records, the access time is the same for all records.
• It is the time from the instant an address is presented to the memory to the instant the data have been stored or made available for use.
Memory cycle time
• Access time + the time required before a second access can commence.
• For the random access method, the cycle time is the same for all records; for sequential and direct access, it differs between records.
Transfer rate / Throughput
• The rate at which data can be transferred into or out of a memory unit.
• Random access memory: transfer rate = 1 / cycle time.
• Non-random access memory:
Tn = Ta + (N / R), where
Tn – average time to read or write N bits
Ta – average access time
N – number of bits
R – transfer rate in bits per second
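A minimal Python sketch of the non-random-access formula Tn = Ta + N/R; the access time, transfer rate and block size used below are hypothetical values chosen only to show the units working out.

```python
# Illustrative use of Tn = Ta + N / R for a non-random-access device.
# Ta, R and N below are hypothetical values chosen only to show the formula.
Ta = 5e-3          # average access time: 5 ms
R = 80e6 * 8       # transfer rate: 80 MB/s expressed in bits per second
N = 4096 * 8       # number of bits to transfer: one 4 KB block

Tn = Ta + N / R                       # average time to read or write N bits
print(f"Tn = {Tn * 1e3:.3f} ms")      # ~5.051 ms
```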
MEMORY CHARACTERISTICS
MEMORY PERFORMANCE PARAMETERS
MEMORY CHARACTERISTICS
5. Physical type
Semiconductor
• Semiconductor memory uses
semiconductor- based integrated circuits to store
information.
Magnetic surface
• Magnetic storage uses different patterns
of magnetization on a magnetically coated surface to store
information.
• Example: Magnetic disk, Floppy disk, Hard disk drive
Optical
• The typical optical disc, stores information in deformities
on the surface of a circular disc and reads this information
by illuminating the surface with a laser diode and
observing the reflection.
MEMORY CHARACTERISTICS
6. Physical Characteristics
Erasable / Non-erasable
• Erasable memory: the stored information can be erased by writing new information. Example: magnetic storage is erasable.
• Non-erasable memory: cannot be altered, except by destroying the storage unit (e.g. ROM). A practical non-erasable memory must also be non-volatile. Example: CD-R.
MEMORY ORGANIZATION
• Physical arrangement of bits to form words
• 2 types
• 1 dimensional
• 2 dimensional
• Basic element = memory cell
• Properties of Memory cell:
- They exhibit two stable states, which can be used to represent binary 1 and 0.
- They are capable of being written into (at least once) to set the state.
- They are capable of being read to sense the state.
Memory Organization
1 – dimensional organization
Memory Organization
2 – dimensional organization
BYTE STORAGE METHODS
• Two ways to store a string of data (Bytes) in computers:
• Big Endian
• Little Endian
• The Least Significant Byte (LSB) is the right-most byte in the string, because it has the least effect on the value of the number.
• The Most Significant Byte (MSB) is the left-most byte, which carries the greatest numerical value.
• In Big Endian, the MSB of the data is placed at the byte with the lowest address (the first byte is stored in memory first).
• In Little Endian, the LSB of the data is placed at the byte with the lowest address (the last byte is stored in memory first).
BYTE STORAGE METHODS
• Big Endian – the most common format in data networking (TCP, UDP, IPv4 and IPv6 transmit data in big-endian order)
• Little Endian – common on microprocessors
BYTE STORAGE METHODS
• Big-Endian
• Assigns the MSB to the lowest address and the LSB to the highest address
• Ex: 0xDEADBEEF
Memory Location Value
Base Address + 0 DE
Base Address + 1 AD
Base Address + 2 BE
Base Address + 3 EF
BYTE STORAGE METHODS
• Little Endian
• Assigns the MSB to the highest address and the LSB to the lowest address
• Ex: 0xDEADBEEF
Memory Location Value
Base Address + 0 EF
Base Address + 1 BE
Base Address + 2 AD
Base Address + 3 DE
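The byte layouts in the two tables above can be checked with Python's struct module; this is only an illustration of the ordering, not part of the original slides.

```python
# Checking the two byte layouts of 0xDEADBEEF with Python's struct module.
import struct

value = 0xDEADBEEF
big = struct.pack(">I", value)      # big-endian: MSB at the lowest address
little = struct.pack("<I", value)   # little-endian: LSB at the lowest address

print(big.hex())     # 'deadbeef' -> DE AD BE EF at base+0 .. base+3
print(little.hex())  # 'efbeadde' -> EF BE AD DE at base+0 .. base+3
```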
BYTE STORAGE METHODS
• Little Endian
• Intel x86 family
• Digital equipment corporation architectures (PDP – 11, VAX, Alpha)
• Big Endian
• Sun SPARC
• IBM 360 / 370
• Motorola 68000
• Motorola 88000
• Bi-Endian (The ability to switch between big endian and little endian
ordering.)
• Power PC
• MIPS
• Intel IA-64
BYTE STORAGE METHODS
CONCEPTUAL VIEW OF MEMORY DESIGN
• Revisiting Logic Gates
[Figure: a single RAM memory cell]
CONCEPTUAL VIEW OF MEMORY DESIGN
CONCEPTUAL VIEW OF MEMORY DESIGN
2-4 Decoder Circuit
CONCEPTUAL VIEW OF MEMORY DESIGN
CONCEPTUAL VIEW OF MEMORY DESIGN
A typical chip Layout
Main Memory Structure:
• If the main memory is structured as a collection of physically separate modules, each with its own Address Buffer Register (ABR) and Data Buffer Register (DBR), then memory access operations may proceed in more than one module at the same time.
• Hence, the aggregate rate of transmission of words to and from the memory can be increased.
MEMORY INTERLEAVING
Interleaving:
• Memory interleaving is an abstraction technique designed to compensate for the relatively slow core (main) memory by spreading memory addresses evenly across memory banks.
• It divides memory into a number of modules such that successive words in the address space are placed in different modules.
• To implement an interleaved structure with the lower-order k address bits selecting the module, there must be 2^k modules.
MEMORY INTERLEAVING
Distribution Methods in the Memory System
• Consecutive words in a module:
• When consecutive locations are accessed, as happens when a block of data is transferred to a cache, only one module is involved.
MEMORY INTERLEAVING
Distribution Methods in the Memory System
• Consecutive words in consecutive modules:
• This method is called memory interleaving.
• Parallel access is possible, hence faster.
• Higher average utilization of the memory system.
Usage of Memory Interleaving:
• Main memory is relatively slower than the cache.
• So to improve the access time of the main memory, interleaving is
used.
• This method uses memory effectively.
Classification of Memory Interleaving: (Two address formats
for Memory Interleaving)
• High Order Interleaving
• Low Order Interleaving
MEMORY INTERLEAVING
High Order Interleaving:
• In high-order interleaving, the most significant bits of the address
select the memory chip.
• The least significant bits are sent as addresses to each chip.
• The maximum rate of data transfer is limited by the memory cycle
time.
MEMORY INTERLEAVING
Low Order Interleaving:
• In low-order interleaving, the least significant bits select the memory
bank (module).
• In this, consecutive memory addresses are in different memory
modules.
• This allows memory access at much faster rates than allowed by the
cycle time.
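A minimal sketch contrasting the two address formats just described, assuming 2^k = 4 modules and a small 64-word address space; both numbers are illustrative only.

```python
# A minimal sketch of the two address formats, assuming 2**k = 4 modules
# and a 64-word address space (both numbers are illustrative only).
NUM_MODULES = 4          # 2**k modules, k = 2
WORDS_PER_MODULE = 16    # 64 words / 4 modules

def low_order(addr):
    # least significant bits select the module, so consecutive
    # addresses land in different modules
    return addr % NUM_MODULES, addr // NUM_MODULES   # (module, word in module)

def high_order(addr):
    # most significant bits select the module, so consecutive
    # addresses stay in the same module
    return addr // WORDS_PER_MODULE, addr % WORDS_PER_MODULE

for a in (0, 1, 2, 3):
    print(a, low_order(a), high_order(a))
# low-order:  addresses 0..3 fall in modules 0, 1, 2, 3 (parallel access possible)
# high-order: addresses 0..3 all fall in module 0
```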
MEMORY INTERLEAVING
MEMORY INTERLEAVING
Benefits of Memory Interleaving
• It allows simultaneous access to different modules of memory.
• Interleave memory is useful in the system with pipelining and vector
processing.
• In an interleaved memory, consecutive memory addresses are spread
across different memory modules.
• It reduces the memory access time by a factor close to the number of memory banks.
MEMORY INTERLEAVING
Interleaving DRAM
• Main memory is usually composed of a
collection of DRAM (Dynamic random-
access memory) memory chips grouped
together to form a memory bank.
• The memory banks will be interleaved.
• Memory accesses to different banks can
proceed in parallel with high throughput.
• Memory banks can each be allocated a contiguous block of memory addresses, but spreading consecutive addresses across banks in an interleaved layout gives far better performance.
DESIGN OF SCALABLE MEMORY USING RAMs
Memory - Block Diagram
DESIGN OF SCALABLE MEMORY USING RAMs
RAM Chips
• The logic 1 and 0 are normal digital signals.
• The high-impedance state behaves like an open circuit, which means that the output does not carry a signal and has no logic significance.
• The unit is in operation only when CS1 = 1 and CS2 = 0. The bar on
top of the second select variable indicates that this input is enabled
when it is equal to 0.
• If the chip select inputs are not enabled, or if they are enabled but
the read or write inputs are not enabled, the memory is inhibited
(temporarily inaccessible) and its data bus is in a high-impedance
state (the output of a buffer is disconnected from the output bus).
• When CS1 = 1 and CS2 = 0, the memory can be placed in a write
or read mode.
• When the WR input is enabled, the memory stores a byte from
the
data bus into a location specified by the address input lines.
• When the RD input is enabled, the content of the selected byte is
placed into the data bus. The RD and WR signals control the
memory operation as well as the bus buffers associated with the
bidirectional data bus .
READ ONLY MEMORY (ROM)
ROM TYPES
READ ONLY MEMORY (ROM)
ROM Chips
• A ROM chip is organized externally in a similar
manner. However, since a ROM can only read, the data bus
can only be in an output mode
• For the same-size chip, it is possible to have more bits of
ROM than of RAM, because the internal binary cells in
ROM occupy less space than in RAM.
• For this reason, the diagram specifies a 512-byte ROM,
while the RAM has only 128 bytes.
• The nine address lines in the ROM chip specify any one
of the 512 bytes stored in it
• The two chip select inputs must be CS1 = 1 and CS2 = 0
for the unit to operate. Otherwise, the data bus is in a
high- impedance state.
• There is no need for a read or write control because the unit can only read. Thus, when the chip is enabled by the two select inputs, the byte selected by the address lines appears on the data bus.
• No R/W signal – the chip is read-only by default.
• Data bus – unidirectional.
CONSTRUCTION OF LARGER SIZE MEMORY
• Mix of RAM and
ROM
• Decoder Circuit
• Address Lines
• Data Lines
• SELECT signal
MEMORY CONNECTION TO CPU
Address lines – specify the number of rows
Data lines – specify the number of columns (word size)
MEMORY DESIGN - MEMORY INTERFACE
ADDRESS MAP
• The designer of a computer system must calculate the amount of memory required for the particular
application and assign it to either RAM or ROM.
• The interconnection between memory and processor is then established from knowledge of the size of memory needed and the type of RAM and ROM chips available.
• The addressing of memory can be established by means of a table that specifies the memory address
assigned to each chip.
• The table, called a memory address map, is a pictorial representation of assigned address space for each
chip in the system.
To demonstrate with a particular example,
assume that a computer system needs 512
bytes of RAM and 512 bytes of ROM.
The memory address map for this
configuration is shown in Table 1.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• The component column specifies whether a RAM or a ROM chip is used. The hexadecimal address column
assigns a range of hexadecimal equivalent addresses for each chip.
• The address bus lines are listed in the third column. Although there are 16 lines in the address bus, the table
shows only 10 lines because the other 6 are not used in this example and are assumed to be zero.
• The small x's under the address bus lines designate those lines that must be connected to the address inputs in each
chip. The RAM chips have 128 bytes and need seven address lines.
• The ROM chip has 512 bytes and needs 9 address lines.
• The x's are always assigned to the low-order bus lines:
• lines 1 through 7 for the RAM and lines 1 through 9 for the ROM.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• It is now necessary to distinguish between four RAM chips by assigning to each a different address. For this particular
example we choose bus lines 8 and 9 to represent four distinct binary combinations.
• Note that any other pair of unused bus lines can be chosen for this purpose. The table clearly shows that the nine low-order bus lines constitute a memory space for RAM equal to 2^9 = 512 bytes.
• The distinction between a RAM and ROM address is done with another bus line. Here we choose line 10 for
this purpose. When line 10 is 0, the CPU selects a RAM, and when this line is equal to 1, it selects the ROM.
• The equivalent hexadecimal address for each chip is obtained from the information under the address bus assignment.
The address bus lines are subdivided into groups of four bits each so that each group can be represented with a
hexadecimal digit.
• The first hexadecimal digit represents lines 13 to 16 and is always 0. The next hexadecimal digit represents lines 9 to
12, but lines 11 and 12 are always 0.
• The range of hexadecimal addresses for each component is determined from the x's associated with it. These x's
represent a binary number that can range from an all-0's to an all-1's value.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
Bus lines 8 and 9 are used to select one RAM out of the 4 RAM chips.
Bus line 10 selects RAM or ROM.
The number of address-line bits (x) is given by N = 2^x.
The number of data-bus lines = the number of columns (word size).
Assumptions
RAM size: 128 × 8 → 7 address bits needed, 8 data lines
ROM size: 512 × 8 → 9 address bits needed, 8 data lines
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
Address Calculation (From – To)
• Calculated from the 16-bit address and written in hexadecimal format.
• Substitute x = 0 in every x position to get the 'From' address: for the ROM the address bits become 0000 0010 0000 0000 = 0200 (hex), so the 'From' address of the ROM is 0200.
• Substitute x = 1 in every x position to get the 'To' address: the address bits become 0000 0011 1111 1111 = 03FF (hex), so the 'To' address of the ROM is 03FF.
MEMORY DESIGN - MEMORY INTERFACE
Address Map
• Considerations:
N – number of words in the available chip (rows)
N' – required number of words (rows)
W – width/size of a word in the available chip (columns)
W' – required width/size of a word (columns)
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• Hence,
• Available memory chip size: N × W (N – rows – number of words; W – columns – word size)
• Required memory size: N' × W', where N' ≥ N and W' ≥ W (required ≥ available)
• Required number of chips = p × q, where p = N' / N (number of rows of chips) and q = W' / W (number of columns of chips)
• Address bits are required to map to the rows, and a decoder is used; the number of address-line bits (x) is given by N = 2^x.
• Number of data-bus lines = number of columns (word size)
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
There are 3 types of organizations of N’ × W’ that can be formed using N× W
• N’ > N and W’ = W => increasing the number of words (Rows - N) in the memory
• N’ = N and W’ > W => increasing the word size (Columns – W) of the chip
• N’ > N and W’ > W => increasing both N & W (Rows & Columns) the number of
words and number of bits in each word.
There are different organizations of an N' × W' memory using N × W-bit chips:
• Case 1: N' > N and W' = W – increase the number of words by a factor of p = N' / N.
Example: how many 1024 × 8 RAM chips are needed to provide a memory capacity of 2048 × 8?
• Case 2: N' = N and W' > W – increase the word size of the memory by a factor of q = W' / W.
Example: how many 1024 × 4 RAM chips are needed to provide a memory capacity of 1024 × 8?
• Case 3: N' > N and W' > W – increase the number of words by a factor of p and the word size by a factor of q.
Example: how many 1024 × 4 RAM chips are needed to provide a memory capacity of 2048 × 8?
(The sketch below computes p, q and the chip count for these questions.)
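A small helper, following the p = N'/N, q = W'/W rule above, that answers the three chip-count questions; the function name is arbitrary and it assumes the required size is an exact multiple of the chip size.

```python
# A small helper following the rule p = N'/N, q = W'/W, chips = p * q.
# It assumes the required size is an exact multiple of the chip size.
def chips_needed(N, W, N_req, W_req):
    p = N_req // N          # rows of chips (word-count factor)
    q = W_req // W          # columns of chips (word-size factor)
    return p, q, p * q

print(chips_needed(1024, 8, 2048, 8))   # (2, 1, 2) -> 2 chips of 1024 x 8
print(chips_needed(1024, 4, 1024, 8))   # (1, 2, 2) -> 2 chips of 1024 x 4
print(chips_needed(1024, 4, 2048, 8))   # (2, 2, 4) -> 4 chips of 1024 x 4
```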
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
Problem – 1 (CASE 2) – Increasing the word size (Columns)
• Design 128 × 16 (N’ × W’)- bit RAM using 128 × 4(N × W) - bit RAM
• Solution: p = 128 / 128 = 1;
q = 16 / 4 = 4
• Therefore, No. of chips required is calculated by,
p × q = 1 × 4 =4 (i.e. 4 memory chips of size 128 × 4 are required to construct 128 × 16
bit RAM)
x – number of bits required to represent the address lines: 128 × 4 = 2^7 × 4 (by N = 2^x), so a 7-bit address is required.
y – number of bits required for selecting the specific RAM: p = 2^y, 1 = 2^0, so y = 0 (since there is only one row of RAM chips).
z – number of bits to select RAM, ROM or interface: number of types T = 1, and T = 2^z, so z = 0.
Hence X = 7, Y = 0, Z = 0.
Component Hexadecimal address Address Bus
From To 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
RAM 1.1 0000 007F x x x x x x x
RAM 1.2 0000 007F x x x x x x x
RAM 1.3 0000 007F x x x x x x x
RAM 1.4 0000 007F x x x x x x x
Z – number of bits to select RAM, ROM or interface: Z = 0, so address line 9 is unused.
Y – number of bits required for selecting the specific RAM: Y = 0; there is only one RAM row, so address lines 7 and 8 are unused.
X = 7: a 7-bit address (lines 0–6).
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• Substitute x = 0 in the x positions to get the 'From' address: the address bits become 0000 0000 0000 0000 = 0000 (hex), so the 'From' address of each RAM chip is 0000.
• Substitute x = 1 in the x positions to get the 'To' address: the address bits become 0000 0000 0111 1111 = 007F (hex), so the 'To' address of each RAM chip is 007F.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
Memory design – increasing the word size
Design a 128 × 16 (N' × W')-bit RAM using 128 × 4 (N × W)-bit RAM chips.
[Figure: four 128 × 4 RAM chips arranged horizontally; all four share the 7-bit address bus (lines 0–6), the read/write control and the chip-select signal, and each chip supplies 4 bits of the 16-bit data bus.]
Since W is increased, the 4 chips are arranged horizontally.
MEMORY DESIGN - MEMORY INTERFACE
ADDRESS MAP
Problem – 2 (CASE 1) – Increasing the number of words (rows)
Design a 1024 × 8-bit RAM using 256 × 8-bit RAM chips.
Solution: p = 1024 / 256 = 4; q = 8 / 8 = 1
S.NO Memory N x W N' x W' p q p * q x y z Total
1 RAM 256 × 8 1024 × 8 4 1 4 8 2 0 10
p × q = 4 × 1 = 4, so 4 memory chips of size 256 × 8 are required to construct the 1024 × 8-bit RAM.
x – number of bits required to represent the address lines: 256 × 8 = 2^8 × 8, so x = 8 address bits.
y – number of bits for selecting the specific RAM chip: p = 2^y, 4 = 2^2, so y = 2.
z – number of bits to select RAM or ROM: z = 0.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
X = 8: 8 address lines (lines 0 to 7).
Y = 2: address lines 8 and 9 select one RAM among the 4 RAM chips.
Z = 0: address line 10 is unused.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• Substitute x = 0 to get the 'From' address: for the RAM selected when address lines 9 and 8 are both 1, the address bits become 0000 0011 0000 0000 = 0300 (hex), so its 'From' address is 0300.
• Substitute x = 1 to get the 'To' address: the address bits become 0000 0011 1111 1111 = 03FF (hex), so its 'To' address is 03FF.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
[Figure: four 256 × 8 RAM chips; all share the 8-bit data bus, the R/W control and address lines A0–A7, while a 2 × 4 decoder driven by A8 and A9 generates the chip-select (CS) signal for each RAM.]
Problem – 3 (CASE 3) – Increasing the Words & Word size (Rows & Columns)
Design 256 × 16 – bit RAM using 128 × 8 – bit RAM chips
Solution: p = 256 / 128 = 2; q = 16 / 8 = 2
S.NO Memory N x W N' x W' p q p * q x y z Total
1 RAM 128 × 8 256 × 16 2 2 4 7 1 0 8
p × q = 2 × 2 = 4, so 4 chips are required: 2 rows of chips × 2 columns of chips.
x – number of bits required to represent the address lines: 128 × 8 = 2^7 × 8, so x = 7 address bits.
y – number of bits for selecting the specific RAM row: p = 2^y, 2 = 2^1, so y = 1.
z – number of bits to select RAM or ROM: z = 0.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• Substitute x = 0 to get the 'From' address: for the RAM row selected when address line 7 is 1, the address bits become 0000 0000 1000 0000 = 0080 (hex), so its 'From' address is 0080.
• Substitute x = 1 to get the 'To' address: the address bits become 0000 0000 1111 1111 = 00FF (hex), so its 'To' address is 00FF.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
[Figure: four 128 × 8 RAM chips arranged as 2 rows × 2 columns; address lines 6–0 go to every chip, a 1 × 2 decoder driven by address line 7 selects the row, and each column supplies 8 bits of the 16-bit data bus.]
Problem – 4: Design a 256 × 16-bit RAM using 256 × 8-bit RAM chips and a 256 × 8-bit ROM using 128 × 8-bit ROM chips.
Solution:
S.NO Memory N x W N' x W' p q p * q x y z Total
1 RAM 256 × 8 256 × 16 1 2 2 8 0 1 9
2 ROM 128 × 8 256 × 8 2 1 2 7 1 1 9
RAM chip calculation
p = 256 / 256 = 1; q = 16 / 8 = 2
p × q = 1 × 2 = 2, so 2 RAM chips are required: 1 row × 2 columns.
x – number of bits to represent the address lines: 256 × 8 = 2^8 × 8, so x = 8.
y – number of bits for selecting the specific RAM: p = 2^y, 1 = 2^0, so y = 0.
z – number of bits to select RAM or ROM: z = 1.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
ROM chip calculation
p = 256 / 128 = 2; q = 8 / 8 = 1
p × q = 2 × 1 = 2, so 2 ROM chips are required: 2 rows × 1 column.
x – number of bits to represent the address lines: 128 × 8 = 2^7 × 8, so x = 7.
y – number of bits for selecting the specific ROM: p = 2^y, 2 = 2^1, so y = 1.
z – number of bits to select RAM or ROM: z = 1.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
For the RAM, no chip-select bits are required (y = 0, no extra address line used).
For the ROM, y = 1 bit (the 7th address line is used to select between the two ROM chips).
z = 1 bit distinguishes RAM from ROM.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
• Substitute x = 0 to get the 'From' address: for the chip selected when address lines 8 and 7 are both 1, the address bits become 0000 0001 1000 0000 = 0180 (hex), so its 'From' address is 0180.
• Substitute x = 1 to get the 'To' address: the address bits become 0000 0001 1111 1111 = 01FF (hex), so its 'To' address is 01FF.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
[Figure: chip layout for Problem 4 – the two 256 × 8 RAM chips sit side by side, each supplying 8 bits of the 16-bit RAM data bus, while the two 128 × 8 ROM chips share an 8-bit data bus; address lines 6–0 (plus line 7 for the RAMs) address the chips, and 1 × 2 decoders on the higher address lines generate the chip selects.]
Problem – 5
• A computer employs RAM chips of 128 x 8 and ROM chips of 512 x 8. The computer system needs
256 x 8 of RAM, 1024 x 16 of ROM, and two interface units with 256 registers each. A memory
mapped I/O configuration is used. The two higher -order bits of the address bus are assigned 00 for
RAM, 01 for ROM, and 10 for interface registers.
• a. Compute the total number of decoders needed for the above system.
• b. Design a memory-address map for the above system
• c. Show the chip layout for the above design
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
S.NO Memory N x W N' x W' p q p * q x y z Total
1 RAM 128 × 8 256 × 8 2 1 2 7 1 2 10
2 ROM 512 × 8 1024 × 16 2 2 4 9 1 2 12
3 Interface 256 registers 2 1 2 8 1 2 11
1. RAM
Solution: p = 256 / 128 = 2; q = 8 / 8 = 1
p × q = 2 × 1 = 2, so 2 RAM chips are required: 2 rows × 1 column.
x – number of bits to represent the address lines: 128 × 8 = 2^7 × 8, so x = 7.
y – number of bits for selecting the specific RAM: p = 2^y, 2 = 2^1, so y = 1.
z – number of bits to select RAM, ROM or interface: z = 2.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
2. ROM
Solution: p = 1024 / 512 = 2; q = 16 / 8 = 2
p × q = 2 × 2 = 4, so 4 ROM chips are required: 2 rows × 2 columns.
x – number of bits to represent the address lines: 512 × 8 = 2^9 × 8, so x = 9.
y – number of bits for selecting the specific ROM: p = 2^y, 2 = 2^1, so y = 1.
z – number of bits to select RAM, ROM or interface: z = 2.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
3. Interface (q is always 1 for interfaces)
p = number of interface units = 2 (given); 2^y = p = 2, so y = 1.
2^x = 256 (number of registers), so x = 8.
p × q = 2 × 1 = 2, so 2 interface units are required: 2 rows × 1 column.
z – number of bits to select RAM, ROM or interface: z = 2.
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
Component Hexadecimal Address Address Bus
From To 15 - 12 11 10 9 8 7 6 5 4 3 2 1 0
RAM1 0000 007F 0 0 0 x x x x x x x
RAM2 0200 027F 0 0 1 x x x x x x x
ROM1.1 0400 05FF 0 1 0 x x x x x x x x x
ROM1.2 0400 05FF 0 1 0 x x x x x x x x x
ROM2.1 0600 07FF 0 1 1 x x x x x x x x x
ROM2.2 0600 07FF 0 1 1 x x x x x x x x x
Interface1 0800 08FF 1 0 0 x x x x x x x x
Interface2 0A00 0AFF 1 0 1 x x x x x x x x
X=address lines(RAM7,ROM9,interface8
Y=1 Z=2
MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
[Figure: chip layout for Problem 5 – address lines 11–9 drive a 3 × 8 decoder whose outputs provide the chip selects for RAM 1, RAM 2, the ROM 1 pair, the ROM 2 pair, interface 1 and interface 2; the lower address lines, the R/W control and the data bus are connected to every chip.]
Problem – 6
• Suppose that a 2M × 16 RAM memory is built using 256K × 8 RAM chips and a 1K × 8 ROM memory is built using 256 × 8 ROM chips, with word-addressable memory. Find the following:
• a) How many RAM chips and ROM chips are necessary?
• b) If we were accessing one full word in RAM, how many chips would be involved?
• c) How many address bits are needed for each RAM chip?
• d) How many memory banks are required? Hint: number of banks are addressable units in
main memory and RAM chips
• e) If high-order interleaving is used, where would address (1B)16 be located? Also, for a
low-order interleaving?
MEMORY DESIGN - MEMORY INTERFACE
ADDRESS MAP
Problem – 6 – Solution
• a) 16 RAM chips (2M × 16 from 256K × 8 needs 8 × 2 chips) and 4 ROM chips (1K × 8 from 256 × 8 needs 4 chips).
• b) Each RAM chip is 256K × 8, so to access one full 16-bit word, 2 RAM chips would be involved.
• c) The 256K × 8 RAM chip has 18 address bits (2^18 = 256K), so each RAM chip needs 18 address bits.
• d) 8 memory banks.
• e) Address (1B)16 = 27: with high-order interleaving it is located in module 0, word 27; with low-order interleaving it is in module 3, word 3.
MEMORY DESIGN - MEMORY INTERFACE
ADDRESS MAP
CACHE MEMORY
• Special, very high-speed memory
• Used to speed up and synchronize with a high-speed CPU
• Cache memory is costlier than main memory or disk memory but more economical than CPU registers
• An extremely fast memory type that acts as a buffer between RAM and the CPU
• It holds frequently requested data and instructions so that they are immediately available to the CPU when needed
• Used to reduce the average time to access data from the main memory
• The cache is a smaller and faster memory that stores copies of data from frequently used main memory locations
CACHE MEMORY:
PRINCIPLES
• The intent of cache memory is to provide
the fastest access to resources without
compromising on size and price of the
memory.
• A processor attempting to read a byte of data first looks at the cache memory.
• If the byte does not exist in cache memory, it searches for the byte in the main memory.
TYPES OF CACHE
MEMORY
• Primary Cache:
• very fast and its access time is similar to the
processor registers
• it is built onto the processor chip
• its size is quite small
• also known as a level 1 (L1) cache and is built using static RAM (SRAM)
• Secondary Cache:
• The secondary cache or external cache is cache
memory that is external to the primary
cache
• It is located between the primary cache and the
main memory
CACHE MEMORY: ADVANTAGES &
DISADVANTAGES
• Advantages of Cache Memory
• Faster
• Less access time
• Disadvantages of Cache Memory
• Expensive
• Limited capacity
CACHE MEMORY MANAGEMENT TECHNIQUES
• Block placement: direct mapping, fully associative, set associative
• Block identification: tag, index, offset
• Block replacement: FCFS, LRU, MRU, Optimal
• Update policies: write through, write back, write around, write allocate
MAPPING - BASICS
• From secondary memory (SM) to main memory (MM), the terms pages and frames are used.
• From main memory (MM) to cache memory (CM), the terms blocks and lines are used.
• Pages, frames, blocks and lines all denote the same kind of fixed-size unit: blocks are MM blocks, and cache lines are cache blocks.
MAPPING - BASICS
Mapping 64 words of main memory onto a 16-word cache
• Main memory address: the main memory has 64 words organized as 16 blocks of 4 words each. Each word (0 to 63) holds data that may need to be transferred, so the main memory (physical) address must point to an individual word in a block.
• Number of bits needed to address 64 words = 6, so the physical address (PA) is 6 bits.
• Number of bits needed to identify one of the 16 blocks = 4; number of bits to identify a word within a block = 2.
• If the PA is 011111, the corresponding block and the word in that block are identified by the first 4 bits and the last 2 bits respectively.
• Cache memory organization: the cache has 4 lines (cache blocks), so 2 bits are needed to identify a cache line.
• MM size = 16 blocks, cache size = 4 blocks (lines); hence, mapping is required.
CACHE - MEMORY MAPPING
TECHNIQUES
• Cache mapping defines how a block
from the main memory is mapped to
the cache memory in case of a cache
miss
• Cache mapping is a technique by
which the contents of main memory
are brought into the cache memory.
• Direct mapping
• Associative mapping
• Set-Associative mapping
• Direct Mapping
• Many to One Mapping
• Map to a fixed Cache Line
• Associative Mapping
• Many to Many Mapping
• Map to any Cache Line
• Set Associative Mapping
CACHE - MEMORY MAPPING
TECHNIQUES
MAPPING TECHNIQUES - DIRECT MAPPING
• A particular block of main memory can map only to a particular line
of the cache
• Assign each memory block to a specific line in the cache
• If a line is previously taken up by a memory block when a new block
needs to be loaded, the old block is trashed
Cache line number = (MM block address) % (number of cache lines)
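A minimal sketch of the direct-mapping rule above; the 128-line cache size matches the worked example later in this section, but the code itself is only illustrative.

```python
# Direct mapping: each MM block maps to exactly one cache line,
# line = block_address % number_of_cache_lines (the formula above).
NUM_CACHE_LINES = 128    # assumed cache size; matches the worked example below

def cache_line(block_address):
    return block_address % NUM_CACHE_LINES

# blocks 0, 128, 256, ... all compete for line 0; 1, 129, ... for line 1
print([cache_line(b) for b in (0, 128, 256, 1, 129)])   # [0, 0, 0, 1, 1]
```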
MAPPING TECHNIQUES - DIRECT
MAPPING
Mapping 64 words of Main memory on to 16 words of Cache
• Mapping – Direct Mapping
MM Size – 16 blocks
Cache Size – 4 blocks(Lines)
Hence, Mapping is Required
Mapping Techniques - Direct Mapping
Cache line number = ( MM Block Address ) % (Number of CM lines)
MAPPING TECHNIQUES - DIRECT MAPPING
Mapping 64 words of main memory onto a 16-word cache – direct mapping
• A block is identified by the first 4 bits of the PA; 2 bits identify a word within a block.
• How is a block identified in the cache using the MM address bits? Several MM blocks map to the same cache block/line, a many-to-one relationship between main memory and cache memory.
MAPPING TECHNIQUES - DIRECT MAPPING
Mapping 64 words of main memory onto a 16-word cache
• A word is identified by the last 2 bits of the PA; these 2 bits identify a word within a block in both the MM block and the cache line.
• How is a word in a block identified in the cache using the MM address bits? The remaining address bits are split into the line number (index), which selects the cache line, and the tag bits, which distinguish the MM blocks that share that line.
MAPPING TECHNIQUES - DIRECT MAPPING
Mapping 64 words of main memory onto a 16-word cache
• 2 bits identify a word within a block (the line offset); the remaining PA bits form the tag bits and the line number (index).
• Why are they called tag bits? The tag is stored along with each cache line so that the cache can tell which of the many MM blocks that map to this line is currently present.
DIRECT MAPPING – CALCULATION OF PA BITS NEEDED
• PA = N bits, where MM size = 2^N.
• The PA is split into three fields: Tag | Line number | Word (offset).
• Word/offset bits Y: block (line) size = 2^Y.
• Line-number bits: number of cache lines = cache size / line size, expressed as a power of 2.
• Tag bits: MM size / cache size, expressed as a power of 2 (equivalently, tag bits = N − line bits − offset bits).
• Number of MM blocks = MM size / block size = 2^X, where X = tag bits + line bits.
MAPPING TECHNIQUES - DIRECT MAPPING
• Consider a cache consisting of 128 blocks of 16 words each, for total of
2048(2K) words
• Assume that the main memory is addressable by 16 bit address.
• Main memory is 64K which will be viewed as 4K (4×1024=4096) blocks of 16
words each.
• In this scheme, block j of main memory maps onto block (j modulo 128) of the cache. Thus main memory blocks 0, 128, 256, … are stored at cache block 0; blocks 1, 129, 257, … are stored at cache block 1, and so on.
PA bits calculation:
MM words = 4096 × 16 = 65536 words, so the number of PA bits = 16 (2^16 = 65536).
Block bits:
Number of MM blocks = 4096, so 12 bits represent a block (2^12 = 4096); these 12 bits = tag bits + line (block) bits.
Line offset (word) bits:
Each block has 16 words, so 4 bits are needed (2^4 = 16).
Tag bits and line bits:
Number of lines in the cache = cache size / block size = 2048 / 16 = 128, so the line field needs 7 bits (2^7 = 128).
Tag bits = total block bits − line bits = 12 − 7 = 5 bits.
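The bit split just derived can be recomputed with a short Python sketch; the sizes are the ones from the example (64K-word main memory, 2K-word cache, 16-word blocks).

```python
# Recomputing the bit split for the example above: 64K-word main memory,
# 2K-word cache, 16-word blocks, direct mapped.
from math import log2

MM_WORDS = 64 * 1024
CACHE_WORDS = 2 * 1024
BLOCK_WORDS = 16

pa_bits = int(log2(MM_WORDS))                        # 16
offset_bits = int(log2(BLOCK_WORDS))                 # 4
line_bits = int(log2(CACHE_WORDS // BLOCK_WORDS))    # 7 (128 cache lines)
tag_bits = pa_bits - line_bits - offset_bits         # 5

print(pa_bits, tag_bits, line_bits, offset_bits)     # 16 5 7 4
```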
MAPPING TECHNIQUES - DIRECT MAPPING
Need of Replacement Algorithm-
In direct mapping,
There is no need of any replacement algorithm.
This is because a main memory block can map only to a
particular line of the cache.
Thus, the new incoming block will always replace the
existing block (if any) in that particular line.
MAPPING TECHNIQUES - DIRECT MAPPING - PROBLEMS
[Worked example slides: for each problem, compute the PA bits from the main memory size, split the physical address into tag, line number (index) and word (offset) fields, and derive the cache line/index bits and the tag bits from the cache and block sizes.]
PARAMETERS OF CACHE
MEMORY
⚫ Cache Hit
⚫ A referenced item is found in the cache by the processor
⚫ Cache Miss
⚫ A referenced item is not present in the cache
⚫ Hit ratio
⚫ Ratio of number of hits to total number of references => number of hits/(number of hits +
number of Miss)
⚫ Miss penalty
⚫ Additional cycles required to serve the miss
⚫ Time required for the cache miss depends on both the latency and bandwidth
⚫ Latency – time to retrieve the first word of the block
⚫ Bandwidth – time to retrieve the rest of this block
TYPES OF CACHE
MISS
• Compulsory Miss / Cold Miss
• Conflict Miss/ Collision miss/ Interference miss
• Capacity miss – caused not by the mapping technique but by the limited size of the cache.
MAPPING TECHNIQUES - DIRECT MAPPING -
DRAWBACKS
• Drawback – conflict misses: blocks that map to the same cache line keep evicting each other even when other lines are free.
[Figure: reference sequence illustrating compulsory misses and conflict misses.]
MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING
• To reduce conflict misses, associative mapping is used: a many-to-many relationship between MM blocks and cache lines.
• A block of main memory can map to any line of the cache that is freely available at that moment.
• This makes fully associative mapping more flexible than direct mapping.
• All the lines of the cache are freely available.
• When all the cache lines are occupied, one of the existing blocks has to be replaced.
MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING
• The entire block-number field is used as the tag bits (fully associative).
• There is no need to specify a line number.
PA ADDRESS CALCULATION - DIRECT & ASSOCIATIVE
Direct Mapping Associative Mapping
MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE
MAPPING
Need for a replacement algorithm:
• A replacement algorithm is required.
• The replacement algorithm decides which block to replace when all the cache lines are occupied.
• Thus, a replacement algorithm such as FCFS, LRU, etc. is employed.
Disadvantages:
• A replacement algorithm is required.
• During retrieval, all the lines need to be checked (every tag compared) for the data.
ASSOCIATIVE MAPPING – CALCULATION OF PA BITS NEEDED
• PA = N bits, where MM size = 2^N.
• The PA is split into two fields: Tag | Word (offset).
• Word/offset bits Y: block (line) size = 2^Y.
• Tag bits X: number of MM blocks = MM size / block size = 2^X, i.e. the whole block number is the tag.
MAPPING TECHNIQUES – (FULLY)
ASSOCIATIVE
MAPPING
Example
Consider a fully associative mapped cache of size 16 KB with block size 256 bytes. The size of
main
memory is 128 KB.
1.Number of bits in tag
2.Tag directory size
Given
Cache memory size = 16 KB
Block size = frame size = line size = 256 bytes
Main memory size = 128 KB
We consider that the memory is byte addressable.
Number of bits in the physical address:
Size of main memory = 128 KB = 2^17 bytes.
Thus, the number of bits in the physical address = 17 bits.
MAPPING TECHNIQUES – (FULLY)
ASSOCIATIVE
MAPPING
Number of bits in the block offset:
Block size = 256 bytes = 2^8 bytes.
Thus, the number of bits in the block offset = 8 bits.
Number of bits in the tag:
Number of bits in the tag = number of bits in the physical address − number of bits in the block offset = 17 − 8 = 9 bits.
Thus, the number of bits in the tag = 9 bits.
MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING
• Number of lines in the cache
• Total number of lines in the cache = cache size / line size = 16 KB / 256 bytes = 16 × 1024 bytes / 256 bytes = 64 lines
• Tag directory size
• = number of lines in the cache × number of bits in the tag = 64 × 9 bits = 576 bits = 72 bytes
• Thus, size of the tag directory = 72 bytes
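A short Python check of the fully associative example above (128 KB main memory, 16 KB cache, 256-byte blocks); it reproduces the 9 tag bits, 64 lines and 72-byte tag directory.

```python
# Checking the fully associative example: 128 KB main memory,
# 16 KB cache, 256-byte blocks, byte-addressable.
from math import log2

MM_BYTES, CACHE_BYTES, BLOCK_BYTES = 128 * 1024, 16 * 1024, 256

pa_bits = int(log2(MM_BYTES))          # 17
offset_bits = int(log2(BLOCK_BYTES))   # 8
tag_bits = pa_bits - offset_bits       # 9 (the whole block number is the tag)

lines = CACHE_BYTES // BLOCK_BYTES     # 64 lines
tag_directory_bits = lines * tag_bits  # 576 bits
print(tag_bits, lines, tag_directory_bits, tag_directory_bits // 8)   # 9 64 576 72
```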
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING
• Combination of direct and associative mapping technique
• Cache blocks are grouped into sets and mapping allow block of main memory reside into
any block of a specific set
MAPPING TECHNIQUES – SET
ASSOCIATIVE
MAPPING
• Hence the contention problem of direct mapping is eased; at the same time, the hardware cost is reduced by decreasing the size of the associative search.
• Consider a cache with two blocks per set. In this case, memory blocks 0, 64, 128, …, 4032 map into cache set 0, and each of them can occupy either block within this set.
• Having 64 sets means that the 6-bit set field of the address determines which set of the cache might contain the desired block.
• The tag bits of the address must be associatively compared to the tags of the two blocks of the set to check whether the desired block is present. This is a two-way associative search.
Cache set number = (MM block address) % (number of sets in the cache)
MAPPING TECHNIQUES – SET ASSOCIATIVE
MAPPING
Mapping using Set Associative
MAPPING TECHNIQUES – SET
ASSOCIATIVE MAPPING -
PROBLEMS
Block number (MM) % number of sets
• k = 2 means that each set contains two cache lines; this is called 2-way set-associative mapping.
• Since cache contains 6 lines, so number of sets in the cache = 6 / 2 = 3 sets.
• Block ‘j’ of main memory can map to set number (j mod 3) only of the cache.
• Within that set, block ‘j’ can map to any cache line that is freely available at
that moment.
• If all the cache lines are occupied, then one of the existing blocks will have
to be replaced.
MAPPING TECHNIQUES – SET
ASSOCIATIVE
MAPPING
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING
PA bits calculation:
MM words = 4096 × 16 = 65536 words, so the number of PA bits = 16 (2^16 = 65536).
Block bits:
Number of MM blocks = 4096, so 12 bits represent a block (2^12); these 12 bits = tag bits + set bits.
Line offset (word) bits:
Each block has 16 words, so 4 bits are needed (2^4 = 16).
Tag bits and set bits:
Number of lines in the cache = cache size / block size = 2048 / 16 = 128; with k lines per set, the number of sets = 128 / k.
• If k = 1, then k-way set associative mapping becomes direct mapping i.e
• 1-way Set Associative Mapping ≡ Direct Mapping
• If k = Total number of lines in the cache, then k-way set associative mapping
becomes fully associative mapping.
Need of Replacement Algorithm:
• Set associative mapping is a combination of direct mapping and fully associative
mapping.
• It uses fully associative mapping within each set.
• Thus, set associative mapping requires a replacement algorithm.
SET ASSOCIATIVE MAPPING – CALCULATION OF PA BITS NEEDED
• PA = N bits, where MM size = 2^N.
• The PA is split into three fields: Tag | Set number | Word (offset).
• Word/offset bits Y: block (line) size = 2^Y.
• Set-number bits: number of sets S = (number of lines) / k, where number of lines = cache size / line size; the field width is log2(S).
• Tag bits = k × (MM size / cache size), expressed as a power of 2 (i.e. log2 of that value).
• Number of MM blocks = MM size / block size = 2^X.
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Example 1
[Worked example slide: compute the PA bits from the main memory size, split the physical address into tag, set number and word (offset) fields, and derive the number of lines and sets from the cache size. Here tag bits = PA − (set bits + offset bits) = 7 − (2 + 2) = 3.]
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Example 1 – elaborated (figure-only slides)
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Example 2 (figure-only slides)
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Example 3
Questions: 1. Physical address split? 2. Tag directory size? 3. Show the format of the main memory address.
[Worked example slide: compute the PA bits, split the physical address into tag, set number and word (offset) fields, and derive the number of lines and sets; here tag bits = PA − (set bits + offset bits) = 28 − (12 + 7) = 9.]
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Example 4
[Worked example slides: compute the PA bits, split the physical address into tag, set number and word (offset) fields, and derive the number of lines, the number of sets and the cache size; alternatively, tag bits = log2(k × MM size / cache size).]
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Examples 5–8 (figure-only problem slides)
MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS
Example 9
• Consider a 32-bit microprocessor that has an on-chip 16-kB four-
way set-associative cache. Assume that the cache has a line size of
four 32-bit words. Draw a block diagram of this cache showing its
organization and how the different address fields are used to
determine a cache hit/miss. Identify the set number in cache for
mapping the given memory address ABCDE8F8.
MAPPING TECHNIQUES – SET
ASSOCIATIVE MAPPING -
PROBLEMS
Example 9 – Solution
• Given: 16 KB cache, 4-way set associative. Line size = 4 × 32 bits = 16 bytes.
• Since the question refers to words ("where in the cache is the word from memory location …"), word addressing is used, so offset bits = 2 (for 4 words per line).
• Number of lines = cache size / line size = 16 KB / 16 B = 1024; number of sets = number of lines / 4 (ways) = 256, so the set field is 8 bits and the tag is 22 bits.
• Identifying the set number for address ABCDE8F8: its binary form is 1010 1011 1100 1101 1110 1000 1111 1000, split as <1010 1011 1100 1101 1110 10> (tag, 22 bits) <00 1111 10> (set, 8 bits) <00> (offset, 2 bits).
• The set field 0011 1110 = 62, so the address is mapped to set number 62 in the cache.
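The address split in Example 9 can be verified with a few lines of Python; the field widths (2 offset bits, 8 set bits, 22 tag bits) are the ones derived above.

```python
# Verifying the field split for address 0xABCDE8F8 in Example 9
# (word-addressed: 2 offset bits, 8 set-index bits, 22 tag bits).
addr = 0xABCDE8F8

offset = addr & 0x3            # low 2 bits: word within the line
set_no = (addr >> 2) & 0xFF    # next 8 bits: set index
tag = addr >> 10               # remaining 22 bits: tag

print(offset, set_no, hex(tag))   # 0 62 0x2af37a -> the block maps to set 62
```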
CACHE MEMORY MANAGEMENT TECHNIQUES – BLOCK REPLACEMENT STRATEGIES
• Block placement: direct mapping, fully associative, set associative
• Block identification: tag, index, offset
• Block replacement: FCFS, LRU, MRU, Optimal
• Update policies: write through, write back, write around, write allocate
CACHE : BLOCK
REPLACEMENT POLICIES
• Cache memory size < main memory size.
• The processor fetches data from cache memory to perform execution operations.
• When the required block is not found in the cache, a main memory block is transferred into the cache and a previously present block is replaced, so cache replacement policies are needed.
• Replacement policies are used in:
• Set-associative caches
• Fully associative (associative) caches
• Cache replacement policies:
• FIFO
• Optimal algorithm
• LRU
• MRU
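Illustrative sketches of the FIFO and LRU policies listed above for a fully associative cache; the reference string and the 3-line capacity are hypothetical, chosen only to show that the two policies can behave differently.

```python
# Illustrative FIFO and LRU replacement for a fully associative cache of
# `capacity` lines; the reference string is a hypothetical example.
def fifo_misses(refs, capacity):
    cache, misses = [], 0
    for block in refs:
        if block not in cache:
            misses += 1
            if len(cache) == capacity:
                cache.pop(0)          # evict the block loaded earliest
            cache.append(block)
    return misses

def lru_misses(refs, capacity):
    cache, misses = [], 0
    for block in refs:
        if block in cache:
            cache.remove(block)       # refresh: it becomes most recently used
        else:
            misses += 1
            if len(cache) == capacity:
                cache.pop(0)          # front of the list = least recently used
        cache.append(block)
    return misses

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_misses(refs, 3), lru_misses(refs, 3))   # 9 misses vs 10 misses
```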
CACHE : BLOCK REPLACEMENT POLICIES - FIFO
CACHE : BLOCK REPLACEMENT POLICIES - FIFO
CACHE : BLOCK REPLACEMENT POLICIES - OPTIMAL
CACHE : BLOCK REPLACEMENT POLICIES - OPTIMAL
CACHE : BLOCK REPLACEMENT POLICIES - LRU
CACHE : BLOCK REPLACEMENT POLICIES - LRU
CACHE : BLOCK REPLACEMENT POLICIES - MRU
Most Recently Used (MRU)
CACHE MEMORY MANAGEMENT TECHNIQUES – UPDATE POLICIES
• Block placement: direct mapping, fully associative, set associative
• Block identification: tag, index, offset
• Block replacement: FCFS, LRU, MRU, Optimal
• Update policies: write through, write back, write around, write allocate
CACHE MEMORY – UPDATE POLICIES
• An update policy determines how the cache and main memory are updated after a write operation.
• Used on a write HIT: write through, write back.
• Used on a write MISS: write around, write allocate.
UPDATE POLICY - WRITE-THROUGH
• Correspond to items currently in the cache (i.e. write Hit)
• Systems that write to main memory each time as well as to cache
• It's the easiest policy to implement, but it lowers the cache's
performance.
• It's used when there are no frequent writes to the cache.
CPU Cache Main Memory
UPDATE POLICY - WRITE-BACK/COPY BACK
CPU Cache Main Memory
• Correspond to items currently in the cache (i.e. write Hit)
• Updating main memory until the block containing the altered item is
removed from the cache
On Replacement
of cache line
UPDATE POLICY - WRITE-AROUND
• Correspond to items not currently in the cache (i.e. write misses)
• The item could be updated in main memory only without affecting
the cache.
CPU Cache Main Memory
UPDATE POLICY - WRITE-ALLOCATE
• Correspond to items not currently in the cache (i.e. write misses)
• Update the item in main memory and bring the block containing the
updated item into the cache.
CPU Cache Main Memory
CACHE UPDATE POLICY - SUMMARY
• Update policies
• Write through: on a write hit, update the cache as well as main memory simultaneously.
• Write back: on a write hit, update the cache instantly and update main memory only when the cache line is replaced.
• Write around: on a write miss, update main memory without affecting the cache.
• Write allocate: on a write miss, update main memory and bring the block containing the updated item into the cache.
PERFORMANCE METRICS
• Cache Hit
• Hit Ratio
• Cache Miss
• Miss Ratio
• Miss Penalty
• Average Memory Access Time
PERFORMANCE METRICS
• Hit ratio:
• Number of references found in the cache against the total number of references.
• Hit ratio (h) = (number of references found in the cache) / (total number of memory references)
• Miss ratio (m) = 1 − hit ratio
• Hit ratio + miss ratio = 1
• TC is the average cache access time
• TM is the main memory access time
• TA is the average access time
MEAN MEMORY ACCESS TIME (MMAT)
• Average memory access time = hit ratio × cache memory access time + (1 − hit ratio) × (cache memory access time + time required to access a block of main memory)
LOOK THROUGH
CPU Cache Main Memory
On Miss
• The cache is checked first for a hit; if a miss occurs, then the access to main memory is started.
• TC is the cache access time, TM is the main memory access time, h is the hit ratio.
TAvg = TC + (1 − h) × TM
EXAMPLE
• Assume that a computer system employs a cache with an access time of 20 ns and a main memory with a cycle time of 200 ns. Suppose that the hit ratio for reads is 90%.
• A) What would be the average access time for reads if the cache is a "look-through" cache?
• The average read access time:
• TAvg = TC + (1 − h) × TM = 20 ns + 0.10 × 200 ns = 40 ns
LOOK ASIDE
• Access to main memory in parallel with the cache lookup
CPU Cache Main Memory
• Cache access time counted on a cache hit = TC; on a cache miss the cache lookup overlaps the memory access, so it contributes 0.
• TC is the cache access time, TM is the main memory access time, h is the hit ratio.
TAvg = (h × TC) + (1 − h) × TM
EXAMPLE
• Assume that a computer system employs a cache with an access time of 20 ns and a main memory with a cycle time of 200 ns. Suppose that the hit ratio for reads is 90%.
• B) What would be the average access time for reads if the cache is a "look-aside" cache?
• The average read access time in this case:
• TAvg = (h × TC) + (1 − h) × TM = 0.9 × 20 ns + 0.10 × 200 ns = 38 ns
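A short Python recomputation of the two examples above (20 ns cache, 200 ns main memory, 90% read hit ratio) using the look-through and look-aside formulas.

```python
# Recomputing both examples: 20 ns cache, 200 ns main memory, 90% read hits.
h, Tc, Tm = 0.9, 20, 200                 # hit ratio, cache time (ns), memory time (ns)

look_through = Tc + (1 - h) * Tm         # cache checked first, then memory
look_aside = h * Tc + (1 - h) * Tm       # memory access starts in parallel

print(look_through, look_aside)          # 40.0 ns and 38.0 ns
```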
EVALUATION OF CACHE
MEAN MEMORY ACCESS TIME
• TAvg => average memory access time; h => hit rate.
• Average memory access time = hit ratio × cache memory access time + (1 − hit ratio) × time required to access a block of main memory.
• TC => time to access information in the cache (cache access time); TM => time to access information in main memory (memory access time).
For a two-level cache implementation,
TAvg = (h1 × TC1) + (1 − h1) × h2 × TC2 + (1 − h1) × (1 − h2) × TM
where h1 is the hit rate in the L1 cache, h2 is the hit rate in the L2 cache, TC1 is the time to access information in the L1 cache, and TC2 is the time to access information in the L2 cache.
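A minimal sketch of the two-level formula above; the hit rates and access times below are hypothetical values, not taken from the slides.

```python
# A sketch of the two-level formula with hypothetical numbers:
# h1, h2 are hit rates; Tc1, Tc2, Tm are access times in ns.
h1, h2 = 0.9, 0.8
Tc1, Tc2, Tm = 2, 10, 100

T_avg = h1 * Tc1 + (1 - h1) * h2 * Tc2 + (1 - h1) * (1 - h2) * Tm
print(f"TAvg = {T_avg:.2f} ns")   # 0.9*2 + 0.1*0.8*10 + 0.1*0.2*100 = 4.60 ns
```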
A computer system employs a write-back cache with a 90% hit ratio for writes. The cache operates in "Look Through" mode and has an 80% read hit ratio. Reads account for 60% of all memory references and writes account for 40%. If the main memory cycle time is 300 ns and the cache access time is 30 ns, what would be the average access time for all references (reads as well as writes)?
A computer system employs a write-back cache with a 90% hit ratio for writes. The cache operates in "Look Aside" mode and has an 80% read hit ratio. Reads account for 60% of all memory references and writes account for 40%. If the main memory cycle time is 300 ns and the cache access time is 30 ns, what would be the average access time for all references (reads as well as writes)?
Ans:
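The slide leaves the answer blank. As a hedged sketch only, the snippet below evaluates both variants under common textbook assumptions: a miss costs TC + TM in look-through mode and TM in look-aside mode, and the cost of writing back dirty lines on replacement is ignored. The resulting figures are illustrative, not the author's answer.

```python
# Hedged sketch for the two mixed read/write problems above.
# Assumed miss penalty: Tc + Tm (look-through) or Tm (look-aside); dirty write-backs ignored.

def avg_access(read_frac, write_frac, read_hit, write_hit, tc, tm, look_through=True):
    miss_time = tc + tm if look_through else tm
    t_read = read_hit * tc + (1 - read_hit) * miss_time
    t_write = write_hit * tc + (1 - write_hit) * miss_time
    return read_frac * t_read + write_frac * t_write

print(round(avg_access(0.6, 0.4, 0.8, 0.9, 30, 300, look_through=True), 1))   # 78.0 ns ("Look Through", under these assumptions)
print(round(avg_access(0.6, 0.4, 0.8, 0.9, 30, 300, look_through=False), 1))  # 73.2 ns ("Look Aside", under these assumptions)
```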
computer architecture memory system organization

  • 1. MODULE 4 MEMORY SYSTEM ORGANIZATION AND ARCHITECTURE
  • 2. • Memory is one of the important subsystems in a Computer. It is a volatile storage system that stores Instructions and Data. Unless the program gets loaded in memory in executable form, the CPU cannot execute it. • A memory unit is a collection of storage cells and associated circuits needed to transfer information in and out of storage. • The memory stores binary information in a group of bits called words. A group of eight bits is called a byte. n data input lines k address lines Read Write n data output lines Memory unit 2k words n bits per word
  • 4. 1. Registers These are small, high-speed memory units located in the CPU. It is used to store the datas and instructions. Registers have the fastest access time and the smallest storage capacity. 2. Cache Memory Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used data and instructions that have been recently accessed from the main memory. Cache memory is designed to minimize the time it takes to access data by providing the CPU with quick access to frequently used data. 3. Main Memory Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer system. It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions that are currently in use by the CPU.
  • 5. Types of Main Memory •Static RAM: Static RAM stores the binary information in flip flops and information remains valid until power is supplied. It has a faster access time and is used in implementing cache memory. •Dynamic RAM: It stores the binary information as a charge on the capacitor. It requires refreshing circuitry to maintain the charge on the capacitors after a few milliseconds. It contains more memory cells per unit area as compared to SRAM. 4. Secondary Storage Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile memory unit that has a larger storage capacity than main memory. It is used to store data and instructions that are not currently in use by the CPU. Secondary storage has the slowest access time and is typically the least expensive type of memory in the memory hierarchy.
  • 6. 5. Magnetic Disk Magnetic Disks are simply circular plates that are fabricated with either a metal or a plastic or a magnetized material. The Magnetic disks work at a high speed inside the computer and these are frequently used. 6. Magnetic Tape Magnetic Tape is simply a magnetic recording device that is covered with a plastic film. It is generally used for the backup of data. In the case of a magnetic tape, the access time for a computer is a little slower and therefore, it requires some amount of time for accessing the strip.
  • 7. CHARACTERISTICS OF MEMORY HIERARCHY •Capacity: It is the global volume of information the memory can store. As we move from top to bottom in the Hierarchy, the capacity increases. •Access Time: It is the time interval between the read/write request and the availability of the data. As we move from top to bottom in the Hierarchy, the access time increases. •Performance: The speed gap increased between the CPU registers and Main Memory due to a large difference in access time. This results in lower performance of the system and thus, enhancement was required.. One of the most significant ways to increase system performance is minimizing how far down the memory hierarchy one has to go to manipulate data. •Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal Memory is costlier than External Memory.
  • 8. MEMORY CAPACITY • Number of bytes that can be stored
  • 9. MEMORY CHARACTERISTICS • Location – CPU – Internal (main) – External (secondary) • Capacity – Word size (in Bytes) – Number of words (Blocks) • Unit of transfer – Word – Block • Access methods – Sequential access – Direct access – Random access – Associative access • Performance –Access time –Cycle time –Transfer rate • Physical Type – Semiconductor – Magnetic surface – Optical • Physical Characteristics – Volatile / Non-Volatile – Erasable / Non-erasable
  • 10. MEMORY CHARACTERISTICS 1.Location of memories • CPU • Registers – used by CPU as its local memory • Internal memory • Main memory • Cache memory • External memory • Peripheral devices – disk, tape – accessible to CPU via I/O controllers
  • 11. MEMORY CHARACTERISTICS 2. Capacity & 3. Unit of Transfer • Word (Internal Memory) The capacity of the internal memory is typically expressed in terms of bytes or words • Word length For the internal memory, the unit of transfer is equal to the number of data lines into and out of the main memory module (word Length). The common word lengths are 8,16 and 32 bits. Total memory = number of words × word length • Block (External Memory) External memory capacity is expressed in terms of blocks. For external memory, the data often transferred in much longer units than a word, and these are referred to as Blocks.
  • 12. MEMORY CHARACTERISTICS 3.Access Methods Sequential Access • Accesses the memory in predetermined sequence • Shared read/write head is used, and this must be moved its current location to the desired location, passing and rejecting each intermediate record. • Memory is organized into units of data, called Records. Access must be made in a specific linear sequence • The time to access data in this type of method depends on the location of the data. So, the time to access an arbitrary record is highly variable
  • 13. MEMORY CHARACTERISTICS 3.Access Methods Random access • In random access method, data from any location of the memory can be accessed randomly. • The access to any location is not related with its physical location and is independent of other locations. • There is a separate access mechanism for each location. • Main memory systems are a random access • Storage locations can be accessed in any order • Example of random access: Semiconductor memories like RAM, ROM use random access method.
  • 14. MEMORY CHARACTERISTICS 3. Access Methods Direct access • Direct access method can be seen as combination of sequential access method and random access method. It is also referred as semi random access memory • Magnetic hard disks contain many rotating storage tracks. • Here each tracks has its own read or write head and the tracks can be accessed randomly. But access within each track is sequential. • Access is accomplished by general access to reach a general vicinity plus sequential searching, counting, waiting to reach the final location. • Example of direct access: Memory devices such as magnetic hard disks. Random Access Sequential access
  • 15. MEMORY CHARACTERISTICS 3.Access Methods Associate Access • Word is retrieved based on portion of its contents rather than its address • This enables one to make a comparison of desired bit locations within a word for specific match • Has own addressing mechanism • Retrieval time is constant • Access time is independent of location or prior access patterns • Example: Cache memories
  • 16. MEMORY CHARACTERISTICS 4. Performance Access time • The time required to read / write the data from / into desired record • Depends on the amount of data to be read / write • If the amount data is uniform for all records then the access time is same for all records. • Time from the instant that an address is presented to the memory to the instant that data have been stored or made available for use. Memory Cycle time • Access time + time required before a second access can commence • For Random access method ,this memory cycle time is same for all records • The sequential access and direct access ,the memory cycle time is different Transfer rate / Throughput • Rate at which the data can be transferred into or out of a memory unit • Random access memory: 1/cycle time • Non-Random access memory Tn = Ta + (N/R), where Tn– average time to read or write N bits Ta – average access time N – Number of bits R – Transfer rate in bits per second
  • 18. MEMORY CHARACTERISTICS 5. Physical type Semiconductor • Semiconductor memory uses semiconductor- based integrated circuits to store information. Magnetic surface • Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. • Example: Magnetic disk, Floppy disk, Hard disk drive Optical • The typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection.
  • 19. MEMORY CHARACTERISTICS 6.Physical characteristics Erasable/non erasable • Erasable memory • Erase the stored information by writing new information • Ex: Magnetic storage is erasable • Non-erasable memory • Cannot be altered, except by destroying the storage unit (ROM) • A practical non- erasable memory must also be non- volatile • Ex: CD-R, Flash
  • 20. MEMORY ORGANIZATION • Physical arrangement of bits to form words • 2 types • 1 dimensional • 2 dimensional • Basic element = memory cell • Properties of Memory cell: - They exhibit two stable states, which can be used to represent binary 1 and 0. - They are capable of being written into (at least once) to set the state. - They are capable of being read to sense the state.
  • 21. Memory Organization 1 – dimensional organization
  • 22. Memory Organization 2 – dimensional organization
  • 23. BYTE STORAGE METHODS • Two ways to store a string of data (Bytes) in computers: • Big Endian • Little Endian • Least Significant Byte (LSB) is the right-most bit in a string - because it has the least effect on the value of the binary number. • Most Significant Byte (MSB) - the left-most byte that carries the greatest numerical value. • In Big Endian, the MSB of the data is placed at the byte with the lowest address. (First byte stored in the memory first) • In Little Endian, the LSB of the data is placed at the byte with the lowest address. (Last byte stored in the memory first)
  • 24. • In Big Endian, the MSB of the data is placed at the byte with the lowest address. (First byte stored in the memory first) • In Little Endian, the LSB of the data is placed at the byte with the lowest address. (Last byte stored in the memory first) BYTE STORAGE METHODS
  • 25. • Big Endian - the most common format in data networking (TCP, UPD, IPv4 and IPv6 are using Big endian order to transmit data) • Little Endian - on microprocessors BYTE STORAGE METHODS
  • 26. • Big-Endian • Assigns MSB to least address and LSB to highest address • Ex: 0 × DEADBEEF Memory Location Value Base Address + 0 DE Base Address + 1 AD Base Address + 2 BE Base Address + 3 EF BYTE STORAGE METHODS
  • 27. • Little Endian • Assigns MSB to highest address and LSB to least address • Ex: 0 × DEADBEEF Memory Location Value Base Address + 0 EF Base Address + 1 BE Base Address + 2 AD Base Address + 3 DE BYTE STORAGE METHODS
  • 28. • Little Endian • Intel × 86 family • Digital equipment corporation architectures (PDP – 11, VAX, Alpha) • Big Endian • Sun SPARC • IBM 360 / 370 • Motorola 68000 • Motorola 88000 • Bi-Endian (The ability to switch between big endian and little endian ordering.) • Power PC • MIPS • Intel’s 64 IA - 64 BYTE STORAGE METHODS
  • 29. CONCEPTUAL VIEW OF MEMORY DESIGN • Revisiting Logic Gates
  • 31. CONCEPTUAL VIEW OF MEMORY DESIGN
  • 32. 2-4 Decoder Circuit CONCEPTUAL VIEW OF MEMORY DESIGN
  • 33. CONCEPTUAL VIEW OF MEMORY DESIGN A typical chip Layout
  • 34. Main Memory Structure: • If the Main Memory is structured as collection of physically separate modules - each with it’s own Address Buffer Register (ABR) and Data Buffer Register (DBR), memory access operations…. • It may proceed in more than one module at the same time. • Hence, aggregate rate of transmission of words to and from the memory can be increased. MEMORY INTERLEAVING
  • 35. Interleaving: • Memory Interleaving is an abstraction technique. • Designed to compensate for core memory by spreading memory addresses evenly across memory banks. •Divides memory into a number of modules such that successive words in the address space are placed in the different module. •To implement interleaved structure, there must be 2k modules. (k=lower order k bits) MEMORY INTERLEAVING
  • 36. Distribution Methods in Memory system • Consecutive words in a module • When consecutive locations are accessed, as happens when a block of data is transferred to a cache, only one module is involved. MEMORY INTERLEAVING
  • 37. MEMORY INTERLEAVING Distribution Methods in Memory system • Consecutive words in a Consecutive module • This method is called memory interleaving • Parallel access is possible. Hence, faster • Higher average utilization of the memory system
  • 38. Usage of Memory Interleaving: • Main memory is relatively slower than the cache. • So to improve the access time of the main memory, interleaving is used. • This method uses memory effectively. Classification of Memory Interleaving: (Two address formats for Memory Interleaving) • High Order Interleaving • Low Order Interleaving MEMORY INTERLEAVING
  • 39. High Order Interleaving: • In high-order interleaving, the most significant bits of the address select the memory chip. • The least significant bits are sent as addresses to each chip. • The maximum rate of data transfer is limited by the memory cycle time. MEMORY INTERLEAVING
  • 40. Low Order Interleaving: • In low-order interleaving, the least significant bits select the memory bank (module). • In this, consecutive memory addresses are in different memory modules. • This allows memory access at much faster rates than allowed by the cycle time. MEMORY INTERLEAVING
  • 41. MEMORY INTERLEAVING Benefits of Memory Interleaving • It allows simultaneous access to different modules of memory. • Interleave memory is useful in the system with pipelining and vector processing. • In an interleaved memory, consecutive memory addresses are spread across different memory modules. •Reduce the memory access time by a factor close to the number of memory banks.
  • 42. MEMORY INTERLEAVING Interleaving DRAM • Main memory is usually composed of a collection of DRAM (Dynamic random- access memory) memory chips grouped together to form a memory bank. • The memory banks will be interleaved. • Memory accesses to different banks can proceed in parallel with high throughput. • Memory banks can be allocated a contiguous block of memory addresses, gives an equal performance and access gives far better performance in interleaved layouts.
  • 43. DESIGN OF SCALABLE MEMORY USING RAM’S Memory - Block Diagram
  • 44. DESIGN OF SCALABLE MEMORY USING RAM
  • 45. DESIGN OF SCALABLE MEMORY USING RAM’S RAM Chips • The logic 1 and 0 are normal digital signals. • High impedance state behaves like an open circuit, which means that the output does not carry a signal and has no logic significance. • The unit is in operation only when CS1 = 1 and CS2 = 0. The bar on top of the second select variable indicates that this input is enabled when it is equal to 0. • If the chip select inputs are not enabled, or if they are enabled but the read or write inputs are not enabled, the memory is inhibited (temporarily inaccessible) and its data bus is in a high-impedance state (the output of a buffer is disconnected from the output bus). • When CS1 = 1 and CS2 = 0, the memory can be placed in a write or read mode. • When the WR input is enabled, the memory stores a byte from the data bus into a location specified by the address input lines. • When the RD input is enabled, the content of the selected byte is placed into the data bus. The RD and WR signals control the memory operation as well as the bus buffers associated with the bidirectional data bus .
  • 46. READ ONLY MEMORY (ROM) ROM TYPES
  • 47. READ ONLY MEMORY (ROM) ROM Chips • A ROM chip is organized externally in a similar manner. However, since a ROM can only read, the data bus can only be in an output mode • For the same-size chip, it is possible to have more bits of ROM than of RAM, because the internal binary cells in ROM occupy less space than in RAM. • For this reason, the diagram specifies a 512-byte ROM, while the RAM has only 128 bytes. • The nine address lines in the ROM chip specify any one of the 512 bytes stored in it • The two chip select inputs must be CS1 = 1 and CS2 = 0 for the unit to operate. Otherwise, the data bus is in a high- impedance state. • There is no need for a read or write control because the unit can only read. Thus when the chip is enabled by the two select inputs, the byte selected by the address lines appears on the data bus. . • No R/W signal – Default Read only. • Data Bus - unidirectional
  • 48. CONSTRUCTION OF LARGER SIZE MEMORY • Mix of RAM and ROM • Decoder Circuit • Address Lines • Data Lines • SELECT signal
  • 49. MEMORY CONNECTION TO CPU Address Lines – specifies the No. Rows Data Lines – Specifies the No. of Columns
  • 50. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP • The designer of a computer system must calculate the amount of memory required for the particular application and assign it to either RAM or ROM. • The interconnection between memory and processor is then established from knowledge of the size of memory needed and the type of RAM and ROM chips available. • The addressing of memory can be established by means of a table that specifies the memory address assigned to each chip. • The table, called a memory address map, is a pictorial representation of assigned address space for each chip in the system. To demonstrate with a particular example, assume that a computer system needs 512 bytes of RAM and 512 bytes of ROM. The memory address map for this configuration is shown in Table 1.
  • 51. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP • The component column specifies whether a RAM or a ROM chip is used. The hexadecimal address column assigns a range of hexadecimal equivalent addresses for each chip. • The address bus lines are listed in the third column. Although there are 16 lines in the address bus, the table shows only 10 lines because the other 6 are not used in this example and are assumed to be zero. • The small x's under the address bus lines designate those lines that must be connected to the address inputs in each chip. The RAM chips have 128 bytes and need seven address lines. • The ROM chip has 512 bytes and needs 9 address lines. • The x's are always assigned to the low-order bus lines: • lines 1 through 7 for the RAM and lines 1 through 9 for the ROM.
  • 52. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP • It is now necessary to distinguish between four RAM chips by assigning to each a different address. For this particular example we choose bus lines 8 and 9 to represent four distinct binary combinations. • Note that any other pair of unused bus lines can be chosen for this purpose. The table clearly shows that the nine low- order bus lines constitute a memory space for RAM equal to 29 = 512 bytes. • The distinction between a RAM and ROM address is done with another bus line. Here we choose line 10 for this purpose. When line 10 is 0, the CPU selects a RAM, and when this line is equal to 1, it selects the ROM. • The equivalent hexadecimal address for each chip is obtained from the information under the address bus assignment. The address bus lines are subdivided into groups of four bits each so that each group can be represented with a hexadecimal digit. • The first hexadecimal digit represents lines 13 to 16 and is always 0. The next hexadecimal digit represents lines 9 to 12, but lines 11 and 12 are always 0. • The range of hexadecimal addresses for each component is determined from the x's associated with it. These x's represent a binary number that can range from an all-0's to an all-1's value.
  • 53. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP Bus lines 8 and 9 are used to select one ROM out of 4 RAMs Bus line 10 select RAM or ROM No. of Address line bits (x) is given by N=2x No. of data bus = No. of columns (Word size) Assumptions RAM size: 128 * 8 ----- 7 bit address bits needed, No. of Data Lines --- 8 ROM size: 512 * 8 ------ 9 bit address bits required, No. of Data Lines --- 8 Address Calculation (From – To) -calculated from 16 bit address - Hexadecimal format
  • 54. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP If x=0 ‘From’ address is 0 0 0 0 v v v v X=0 0 0 1 0 0 0 0 0 0 0 0 0 Convert to hexadecimal value 0 2 So, from address for ROM 1 is 0200 0 0 Address Calculation (From Address) -calculated from 16 bit address - Hexadecimal formaStubstitute x=0 to get ‘From’ address
  • 55. v v v X=1 0 0 1 1 1 1 1 1 1 1 1 1 Convert to hexadecimal value 0 3 So, To address for ROM 1 is 03FF F F MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP Address Calculation (To Address) v -calculated from 16 bit address - Hexadecimal formaSt ubstitute x=1 to get ‘To’ address If x=1  ‘To’ address is 0 0 0 0
  • 56. MEMORY DESIGN - MEMORY INTERFACE Address Map • Considerations: N – Number of Words in the chip available (Rows) N’– Required Number of Words in the chip required W – Width/size of a word in the chip available (Columns) W’ – Required Width/size of a word in the chip required
  • 57. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP • Hence, • Available Memory chip Size: N × W • (N – represents ROWS – No. of words) (W– represents COLUMNS – word size) • Required Memory chip size: N’ × W’ where N’≥ N and W’≥ W , (Required>Availa ble) • Required number of chips = p × q where p = N’ / N (No. of rows reqd. ) and q = W’/ W (no. of Cols reqd.) • Address bits - required to map to the row • Decoder used No. of Address line bits (x) is given by N=2x • No. of data bus = No. of columns (Word size)
  • 58. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP There are 3 types of organizations of N’ × W’ that can be formed using N× W • N’ > N and W’ = W => increasing the number of words (Rows - N) in the memory • N’ = N and W’ > W => increasing the word size (Columns – W) of the chip • N’ > N and W’ > W => increasing both N & W (Rows & Columns) the number of words and number of bits in each word.
  • 59. There are different types of organization of N1 x W1 –memory using N x W –bit chips How many 1024x 8 RAM chips are needed to provide a memory capacity of 2048 x 8? NI If > N & WI = W Increase the word size of a Memory by a factor of I f Case 2: NI = N & WI > W NI If > N & WI > W Increase number of words by the factor of p & Increase the word size of a Memory by a factor of q Case 1: Case 3: q = WI W How many 1024x 4 RAM chips are needed to provide a memory capacity of 2048 x 8? NI Increase number of words by the factor of p = N How many 1024x 4 RAM chips are needed to provide a memory capacity of 1024 x 8?
  • 60. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP Problem – 1 (CASE 2) – Increasing the word size (Columns) • Design 128 × 16 (N’ × W’)- bit RAM using 128 × 4(N × W) - bit RAM • Solution: p = 128 / 128 = 1; q = 16 / 4 = 4 • Therefore, No. of chips required is calculated by, p × q = 1 × 4 =4 (i.e. 4 memory chips of size 128 × 4 are required to construct 128 × 16 bit RAM) x – Number of bits required to represent the address lines Y- Number of bits required for Selecting the specific RAM. 128 x 4= 27 x 4 (by N=2x) X=7,Y=0,Z=0 x=7 bit address is required Z – Number of bits to select the RAM or ROM or Interface… No. of Types (T) = 1 Z=0 ((by T=2Z) (p = 2y) 1= 20 y=0 (since only one RAM)
  • 61. Component Hexadecimal address Address Bus From To 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 RAM 1.1 0000 007F x x x x x x x RAM 1.2 0000 007F x x x x x x x RAM 1.3 0000 007F x x x x x x x RAM 1.4 0000 007F x x x x x x x Z – Number of bits to select the RAM or ROM or Interface… Z=0 address line 9 is empty Y- Number of bits required for Selecting the specific RAM Y=0 Only one RAM so address lines 7 and 8 are empty X=7 7 bit address MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 62. Component Address Bus 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 RAM 1.1 RAM 1.2 RAM 1.3 x x x x x x x x x x x x x x Hexadecimal address From To 0000 007F 0000 007F 0000 007F 0 0 0 0 0 0 0 0 0 0 0 0 RAM 1.4 0000 007F v Substitute x=0 to get ‘From’ address If x=0 ‘From’ address is 0 0 0 0 v x x x x x x x xv x x x x v x x X=0 Convert to hexadecimal value 0 0 So, from address of RAM is 0000 0 0 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 63. Component Address Bus 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 RAM 1.1 RAM 1.2 RAM 1.3 x x x x x x x x x x x x x x Hexadecimal address From To 0000 007F 0000 007F 0000 007F RAM 1.4 0000 007F Substitute x=1 to get ‘to’ address If x=1 ‘to’ address is 0 0 0 0 v v x x x x x x x x vx x x x v x x X=1 0 0 0 0 0 1 1 1 1 1 1 1 Convert to hexadecimal value 0 0 So, to address of RAM is 007F 7 F MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 64. CS Data (0-3) R/W Address (0-6) 128 × 4 RAM CS Data (0-3) R/W Address (0-6) 128 × 4 RAM CS Data (0-3) R/W Address (0-6) 128 × 4 RAM CS Data (0-3) R/W Address (0-6) 128 × 4 RAM Data Bus 16 Address Bus 4 4 4 4 Memory design – Increasing the word size Design 128 × 16 (N’ × W’)- bit RAM using 128 × 4 (N × W) - bit RAM 7 Chip Select Read/write Control Since W is increased ---- the 4 chips should be arranged horizontally. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 65. 6 - 0 Data r/w 1 16 4 4 4 4 7 128 x 4 RAM 128 x 4 RAM 128 x 4 RAM 128 x 4 RAM Address Bus MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 66. Problem – 2 (CASE 1) – Increasing the number of words (Rows) Design 1024 × 8 - bit RAM using 256 × 8 - bit RAM Solution: p = 1024 / 256 = 4; q = 8 / 8 = 1 S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 256 × 8 1024 × 8 4 1 4 8 2 0 10 2 3 4 p × q = 4 × 1 = 4  4 memory chips of size 256 × 8 are required to construct 1024 × 8 bit RAM x – Number of bits required to represent the address lines 256 x 8= 28 x 4 x=8 bit address is required Y- Number of bits for Selecting the specific RAM. (p = 2y) (p=422y ) y=2 Z – Number of 0 (select the RAM or ROM) MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 67. X=8 8 bit address lines(from 0 to 7) Y=2 address lines 8 and 9 will select one RAM among 4 RAM Z=0 address line 10 is empty MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 68. Substitute x=0 to get ‘From’ address If x=0 ‘From’ address is v v X=0 Convert to hexadecimal value 0 3 So, from address of RAM is 0300 0 0 v 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 69. Substitute x=1 to get ‘To’ address If x=1 ‘To’ address is v v X=1 Convert to hexadecimal value 0 3 So, from address of RAM is 03FF F F v 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 70. Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS 8 Address Bus A0 – A7 2 × 4 decoder Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS A9 A8 Data Bus R / W 8 0 1 2 3 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 71. Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS 8 Address Bus A0 – A7 Data Bus R / W 8 Address Bus 8 256 × 8 RAM CS Data Bus 8 A9 A8 R / W MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 72. 9 8 7 - 0 256 × 8 RAM 1 256 × 8 RAM 3 Data r/w 2 × 4 Decoder 3 2 1 0 8 8 8 256 × 8 RAM 2 8 256 × 8 RAM 4 8
  • 73. Problem – 3 (CASE 3) – Increasing the Words & Word size (Rows & Columns) Design 256 × 16 – bit RAM using 128 × 8 – bit RAM chips Solution: p = 256 / 128 = 2; q = 16 / 8 = 2 S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 128 × 8 256 × 16 2 2 4 7 1 0 8 2 3 4 p × q = 2 × 2 = 4 x – Number of bits required to represent the address lines 128 x 4= 27 x 4 x=7 bit address is required Y- Number of bits for Selecting the specific RAM. (p = 2y) (p=221y ) y=1 Z – Number of 0 (select the RAM or ROM) 4 chips are required, 2chips rows, 2chips columns MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 74. Substitute x=0 to get ‘From’ address If x=0 ‘From’ address is v v Convert to hexadecimal value 0 0 So, from address of RAM is 0080 8 0 v X=0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 75. Substitute x=1 to get ‘To’ address If x=1 ‘To’ address is v v Convert to hexadecimal value 0 0 So, from address of RAM is 00FF F F v 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 X=1 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 76. 7 6 - 0 Data r/w 1 × 2 Decode r 1 0 Address Bus 8 8 8 8 16 16 128 × 8 RAM 1.1 128 × 8 RAM 1.2 128 × 8 RAM 2.1 128 × 8 RAM 2.2
  • 77. Problem – 4: Design 256 × 16 – bit RAM using 256 × 8 – bit RAM chips and 256 × 8 – bit ROM using 128 × 8 – bit ROM chips. Solution: p = 256 / 256 = 1; S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 256 × 8 256 × 16 1 2 2 8 0 1 9 2 Rom 128 × 8 256 × 8 2 1 2 7 1 1 9 3 4 RAM Chip calculation q = 16 / 8 = 2 p × q = 1 × 2 = 2 RAM2 chips are required 1 row  2 chips (cols) x – Number of bits required to represent the address lines 256 x 8= 28 x 8 x=8 bit address is required Y- Number of bits for Selecting the specific RAM. (p = 2y) (p=120y ) y=0 Z – Number of 1 (select the RAM or ROM) MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 78. S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 256 × 8 256 × 16 1 2 2 8 0 1 9 2 Rom 128 × 8 256 × 8 2 1 2 7 1 1 9 3 4 ROM Chip calculation q = 8 / 8 = 1 p × q = 2 × 1 = 2 ROM2 chips are required 2 chips(rows) , 1 column x – Number of bits required to represent the address lines 128 x 8= 27 x 8 x=7 bit address is required Y- Number of bits for Selecting the specific RAM. (p = 2y) (p=221y ) y=1 Z – Number of 1 (select the RAM or ROM) MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP Problem – 4: Design 256 × 16 – bit RAM using 256 × 8 – bit RAM chips and 256 × 8 – bit ROM using 128 × 8 – bit ROM chips. Solution: p = 256 / 128 = 2;
  • 79. For RAM – No bits required – (No address line used) Y = 1 bit --- For ROM (7 th address line is used) Z = 1 bit MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 80. Substitute x=0 to get ‘From’ address If x=0 ‘From’ address is v v Convert to hexadecimal value 0 1 So, from address of RAM is 0180 8 0 v X=0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 81. Substitute x=1 to get ‘To’ address If x=1 ‘To’ address is v v Convert to hexadecimal value 0 1 So, from address of RAM is 01FF F F v X=1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 82. 8 7 6 - 0 Data r/w 1 × 2 Decode r 1 0 Address Bus 8 8 128 × 8 ROM 1 128 × 8 ROM 2 8 16 8 256 × 8 RAM 1.1 256 × 8 RAM 1.2 1 × 2 Decode r 1 0 ●
  • 83. Problem – 5 • A computer employs RAM chips of 128 x 8 and ROM chips of 512 x 8. The computer system needs 256 x 8 of RAM, 1024 x 16 of ROM, and two interface units with 256 registers each. A memory mapped I/O configuration is used. The two higher -order bits of the address bus are assigned 00 for RAM, 01 for ROM, and 10 for interface registers. • a. Compute total number of decoders are needed for the above system? • b. Design a memory-address map for the above system • c. Show the chip layout for the above design MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 84. • A computer employs RAM chips of 128 x 8 and ROM chips of 512 x 8. The computer system needs 256 x 8 of RAM, 1024 x 16 of ROM, and two interface units with 256 registers each. A memory mapped I/O configuration is used. The two higher -order bits of the address bus are assigned 00 for RAM, 01 for ROM, and 10 for interface registers. MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP q = always 1 for Interfaces (Column)
  • 85. x – Number of bits required to represent the address lines 128 x 8= 27 x 8 x=7 bit address is required Y- Number of bits for Z – Number of 2 Selecting the specific RAM. (select the RAM or (p = 2y) (p=221y ) y=1 ROM or interface) S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 128 × 8 256 × 8 2 1 2 7 1 2 10 2 ROM 512 × 8 1024 × 16 2 2 4 9 1 2 12 3 Interface 256 2 1 2 8 1 2 11 4 1.RAM Solution: p = 256 / 128 = 2; q = 8 / 8 = 1 p × q = 2 × 1 = 2 RAM2 chips are required 2chips(row) 1(col) MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 86. x – Number of bits required to represent the address lines 512 x 8= 29 x 8 x=9 bit address is required Y- Number of bits for Z – Number of 2 Selecting the specific RAM. (select the RAM or (p = 2y) (p=221y ) y=1 ROM or interface) S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 128 × 8 256 × 8 2 1 2 7 1 2 10 2 ROM 512 × 8 1024 × 16 2 2 4 9 1 2 12 3 Interface 256 2 1 2 8 1 2 11 4 2.ROM Solution: p = 1024 / 512 = 2; q = 16 / 8 = 2 p × q = 2 × 2 = 4 RAM4chips are required 2chips(rows), 2chips(cols) MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 87. S.NO Memory N x W N1 x W1 P q p * q x y z Total 1 RAM 128 × 8 256 × 8 2 1 2 7 1 2 10 2 ROM 512 × 8 1024 × 16 2 2 4 9 1 2 12 3 Interface 256 2 1 2 8 1 2 11 4 q is 1 always for interfaces. 2x = 256 (Number of registers) X= 8 3.Interfacep = number of interfaces=2(input is given) 2y = p=2 y= 1 p*q= 2 * 1 = 2 Z – Number of 2 (select the RAM or ROM or interface) 2 interfaces are required 2 interfaces(rows), 1 column MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 88. Component Hexadecimal Address Address Bus From To 15 - 12 11 10 9 8 7 6 5 4 3 2 1 0 RAM1 0000 007F 0 0 0 x x x x x x x RAM2 0200 027F 0 0 1 x x x x x x x ROM1.1 0400 05FF 0 1 0 x x x x x x x x x ROM1.2 0400 05FF 0 1 0 x x x x x x x x x ROM2.1 0600 07FF 0 1 1 x x x x x x x x x ROM2.2 0600 07FF 0 1 1 x x x x x x x x x Interface1 0800 08FF 1 0 0 x x x x x x x x Interface2 0A00 0AFF 1 0 1 x x x x x x x x X=address lines(RAM7,ROM9,interface8 Y=1 Z=2 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 89. A ddress Ch. Select r/w Data 11 10 9 8 7 6 - 0 Data r/w ADDRESS BUS 3 × 8 Decode r 0 1 2 3 4 5 A Ch. ddress Select r/w Data 128 × 8 RAM 1 128 × 8 RAM 2 512 × 8 ROM 1.1 512 × 8 ROM 1.2 512 × 8 ROM 2.1 512 × 8 ROM 2.2 Interface 1 Interface 2
  • 90. Problem – 6 • Suppose that a 2M × 16 RAM memory is built using 256K × 8 RAM chips and 1K x 8 ROM memory is build using 256 x 8 ROM chips with the word addressable memory, find the following: • a) How many RAM chips and ROM chips are necessary? • b) If we were accessing one full word in RAM, how many chips would be involved? • c) How many address bits are needed for each RAM chip? • d) How many memory banks are required? Hint: number of banks are addressable units in main memory and RAM chips • e) If high-order interleaving is used, where would address (1B)16 be located? Also, for a low-order interleaving? MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 91. Problem – 6 - Solution • a) 16 RAMS , 4 ROMS • b) Each RAM chip is 256K × 8, so to access one full word (2 bytes), 2 RAM chips would be involved. • c) The 256K × 8 RAM chip has 18 address bits (2^18 = 256K), so each RAM chip would need 18 address bits. • d) 8 • e) module 0 – word 27 module 3 - word 3 MEMORY DESIGN - MEMORY INTERFACE ADDRESS MAP
  • 92. CACHE MEMORY • Special very high-speed memory • used to speed up and synchronize with high-speed CPU • Cache memory is costlier than main memory or disk memory but more economical than CPU registers extremely fast memory type that acts as a buffer between RAM and the CPU • It holds frequently requested data and instructions so that they are immediately available to the CPU when needed • used to reduce the average time to access data from the Main memory • The cache is a smaller and faster memory that stores copies of the data from frequently used main memory locations
  • 93. CACHE MEMORY: PRINCIPLES • The intent of cache memory is to provide the fastest access to resources without compromising on size and price of the memory. • The processor attempting to read a byte of data, first looks at the cache memory. • If the byte does not exist in cache memory, it searches for the byte in the main memory.
  • 94. TYPES OF CACHE MEMORY • Primary Cache: • very fast and its access time is similar to the processor registers • it is built onto the processor chip • its size is quite small • also known as a level 1 cache and is build using static RAM (SRAM) • Secondary Cache: • The secondary cache or external cache is cache memory that is external to the primary cache • It is located between the primary cache and the main memory
  • 95. CACHE MEMORY: ADVANTAGES & DISADVANTAGES • Advantages of Cache Memory • Faster • Less access time • Disadvantages of Cache Memory • Expensive • Limited capacity
  • 96. Block Placement Direct Mapping Set Associative Block Identification Fully Associative Tag Index Offset Block Replacement FCFS LRU, MRU Update Policies Optimal Write Through Write back Write around Write allocate Cache Memory Management Techniques CACHE MEMORY Management Techniques
  • 97. MAPPING - BASICS In SM to MM – The terms Pages and Frames are used. In MM to CM – The terms Blocks and Lines are used. Pages, Frames, Blocks, Lines – Same Blocks – MM Blocks Cache Lines - Cache Blocks
  • 98. MAPPING - BASICS Mapping 64 words of Main memory on to 16 words of Cache • Main Memory Address 16 Blocks 4 words per Block
  • 99. MAPPING - BASICS 4 words per Block 0 1 2 3 Mapping 64 words of Main memory on to 16 words of Cache • Main Memory Address Each word (0 to 63) – has data – (needs to be transferred) Hence, Main Memory Address(Physical Address) – should point to the individual word in a block. No. of bits needed to store 64 words 16 Blocks Main Memory Address bits (Physical Address Bits – PA Bits): 6 bits
  • 100. MAPPING - BASICS • Main Memory Address No. of bits needed to store 64 words = 6 bits No. of words in the memory: 64 words No. of bits needed to identify the block (total 16 blocks) No. of bits to identify a block in 16 blocks = 4 bits No. of bits to identify a single word Mapping 64 words of Main memory on to 16 words of Cache
  • 101. MAPPING - BASICS • Main Memory Address If the PA is 011111, Then, the corresponding block and the word in that block can be identified with the first 4 bits and last 2 bits respectively. Word 1 2 3 0 Mapping 64 words of Main memory on to 16 words of Cache
  • 102. MAPPING - BASICS • Cache Memory Organization In Main memory, No. of Cache Blocks (Lines) = 4 No. of bits needed to store 4 lines (Cache Blocks) = 2 bits Mapping 64 words of Main memory on to 16 words of Cache MM Size – 16 blocks Cache Size – 4 blocks(Lines) Hence, Mapping is Required
  • 103. CACHE - MEMORY MAPPING TECHNIQUES • Cache mapping defines how a block from the main memory is mapped to the cache memory in case of a cache miss • Cache mapping is a technique by which the contents of main memory are brought into the cache memory. • Direct mapping • Associative mapping • Set-Associative mapping
  • 104. • Direct Mapping • Many to One Mapping • Map to a fixed Cache Line • Associative Mapping • Many to Many Mapping • Map to any Cache Line • Set Associative Mapping CACHE - MEMORY MAPPING TECHNIQUES
  • 105. MAPPING TECHNIQUES - DIRECT MAPPING • A particular block of main memory can map only to a particular line of the cache • Assign each memory block to a specific line in the cache • If a line is previously taken up by a memory block when a new block needs to be loaded, the old block is trashed Cache line number = ( MM Block Address ) % (Number of CM lines)
  • 106. MAPPING TECHNIQUES - DIRECT MAPPING Mapping 64 words of Main memory on to 16 words of Cache • Mapping – Direct Mapping MM Size – 16 blocks Cache Size – 4 blocks(Lines) Hence, Mapping is Required
  • 107. Mapping Techniques - Direct Mapping Cache line number = ( MM Block Address ) % (Number of CM lines)
  • 108. MAPPING TECHNIQUES - DIRECT MAPPING Block is identified by first 4 bits of PA 2 bits identifies a word in a Block How a block is identified in cache with MM Address bits? Cache Blocks/Lines Mapping 64 words of Main memory on to 16 words of Cache Mapping – Direct Mapping MM Blocks Many–to–One Relationship Main Memory Cach e Me m or y
  • 109. MAPPING TECHNIQUES - DIRECT MAPPING Mapping 64 words of Main memory on to 16 words of Cache Word is identified by last 2 bits of PA 2 bits identifies a word in both MM/CM Block Main Memory Cache Memory Cache Blocks/ Lines Line Number/Index Tag bits MM Blocks How a Word in a block is identified in cache with MM Address bits? LINE NUMBER TAG BITS
  • 110. MAPPING TECHNIQUES - DIRECT MAPPING Mapping 64 words of Main memory on to 16 words of Cache 2 bits identifies a word in a Block (LINE OFFSET) MM Blocks Cache Blocks/ Lines Tag bits Line Number(Index)
  • 111. Mapping Techniques - Direct Mapping Why called TAG bits? Tag Bits
  • 112. DIRECT MAPPING – CALCULATION OF PA BITS NEEDED N BITS PA = MM Size = 2N X bits Y bits No. of Blocks = 2X Block Size/Line size = 2Y Tag Line Number Word / Offset in powers of 2 in powers of 2 =MM Size/ Cache Size =Cache Size/ Line Size=Line Size in powers of 2 No. of Blocks = MM Size/ Block Size
  • 113. MAPPING TECHNIQUES - DIRECT MAPPING • Consider a cache consisting of 128 blocks of 16 words each, for total of 2048(2K) words • Assume that the main memory is addressable by 16 bit address. • Main memory is 64K which will be viewed as 4K (4×1024=4096) blocks of 16 words each. • In this block J of the main memory maps on to block J modulo 128 of the cache. Thus main memory blocks 0,128,256,….is loaded into cache is stored at block 0. Block 1,129,257,….are stored at block 1 and so on. PA Bits Calculation: MM Words = 4096*16 words = 65536 words, So, No. of PA bits => 216 => 16 bits BLOCK / LINE bits: No. of MM blocks : 4096, So, No. of bits to represent a block =>212 => 12 bits (Block bits + Tag Bits) LINE OFFSET (WORD) Bits: Each Block – 16 words => No. of bits for 16 words = 24 => 4 bits TAG Bits & Block Bits: No. of Lines in cache (128 blocks) = CACHE size/Block Size = 2048 / 16 = 128. Block bits (128 lines/blocks) => 27 => 7 bits. Tag Bits => Total Block bits – Block bits = 12-7 = 5 bits
  • 114. MAPPING TECHNIQUES - DIRECT MAPPING Need of Replacement Algorithm- In direct mapping, There is no need of any replacement algorithm. This is because a main memory block can map only to a particular line of the cache. Thus, the new incoming block will always replace the existing block (if any) in that particular line.
  • 115. MAPPING TECHNIQUES - DIRECT MAPPING - PROBLEMS Solution: Main Memory: Calculation of PA bits: Physical Address Split: Calculation of Block & Word(offset) bits: Cache Memory: Calculation of Cache Block/ Line Number/Index bits: Calculation of TAG bits
  • 116. MAPPING TECHNIQUES - DIRECT MAPPING - PROBLEMS Not Required Logic:
  • 117. MAPPING TECHNIQUES - DIRECT MAPPING - PROBLEMS
  • 118. PARAMETERS OF CACHE MEMORY ⚫ Cache Hit ⚫ A referenced item is found in the cache by the processor ⚫ Cache Miss ⚫ A referenced item is not present in the cache ⚫ Hit ratio ⚫ Ratio of number of hits to total number of references => number of hits/(number of hits + number of Miss) ⚫ Miss penalty ⚫ Additional cycles required to serve the miss ⚫ Time required for the cache miss depends on both the latency and bandwidth ⚫ Latency – time to retrieve the first word of the block ⚫ Bandwidth – time to retrieve the rest of this block
  • 119. TYPES OF CACHE MISS • Compulsory Miss / Cold Miss • Conflict Miss/ Collision miss/ Interference miss • Capacity Miss – Not because of mapping techniques … because of size of cache.
  • 120. MAPPING TECHNIQUES - DIRECT MAPPING - DRAWBACKS • Drawback – Conflict Miss Conflict Miss Compulsory Miss
  • 121. MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING Relationship • To reduce Conflict Miss ----- Associative Mapping Many–to–Many • A block of main memory can map to any line of the cache that is freely available at that moment. • This makes fully associative mapping more flexible than direct mapping. • All the lines of cache are freely available. • When all the cache lines are occupied, then one of the existing blocks will have to be replaced.
  • 122. MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING • Entire block number bits are used as TAG bits --- (Fully Associative) • No need of specifying the block number. Many–to–Many Relationship
  • 123. PA ADDRESS CALCULATION - DIRECT & ASSOCIATIVE Direct Mapping Associative Mapping
  • 124. MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING Need of Replacement Algorithm: • A replacement algorithm is required. • Replacement algorithm suggests the block to be replaced if all the cache lines are occupied. • Thus, replacement algorithm like FCFS Algorithm, LRU Algorithm etc is employed. Disadvantages: • A replacement algorithm is required. • During retrieval, all the blocks needs to be checked for the data.
  • 125. ASSOCIATIVE MAPPING – CALCULATION OF PA BITS NEEDED N BITS PA = MM Size = 2N X bits Y bits No. of Blocks = 2X Block Size/Line size = 2Y Tag Word / Offset =Line Size in powers of 2 No. of Blocks = MM Size/ Block Size
  • 126. MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING Example Consider a fully associative mapped cache of size 16 KB with block size 256 bytes. The size of main memory is 128 KB. 1.Number of bits in tag 2.Tag directory size Given Cache memory size = 16 KB Block size = Frame size = Line size = 256 bytes Main memory size = 128 KB We consider that the memory is byte addressable. Number of Bits in Physical Address- Size of main memory = 128 KB = 217 bytes Thus, Number of bits in physical address = 17 bits
  • 127. MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING Number of Bits in Block Offset- Block size = 256 bytes = 28 bytes Thus, Number of bits in block offset = 8 bits Number of Bits in Tag- Number of bits in tag= Number of bits in physical address – Number of bits in block offset = 17 bits – 8 bits = 9 bits Thus, Number of bits in tag = 9 bits
  • 128. MAPPING TECHNIQUES – (FULLY) ASSOCIATIVE MAPPING •NUMBER OF LINES IN CACHE • TOTAL NUMBER OF LINES IN CACHE= CACHE SIZE / LINE SIZE • = 16 KB / 256 BYTES • = 16 X 1024 BYTES / 256 BYTES • = 64 LINES •TAG DIRECTORY SIZE • = NUMBER OF LINES IN CACHE X NUMBER OF BITS IN TAG • = 64 X 9 BITS • = 576 BITS • = 72 BYTES • THUS, SIZE OF TAG DIRECTORY = 72 BYTES
  • 129. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING • Combination of direct and associative mapping technique • Cache blocks are grouped into sets and mapping allow block of main memory reside into any block of a specific set
  • 130. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING • Hence contention problem of direct mapping is eased , at the same time , hardware cost is reduced by decreasing the size of associative search. • For a cache with two blocks per set. In this case, memory block 0, 64, 128,…..,4032 map into cache set 0 and they can occupy any two block within this set. • Having 64 sets means that the 6 bit set field of the address determines which set of the cache might contain the desired block. • The tag bits of address must be associatively compared to the tags of the two blocks of the set to check if desired block is present. This is two way associative search. Cache set number = ( MM Block Address ) % (Number of sets in Cache)
  • 131. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING
  • 132. Mapping using Set Associative MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Block number (MM) % number of sets
  • 133. • k = 2 suggests that each set contains two cache lines. It is called as 2-way set associative mapping • Since cache contains 6 lines, so number of sets in the cache = 6 / 2 = 3 sets. • Block ‘j’ of main memory can map to set number (j mod 3) only of the cache. • Within that set, block ‘j’ can map to any cache line that is freely available at that moment. • If all the cache lines are occupied, then one of the existing blocks will have to be replaced. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING
  • 134. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING PA Bits Calculation: MM Words = 4096*16 words = 65536 words, So, No. of PA bits => 216 => 16 bits BLOCK / LINE bits: No. of MM blocks : 4096, So, No. of bits to represent a block =>212 => 12 bits (Block bits + Tag Bits) LINE OFFSET (WORD) Bits: Each Block – 16 words => No. of bits for 16 words = 24 => 4 bits TAG Bits & Block Bits: No. of Lines in cache (128 blocks) = CACHE size/Block Size = 2048 / 16 = 128.
  • 135. • If k = 1, then k-way set associative mapping becomes direct mapping i.e • 1-way Set Associative Mapping ≡ Direct Mapping • If k = Total number of lines in the cache, then k-way set associative mapping becomes fully associative mapping. Need of Replacement Algorithm: • Set associative mapping is a combination of direct mapping and fully associative mapping. • It uses fully associative mapping within each set. • Thus, set associative mapping requires a replacement algorithm.
  • 136. SET ASSOCIATIVE MAPPING – CALCULATION OF PA BITS NEEDED N BITS PA = MM Size = 2N X bits Y bits No. of Blocks = 2X Block Size/Line size = 2Y Tag Set Number Word / Offset =K * (MM Size/ Cache Size) in powers of 2 =No. of sets in powers of 2 (No. of Lines=Cache Size/ Line Size No. of Sets (S)= No.of Lines/K) =Line Size in powers of 2 No. of Blocks = MM Size/ Block Size
  • 137. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Solution: Main Memory: Calculation of PA bits: Physical Address Split: Calculation of TAG bits Example 1 Calculation of Block & Word(offset) bits: Cache Memory: Number of lines and sets PA-(set no+offset) 7-(2+2)=3
  • 138. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 1 - Elaborated
  • 139. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 1 - Elaborated
  • 140. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 1 - Elaborated
  • 141. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 1 - Elaborated
  • 142. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 2
  • 143. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 2
  • 144. MAPPING TECHNIQUES – SET ASSOCIATIVE Physical Address Split: Calculation of Block & Word(offset) bits: Cache Memory: Number of lines and sets Calculation of TAG bits Example 3 28-(12+7)=9 Mapping - Problems 1.P.A split? 2.Tag directory size? 3. Show the format of main memory address Solution: Main Memory: Calculation of PA bits: PA-(set no+offset)
  • 145. MAPPING TECHNIQUES – SET ASSOCIATIVE Mapping - Problems Solution: Calculation of PA bits: Physical Address Split: Calculation of Block & Word(offset) bits: Example 4
  • 146. MAPPING TECHNIQUES – SET ASSOCIATIVE Mapping - Problems Physical Address Split: Solution: Cache Memory: Number of lines and sets Cache size Example 4 (OR) Tag Bits =K * (MM Size/ Cache Size) in powers of 2
  • 147. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS EXAMPLE 5
  • 148. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 6
  • 149. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS EXAMPLE 7
  • 150. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS EXAMPLE 8
  • 151. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 9 • Consider a 32-bit microprocessor that has an on-chip 16-kB four- way set-associative cache. Assume that the cache has a line size of four 32-bit words. Draw a block diagram of this cache showing its organization and how the different address fields are used to determine a cache hit/miss. Identify the set number in cache for mapping the given memory address ABCDE8F8.
  • 152. MAPPING TECHNIQUES – SET ASSOCIATIVE MAPPING - PROBLEMS Example 9 – Solution • Given: 16 Kb cache size. - 4 way set associative. Hence, Line size = 4*32 bit = 16 bytes. • Here, in the question "word" is mentioned and even "Where in the cache is the word from memory location“ is asked. So, word addressing is in use. So, offset bits = 2 for 4 words. • No. of sets=no of lines/p(way) • No. of lines=cache size /line size • Identifying the Set Number: Address -- ABCDE8F8 its binary from is :1010 1011 1100 1101 1110 1000 1111 1000 <1010 1011 1100 1101 1110 10> <00 1111 10> <00> • it is mapped to set number 62 in cache. tag(22 bit) sets( 8 bit) block size(2 bit)
  • 153. Block Placement Direct Mapping Set Associative Block Identification Fully Associative Tag Index Offset Block Replacement FCFS LRU, MRU Update Policies Optimal Write Through Write back Write around Write allocate CACHE MEMORY Management Techniques R Cache Memory Management Techniques Block eplacement Strategies
  • 154. CACHE : BLOCK REPLACEMENT POLICIES • Cache memory size < main memory size • The processor fetches data from the cache to perform its operations • When the required block is not found in the cache, a block is brought in from main memory and an existing block is replaced; this is why cache replacement policies are needed • Replacement policies are used in set-associative and fully associative caches • Cache replacement policies: FIFO, Optimal, LRU, MRU (a small simulation sketch follows the example slides below)
  • 155-156. CACHE : BLOCK REPLACEMENT POLICIES - FIFO
  • 157-158. CACHE : BLOCK REPLACEMENT POLICIES - OPTIMAL
  • 159-160. CACHE : BLOCK REPLACEMENT POLICIES - LRU
  • 161-162. CACHE : BLOCK REPLACEMENT POLICIES - MRU (Most Recently Used)
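  The FIFO, Optimal, LRU and MRU slides above step through reference strings by hand. As a rough companion, the minimal Python sketch below (the reference string, cache capacity and function name are assumed for illustration) counts hits and misses for a fully associative cache under FIFO, LRU and MRU; Optimal is left to hand-working, since it needs knowledge of future references:

```python
from collections import OrderedDict

def simulate(refs, capacity, policy="LRU"):
    """Hit/miss count for a fully associative cache of `capacity` blocks."""
    cache = OrderedDict()              # keys = block numbers, in insertion/recency order
    hits = misses = 0
    for block in refs:
        if block in cache:
            hits += 1
            if policy in ("LRU", "MRU"):
                cache.move_to_end(block)       # refresh recency on a hit
        else:
            misses += 1
            if len(cache) == capacity:
                if policy == "MRU":
                    cache.popitem(last=True)   # evict the most recently used block
                else:                          # FIFO and LRU both evict from the front
                    cache.popitem(last=False)
            cache[block] = None
    return hits, misses

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]          # assumed reference string
for policy in ("FIFO", "LRU", "MRU"):
    print(policy, simulate(refs, capacity=3, policy=policy))
```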
  • 163. CACHE MEMORY MANAGEMENT TECHNIQUES – Update Policies • Block Placement: Direct Mapping, Set Associative, Fully Associative • Block Identification: Tag, Index, Offset • Block Replacement: FCFS, LRU, MRU, Optimal • Update Policies: Write Through, Write Back, Write Around, Write Allocate
  • 164. CACHE MEMORY – UPDATE POLICIES • An update policy determines how the cache and main memory are updated after a write operation • Used on a write HIT: Write Through, Write Back • Used on a write MISS: Write Around, Write Allocate
  • 165. UPDATE POLICY - WRITE-THROUGH • Applies to items currently in the cache (i.e. a write hit) • The system writes to main memory each time as well as to the cache • It is the easiest policy to implement, but it lowers the cache's write performance • It is used when writes to the cache are infrequent
  • 166. UPDATE POLICY - WRITE-BACK/COPY-BACK • Applies to items currently in the cache (i.e. a write hit) • Main memory is not updated until the block containing the altered item is removed from the cache, i.e. on replacement of the cache line
  • 167. UPDATE POLICY - WRITE-AROUND • Applies to items not currently in the cache (i.e. write misses) • The item is updated in main memory only, without affecting the cache
  • 168. UPDATE POLICY - WRITE-ALLOCATE • Applies to items not currently in the cache (i.e. write misses) • The item is updated in main memory and the block containing the updated item is brought into the cache
  • 169. CACHE UPDATE POLICY - SUMMARY • Write Through: on a write hit, update the cache and main memory simultaneously • Write Back: on a write hit, update the cache immediately and update main memory only when the cache line is replaced • Write Around: on a write miss, update main memory without affecting the cache • Write Allocate: on a write miss, update main memory and bring the block into the cache
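  To tie the four policies together, here is a small, assumed Python model (the class and attribute names are illustrative, not from the slides) that applies write-through or write-back on a write hit and write-around on a write miss, with the deferred memory update happening on eviction:

```python
class TinyCache:
    """A toy sketch of cache write policies; not a real cache implementation."""

    def __init__(self, memory, policy="write-back"):
        self.memory = memory          # backing store: address -> value
        self.lines = {}               # cached items: address -> value
        self.dirty = set()            # addresses modified but not yet written back
        self.policy = policy          # "write-through" or "write-back"

    def write(self, addr, value):
        if addr in self.lines:                     # write hit
            self.lines[addr] = value
            if self.policy == "write-through":
                self.memory[addr] = value          # update main memory immediately
            else:                                  # write-back: defer the memory update
                self.dirty.add(addr)
        else:                                      # write miss: write-around
            # update main memory only; write-allocate would also load the
            # block into self.lines here
            self.memory[addr] = value

    def evict(self, addr):
        if addr in self.dirty:                     # write-back happens on eviction
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)
```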
  • 170. PERFORMANCE METRICS • Cache Hit • Hit Ratio • Cache Miss • Miss Ratio • Miss Penalty • Average Memory Access Time
  • 171. PERFORMANCE METRICS • Hit Ratio (h) = Number of references found in the cache / Total number of memory references • Miss Ratio (m) = 1 - Hit Ratio, so Hit Ratio + Miss Ratio = 1 • Notation: TC is the cache access time, TM is the main memory access time, TA is the average access time
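  For example, if 45 of 50 memory references are found in the cache, then h = 45/50 = 0.9 and m = 1 - 0.9 = 0.1.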
  • 172. MEAN MEMORY ACCESS TIME (MMAT) • Average Memory Access Time = Hit ratio * Cache Memory Access Time + (1 – Hit ratio) * (Cache Memory Access Time + Time required to access a block of main memory)
  • 173. LOOK THROUGH • The cache is checked first; only if a miss occurs is the access to main memory started • TC is the cache access time, TM is the main memory access time • TAvg = TC + (1-h) × TM
  • 174. EXAMPLE • Assume that a computer system employs a cache with an access time of 20 ns and a main memory with a cycle time of 200 ns. Suppose that the hit ratio for reads is 90%. • A) What would be the average access time for reads if the cache is a "look-through" cache? • The average read access time: TAvg = TC + (1-h) × TM = 20 ns + 0.10 × 200 ns = 40 ns
  • 175. LOOK ASIDE • The access to main memory is started in parallel with the cache lookup • On a cache hit, the cost is the cache access time TC; on a cache miss, no extra cache time is charged because the main memory access is already under way • TC is the cache access time, TM is the main memory access time • TAvg = (h × TC) + (1-h) × TM
  • 176. EXAMPLE • Assume that a computer system employs a cache with an access time of 20 ns and a main memory with a cycle time of 200 ns. Suppose that the hit ratio for reads is 90%. • B) What would be the average access time for reads if the cache is a "look-aside" cache? • The average read access time in this case: TAvg = (h × TC) + (1-h) × TM = 0.9 × 20 ns + 0.10 × 200 ns = 38 ns
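  Both worked examples can be reproduced in a couple of lines of Python (the variable names are mine):

```python
h, Tc, Tm = 0.90, 20, 200              # hit ratio, cache time (ns), main memory time (ns)
look_through = Tc + (1 - h) * Tm       # cache checked first, memory only on a miss
look_aside = h * Tc + (1 - h) * Tm     # memory access started in parallel with the lookup
print(round(look_through), round(look_aside))   # 40 38
```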
  • 178. MEAN MEMORY ACCESS TIME • TAvg = average memory access time; for a single-level cache, TAvg = hit ratio × cache access time + (1 - hit ratio) × time required to access a block of main memory • For a two-level cache implementation: TAvg = (h1 × TC1) + (1-h1) × h2 × TC2 + (1-h1) × (1-h2) × TM • where h1 and h2 are the hit rates in the L1 and L2 caches, TC1 and TC2 are the times to access information in the L1 and L2 caches, and TM is the time to access information in main memory
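  As an illustration with assumed figures (not taken from the slides): if h1 = 0.9, h2 = 0.8, TC1 = 5 ns, TC2 = 20 ns and TM = 200 ns, then TAvg = 0.9 × 5 + 0.1 × 0.8 × 20 + 0.1 × 0.2 × 200 = 4.5 + 1.6 + 4.0 = 10.1 ns.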
  • 179. A computer system employs a write-back cache with a 90% hit ratio for writes. The cache operates in "Look Through" mode and has an 80% read hit ratio. Reads account for 60% of all memory references and writes account for 40%. If the main memory cycle time is 300 ns and the cache access time is 30 ns, what would be the average access time for all references (reads as well as writes)?
  • 180. A computer system employs a write-back cache with a 90% hit ratio for writes. The cache operates in "Look Aside" mode and has an 80% read hit ratio. Reads account for 60% of all memory references and writes account for 40%. If the main memory cycle time is 300 ns and the cache access time is 30 ns, what would be the average access time for all references (reads as well as writes)? Ans:
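  The slides leave the answers open, so the sketch below only shows how the numbers could be combined under one explicit set of assumptions (a read miss fetches the block from main memory, a write miss is written around to main memory, and write-backs of dirty lines on eviction are ignored); different assumptions about miss handling will give different figures:

```python
Tc, Tm = 30, 300                    # cache and main-memory cycle times (ns)
f_read, f_write = 0.6, 0.4          # fraction of reads / writes
h_read, h_write = 0.8, 0.9          # read / write hit ratios

def avg_access_time(look_through):
    # Look-through pays the cache lookup before going to memory on a miss;
    # look-aside starts the memory access in parallel, so a miss costs only Tm.
    miss_cost = (Tc + Tm) if look_through else Tm
    t_read = h_read * Tc + (1 - h_read) * miss_cost
    t_write = h_write * Tc + (1 - h_write) * miss_cost   # a write hit stays in the cache (write-back)
    return f_read * t_read + f_write * t_write

print(avg_access_time(look_through=True))    # slide 179 scenario
print(avg_access_time(look_through=False))   # slide 180 scenario
```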