Lecture 5
Pipelining of Processors
Computer Architecture
Lecturer: Irfan Ali
Characterize Pipelines
1) Hardware or software implementation – pipelining can be implemented in either software or hardware.
2) Large or Small Scale – Stations in a pipeline can range from simplistic to powerful, and a pipeline can range in length from short to long.
3) Synchronous or asynchronous flow – A synchronous pipeline operates like an assembly line: at a given time, each station is processing some amount of information. An asynchronous pipeline allows a station to forward information at any time.
4) Buffered or unbuffered flow – Either one stage of the pipeline sends data directly to the next, or a buffer is placed between each pair of stages.
5) Finite Chunks or Continuous Bit Streams – The digital information that passes through a pipeline can consist of a sequence of small data items or an arbitrarily long bit stream.
6) Automatic Data Feed or Manual Data Feed – Some implementations of pipelines use a separate mechanism to move information, while other implementations require each stage to participate in moving information.
What is Pipelining
• Pipelining is an implementation technique whereby multiple instructions are
overlapped in execution; it takes advantage of parallelism that exists among
the actions needed to execute an instruction. Today, pipelining is the key
implementation technique used to make fast CPUs.
• A technique used in advanced microprocessors where the microprocessor
begins executing a second instruction before the first has been completed.
• A Pipeline is a series of stages, where some work is done at each stage. The
work is not finished until it has passed through all stages.
• With pipelining, the computer architecture allows the next instructions to be fetched while the processor is performing arithmetic operations, holding them in a buffer close to the processor until each instruction operation can be performed.
Pipelining: It's Natural!
• Laundry Example
• Ann, Brian, Cathy, Dave
each have one load of clothes
to wash, dry, and fold
• Washer takes 30 minutes
• Dryer takes 40 minutes
• “Folder” takes 20 minutes
[Figure: four loads of laundry, A through D]
Sequential Laundry
• Sequential laundry takes 6 hours for 4 loads
• If they learned pipelining, how long would laundry take?
[Figure: sequential laundry timeline. Loads A through D are processed one after another (30 + 40 + 20 minutes each) from 6 PM to midnight; the vertical axis is task order, the horizontal axis is time]
Pipelined Laundry
Start work ASAP
• Pipelined laundry takes 3.5 hours for 4 loads
[Figure: pipelined laundry timeline. Loads A through D overlap; after the first 30-minute wash, the 40-minute dryer paces the line (30, 40, 40, 40, 40, 20 minutes), finishing by 9:30 PM; task order vs. time]
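To make the two totals above concrete, here is a minimal Python sketch (not part of the original slides; the function names are illustrative) that computes both figures from the 30/40/20-minute washer, dryer, and folder times.

    # Washer, dryer, and folder times in minutes, from the laundry example.
    STAGE_MINUTES = [30, 40, 20]
    LOADS = 4

    def sequential_minutes(stages, loads):
        # Each load passes through every stage before the next load starts.
        return loads * sum(stages)

    def pipelined_minutes(stages, loads):
        # Loads enter back to back; once the line is full, the slowest stage
        # (the 40-minute dryer) sets the rate at which loads finish.
        first_load = sum(stages)
        return first_load + (loads - 1) * max(stages)

    print(sequential_minutes(STAGE_MINUTES, LOADS) / 60)   # 6.0 hours
    print(pipelined_minutes(STAGE_MINUTES, LOADS) / 60)    # 3.5 hours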
Pipelining Lessons
• Pipelining doesn’t help the latency of a single task; it helps the throughput of the entire workload
• Pipeline rate is limited by the slowest pipeline stage
• Multiple tasks operating simultaneously
• Potential speedup = number of pipe stages (see the sketch below)
• Unbalanced lengths of pipe stages reduce speedup
• Time to “fill” the pipeline and time to “drain” it reduce speedup
[Figure: the pipelined laundry timeline again (loads A through D, 6 PM to about 9:30 PM), illustrating the lessons above]
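These lessons can be summarized with the usual speedup estimate for an ideally balanced pipeline. The sketch below is an illustration under that assumption (it is not taken from the slide) and shows why the potential speedup only approaches the number of stages once many tasks flow through the pipe.

    def pipeline_speedup(num_stages, num_tasks, stage_time):
        # Speedup of an ideally balanced pipeline over purely sequential execution.
        sequential = num_tasks * num_stages * stage_time
        # The first task needs num_stages cycles to fill the pipe;
        # every later task adds only one more cycle.
        pipelined = (num_stages + num_tasks - 1) * stage_time
        return sequential / pipelined

    print(pipeline_speedup(5, 4, 1))      # 2.5: fill/drain dominates for few tasks
    print(pipeline_speedup(5, 1000, 1))   # ~4.98: approaches the 5-stage limit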
How Pipelines Work
• The pipeline is divided into segments, and each segment can execute its operation concurrently with the other segments. Once a segment completes an operation, it passes the result to the next segment in the pipeline and fetches the next operation from the preceding segment.
Before there was pipelining…
• Single-cycle control: hardwired
– Low CPI (1)
– Long clock period (to accommodate slowest instruction)
• Multi-cycle control: micro-programmed
– Short clock period
– High CPI
[Timeline, time running left to right]
Single-cycle: insn0.(fetch,decode,exec) | insn1.(fetch,decode,exec)
Multi-cycle:  insn0.fetch | insn0.dec | insn0.exec | insn1.fetch | insn1.dec | insn1.exec
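As a rough illustration of this CPI versus clock-period trade-off, the sketch below compares total execution time for the three styles; the cycle times and CPIs are invented for the example rather than taken from the lecture.

    def exec_time_ns(instructions, cpi, clock_period_ns):
        # Classic performance equation: time = instruction count x CPI x clock period.
        return instructions * cpi * clock_period_ns

    N = 1_000_000
    # Assumed numbers: single-cycle has CPI 1 but a long clock; multi-cycle
    # shortens the clock but needs about 5 cycles per instruction; an ideal
    # 5-stage pipeline keeps the short clock with a CPI close to 1.
    print(exec_time_ns(N, cpi=1, clock_period_ns=5))   # single-cycle
    print(exec_time_ns(N, cpi=5, clock_period_ns=1))   # multi-cycle
    print(exec_time_ns(N, cpi=1, clock_period_ns=1))   # pipelined (ideal)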
Example
[Figure: four sample instructions (Instruction 1 through Instruction 4), executed linearly, one after another]
Four Pipelined Instructions
Cycle:    1    2    3    4    5    6    7    8
Insn 1:   IF   ID   EX   M    W
Insn 2:        IF   ID   EX   M    W
Insn 3:             IF   ID   EX   M    W
Insn 4:                  IF   ID   EX   M    W
The first instruction completes after 5 cycles; each of the remaining three completes just 1 cycle after the one before it.
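The staggered table above can be produced mechanically. The sketch below is illustrative (it assumes one instruction enters the pipeline per cycle with no stalls) and prints the stage occupied by each instruction in every cycle.

    STAGES = ["IF", "ID", "EX", "M", "W"]

    def pipeline_table(num_insns):
        # An n-instruction, k-stage pipeline needs n + k - 1 cycles in total.
        total_cycles = num_insns + len(STAGES) - 1
        for i in range(num_insns):
            row = []
            for cycle in range(total_cycles):
                stage = cycle - i          # instruction i enters IF in cycle i
                row.append(STAGES[stage] if 0 <= stage < len(STAGES) else "--")
            print(f"insn {i + 1}: " + " ".join(f"{s:>2}" for s in row))

    pipeline_table(4)   # reproduces the four-instruction table above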
[Figure: instructions in the pipeline stages]
Pipelining
[Timeline, time running left to right]
Multi-cycle:
  insn0.fetch  insn0.dec  insn0.exec  insn1.fetch  insn1.dec  insn1.exec
Pipelined:
  insn0.fetch  insn0.dec    insn0.exec
               insn1.fetch  insn1.dec    insn1.exec
                            insn2.fetch  insn2.dec    insn2.exec
• Start with multi-cycle design
• When insn0 goes from stage 1 to stage 2
… insn1 starts stage 1
• Each instruction passes through all stages
… but instructions enter and leave at faster rate
• Can have as many insns in flight as there are stages
[Figure: CPI of various microarchitectures]
Processor Pipeline Review
[Figure: five-stage processor pipeline. The PC (incremented by +4) indexes the I-cache (instruction cache) in Fetch; the register file is read in Decode; the ALU operates in Execute; the D-cache (data cache) is accessed in Memory; results are written back to the register file in Write-back]
Instruction Pipeline
• To implement pipelining, a designer divides a processor's data path into sections (stages) and places pipeline latches (also called buffers) between each pair of stages.
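One way to picture these latches in software is as one small record per stage boundary, each carrying everything a later stage will need. The field names below are assumptions chosen for illustration, not the actual hardware register layout.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class IF_ID:                       # between Fetch and Decode
        instruction: Optional[dict] = None
        pc_plus_1: int = 0

    @dataclass
    class ID_EX:                       # between Decode and Execute
        opcode: str = "nop"
        reg_a_val: int = 0
        reg_b_val: int = 0
        imm: int = 0
        dest_reg: Optional[int] = None
        pc_plus_1: int = 0

    @dataclass
    class EX_MEM:                      # between Execute and Memory
        opcode: str = "nop"
        alu_result: int = 0
        reg_b_val: int = 0             # carried along as store data
        dest_reg: Optional[int] = None

    @dataclass
    class MEM_WB:                      # between Memory and Write-back
        opcode: str = "nop"
        alu_result: int = 0
        loaded_data: int = 0
        dest_reg: Optional[int] = None

    print(IF_ID(pc_plus_1=1))          # IF_ID(instruction=None, pc_plus_1=1)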
[Figure: unpipelined processor]
[Figure: pipelined five-stage processor]
A Simple Implementation of a RISC Instruction Set
• Every instruction in this RISC subset can be implemented in at most
5 clock cycles. The 5 clock cycles are as follows:
Instruction Fetch (IF)
• The Instruction Fetch (IF) stage is responsible for obtaining the requested instruction from memory. The instruction and the program counter (which is incremented to point to the next instruction) are stored in the IF/ID pipeline register as temporary storage so that they may be used in the next stage at the start of the next clock cycle.
• Send the program counter (PC) to memory and fetch the current instruction from memory. Update the PC to the next sequential PC by adding 4 (since each instruction is 4 bytes) to the PC.
Stage 1: Fetch
• Fetch an instruction from memory every cycle
– Use PC to index memory
– Increment PC (assume no branches for now)
• Write state to the pipeline register (IF/ID)
– The next stage will read this pipeline register
Stage 1: Fetch Diagram
Instructio
n
bits
IF /ID
Pipelineregister
Instruction
Cache
PC
en
en
1
+
M
U
X
PC+1
Decode
target
21
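A minimal Python sketch of this fetch step (instruction memory as a list, word-sized PC+1 addressing as in the diagram; the dictionary encoding and all names are assumptions made for illustration).

    # Toy instruction memory: each entry stands in for one encoded instruction.
    imem = [
        {"op": "add", "rd": 1, "rs1": 2, "rs2": 3},
        {"op": "lw",  "rd": 4, "rs1": 1, "imm": 0},
    ]

    def fetch(pc, if_id):
        # Read imem[pc], then latch the instruction and PC+1 into IF/ID.
        if pc < len(imem):
            if_id["instruction"] = imem[pc]
            if_id["pc_plus_1"] = pc + 1
            return pc + 1                # next PC (no branches yet, as on the slide)
        if_id["instruction"] = None      # nothing left to fetch: insert a bubble
        return pc

    if_id = {}
    pc = fetch(0, if_id)
    print(pc, if_id)                     # 1 {'instruction': {...}, 'pc_plus_1': 1}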
Instruction Decode
• The Register Fetch (REG) and Instruction Decode (ID) stage is responsible for decoding the instruction and sending out the various control lines to the other parts of the processor. The instruction is sent to the control unit, where it is decoded, and the registers are fetched from the register file.
Stage 2: Decode
• Decodes opcode bits
– Set up Control signals for later stages
• Read input operands from register file
– Specified by decoded instruction bits
• Write state to the pipeline register (ID/EX)
– Opcode
– Register contents
– PC+1 (even though decode didn’t use it)
– Control signals (from insn) for opcode and destReg
Stage 2: Decode Diagram
ID /EX
Pipelineregister
regA
content
s
regB
content
s
Register File
regA
regB
en
Instructio
n
bits
IF /ID
Pipelineregister
PC+1
PC+1
Control
signals
Fetch
Execute
destReg
data
target
24
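A matching sketch of the decode step, using the same illustrative dictionary encoding as the fetch sketch above; the register-file contents are made up for the example.

    regfile = [0, 10, 20, 30, 0, 0, 0, 0]        # R0..R7, toy values

    def decode(if_id, id_ex):
        # Read the source registers and set up control for the later stages.
        insn = if_id.get("instruction")
        if insn is None:                         # bubble: pass a nop along
            id_ex["opcode"] = "nop"
            return
        id_ex["opcode"]    = insn["op"]
        id_ex["reg_a_val"] = regfile[insn.get("rs1", 0)]
        id_ex["reg_b_val"] = regfile[insn.get("rs2", 0)]
        id_ex["imm"]       = insn.get("imm", 0)
        id_ex["dest_reg"]  = insn.get("rd")
        id_ex["pc_plus_1"] = if_id.get("pc_plus_1", 0)

    if_id = {"instruction": {"op": "add", "rd": 1, "rs1": 2, "rs2": 3}, "pc_plus_1": 1}
    id_ex = {}
    decode(if_id, id_ex)
    print(id_ex)      # opcode 'add', reg_a_val 20, reg_b_val 30, dest_reg 1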
Execution
• The Effective Address/Execution (EX) stage is where any calculations are performed. The main component in this stage is the ALU, which provides arithmetic and logic capabilities.
• The ALU operates on the operands prepared in the prior cycle, performing one of
three functions depending on the instruction type.
■ Memory reference—The ALU adds the base register and the offset to form
the effective address.
■ Register-Register ALU instruction—The ALU performs the operation
specified by the ALU opcode on the values read from the register file.
■ Register-Immediate ALU instruction—The ALU performs the operation specified by the ALU
opcode on the first value read from the register file and the sign-extended immediate.
• In a load-store architecture the effective address and execution cycles can be
combined into a single clock cycle, since no instruction needs to simultaneously
calculate a data address and perform an operation on the data.
Stage 3: Execute
• Perform ALU operations
– Calculate result of instruction
• Control signals select operation
• Contents of regA used as one input
• Either regB or constant offset (from insn) used as second input
– Calculate PC-relative branch target
• PC+1+(constant offset)
• Write state to the pipeline register (EX/Mem)
– ALU result, contents of regB, and PC+1+offset
– Control signals (from insn) for opcode and destReg
Stage 3: Execute Diagram
ID /EX
Pipelineregister
regA
content
s
regB
content
s
ALU
resul
t
EX/Mem
Pipelineregister
PC+1
Control
signals
Control
signals
PC+1
+offse
t
+
regB
content
s
A
L
UM
U
X
Decode
Memory
destReg
data
target
27
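A sketch of the execute step covering the three cases listed earlier (effective-address calculation for loads and stores, register-immediate, and register-register operations); the opcode names lw, sw, add, and addi are assumptions for the example.

    def execute(id_ex, ex_mem):
        # The ALU computes a result or an effective address; an adder forms the branch target.
        op  = id_ex.get("opcode", "nop")
        a   = id_ex.get("reg_a_val", 0)
        b   = id_ex.get("reg_b_val", 0)
        imm = id_ex.get("imm", 0)
        if op in ("lw", "sw"):            # memory reference: base register + offset
            result = a + imm
        elif op == "addi":                # register-immediate
            result = a + imm
        elif op == "add":                 # register-register
            result = a + b
        else:
            result = 0
        ex_mem["opcode"]        = op
        ex_mem["alu_result"]    = result
        ex_mem["reg_b_val"]     = b       # store data travels onward
        ex_mem["dest_reg"]      = id_ex.get("dest_reg")
        ex_mem["branch_target"] = id_ex.get("pc_plus_1", 0) + imm

    id_ex  = {"opcode": "add", "reg_a_val": 20, "reg_b_val": 30, "dest_reg": 1}
    ex_mem = {}
    execute(id_ex, ex_mem)
    print(ex_mem["alu_result"])           # 50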
Memory and IO
• The Memory Access and IO (MEM) stage is responsible for storing and loading values to and from memory. It is also responsible for input and output from the processor. If the current instruction is not a memory or IO instruction, then the result from the ALU is passed through to the write-back stage.
• If the instruction is a load, the memory does a read using the effective address computed in the previous cycle. If it is a store, the memory writes the data from the second register read from the register file using the effective address.
Stage 4: Memory
• Perform data cache access
– ALU result contains address for LD or ST
– Opcode bits control R/W and enable signals
• Write state to the pipeline register (Mem/WB)
– ALU result and Loaded data
– Control signals (from insn) for opcode and destReg
Stage 4: Memory Diagram
ALU
resul
t
Mem/WB
Pipelineregister
ALU
resul
t
EX/Mem
Pipelineregister
Control
signals
PC+1
+offse
t
regB
content
s
Loade
d
data
Control
signals
Execute
Write-
back
in_data
in_addr
Data Cache
en R/W
destReg
data
target
30
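A sketch of the memory step: a toy word-addressed data memory that loads read and stores write, while every other instruction simply passes its ALU result along. Names are illustrative.

    dmem = [0] * 32                       # toy word-addressed data memory

    def memory(ex_mem, mem_wb):
        op   = ex_mem.get("opcode", "nop")
        addr = ex_mem.get("alu_result", 0)
        mem_wb["opcode"]      = op
        mem_wb["alu_result"]  = addr      # non-memory results pass straight through
        mem_wb["dest_reg"]    = ex_mem.get("dest_reg")
        mem_wb["loaded_data"] = 0
        if op == "lw":                    # load: read using the effective address
            mem_wb["loaded_data"] = dmem[addr]
        elif op == "sw":                  # store: write the regB value
            dmem[addr] = ex_mem.get("reg_b_val", 0)

    ex_mem = {"opcode": "sw", "alu_result": 4, "reg_b_val": 99, "dest_reg": None}
    mem_wb = {}
    memory(ex_mem, mem_wb)
    print(dmem[4])                        # 99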
Write Back
• The Write Back (WB) stage is responsible for writing
the result of a calculation, memory access or input
into the register file.
• Register-Register ALU instruction or load instruction: write the result into the register file, whether it comes from the memory system (for a load) or from the ALU (for an ALU instruction).
Stage 5: Write-back
• Writing result to register file (if required)
– Write Loaded data to destReg for LD
– Write ALU result to destReg for arithmetic insn
– Opcode bits control register write enable signal
Stage 5: Write-back Diagram
ALU
resul
t
Mem/WB
Pipelineregister
Control
signals
Loade
d
data
M
U
X
data
destReg
M
U
X
Memory
33
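A sketch of the write-back step, selecting between the loaded data and the ALU result before writing the destination register. Treating register 0 as a hardwired zero (as in the register list on the final datapath figure) is an assumption of this sketch.

    regfile = [0] * 8

    def write_back(mem_wb):
        op   = mem_wb.get("opcode", "nop")
        dest = mem_wb.get("dest_reg")
        if dest is None or dest == 0:     # bubbles write nothing; R0 stays 0
            return
        if op == "lw":                    # load: write what came from memory
            regfile[dest] = mem_wb.get("loaded_data", 0)
        elif op in ("add", "addi"):       # ALU instruction: write the ALU result
            regfile[dest] = mem_wb.get("alu_result", 0)
        # stores (and nops) write nothing back

    write_back({"opcode": "add", "alu_result": 50, "dest_reg": 1})
    print(regfile[1])                     # 50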
Putting It All Together
[Figure: the complete five-stage pipelined datapath. PC, instruction cache, register file (R0 through R7, with R0 fixed at 0), ALU, and data cache are joined by the IF/ID, ID/EX, EX/Mem, and Mem/WB pipeline registers, which carry op, dest, valA, valB, offset, PC+1, branch target, eq?, ALU result, and loaded memory data (mdata) between stages]
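Finally, the per-stage sketches can be strung together into a toy cycle-by-cycle simulator of the datapath in the figure. This is only an illustration: the stages are evaluated back to front so that each pipeline register is read before it is overwritten, and there is no hazard detection or forwarding, so the example program spaces its dependent instructions apart with nops.

    imem = [
        {"op": "addi", "rd": 1, "rs1": 0, "imm": 7},
        {"op": "addi", "rd": 2, "rs1": 0, "imm": 5},
        {"op": "nop"},
        {"op": "nop"},
        {"op": "add",  "rd": 3, "rs1": 1, "rs2": 2},
    ]
    regfile = [0] * 8
    dmem = [0] * 32
    NOP = {"opcode": "nop"}
    if_id, id_ex, ex_mem, mem_wb = {}, dict(NOP), dict(NOP), dict(NOP)
    pc = 0

    for cycle in range(len(imem) + 4):             # enough cycles to drain the pipe
        # Write-back (runs first so a register written this cycle is readable by Decode).
        dest = mem_wb.get("dest_reg")
        if mem_wb.get("opcode") in ("add", "addi") and dest:
            regfile[dest] = mem_wb["alu_result"]
        elif mem_wb.get("opcode") == "lw" and dest:
            regfile[dest] = mem_wb["loaded_data"]
        # Memory.
        mem_wb = dict(ex_mem)
        if ex_mem.get("opcode") == "lw":
            mem_wb["loaded_data"] = dmem[ex_mem["alu_result"]]
        elif ex_mem.get("opcode") == "sw":
            dmem[ex_mem["alu_result"]] = ex_mem["reg_b_val"]
        # Execute.
        op = id_ex.get("opcode", "nop")
        a, b, imm = id_ex.get("reg_a_val", 0), id_ex.get("reg_b_val", 0), id_ex.get("imm", 0)
        result = a + b if op == "add" else a + imm if op in ("addi", "lw", "sw") else 0
        ex_mem = {"opcode": op, "alu_result": result,
                  "reg_b_val": b, "dest_reg": id_ex.get("dest_reg")}
        # Decode.
        insn = if_id.get("instruction")
        if insn:
            id_ex = {"opcode": insn["op"],
                     "reg_a_val": regfile[insn.get("rs1", 0)],
                     "reg_b_val": regfile[insn.get("rs2", 0)],
                     "imm": insn.get("imm", 0),
                     "dest_reg": insn.get("rd")}
        else:
            id_ex = dict(NOP)
        # Fetch.
        if pc < len(imem):
            if_id = {"instruction": imem[pc], "pc_plus_1": pc + 1}
            pc += 1
        else:
            if_id = {}

    print(regfile[1], regfile[2], regfile[3])      # 7 5 12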