Chapter 12
Instruction Pipelining
Basic Concepts
Making the Execution of
Programs Faster
 Use faster circuit technology to build the
processor and the main memory.
 Arrange the hardware so that more than one
operation can be performed at the same time.
 In the latter way, the number of operations
performed per second is increased even
though the elapsed time needed to perform
any one operation is not changed.
Traditional Pipeline Concept
Laundry Example
Ann, Brian, Cathy, Dave
each have one load of clothes
to wash, dry, and fold
Washer takes 30 minutes
Dryer takes 40 minutes
“Folder” takes 20 minutes
Traditional Pipeline Concept
 Sequential laundry takes 6
hours for 4 loads
 If they learned pipelining,
how long would laundry
take?
[Figure: sequential laundry timeline, 6 PM to midnight; each load's 30-40-20 minute wash-dry-fold sequence runs back to back.]
Traditional Pipeline Concept
Pipelined laundry takes
3.5 hours for 4 loads
[Figure: pipelined laundry timeline, 6 PM to 9:30 PM, tasks A-D in order; after the first 30-minute wash, the 40-minute dryer stage paces all four loads, followed by a final 20-minute fold.]
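The laundry numbers on these slides can be checked with a short calculation (a sketch; stage times in minutes as given above):

```python
# Stage times in minutes, as given on the slides.
WASH, DRY, FOLD = 30, 40, 20
loads = 4

# Sequential: each load finishes washing, drying, and folding
# before the next load starts.
sequential = loads * (WASH + DRY + FOLD)   # 4 * 90 = 360 min = 6 hours

# Pipelined: the dryer (40 min) is the slowest stage and sets the pace.
# The first load washes for 30 min, every load then occupies the dryer
# for 40 min, and the last load still needs 20 min of folding at the end.
pipelined = WASH + loads * DRY + FOLD      # 30 + 160 + 20 = 210 min = 3.5 hours

print(sequential / 60, pipelined / 60)     # 6.0 3.5
```

Note that the slowest stage (the dryer), not the sum of all stages, determines the steady-state rate of the pipeline.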
Instruction Pipelining
 First stage fetches the instruction and
buffers it.
 When the second stage is free, the first
stage passes it the buffered instruction.
 While the second stage is executing the
instruction, the first stage takes
advantage of any unused memory cycles
to fetch and buffer the next instruction.
 This is called instruction prefetch or
fetch overlap.
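The benefit of fetch overlap can be sketched with a cycle count (an idealization, assuming fetch and execute each take one cycle):

```python
def two_stage_cycles(n, overlap=True):
    """Clock cycles to run n instructions on a fetch/execute machine.

    With instruction prefetch (fetch overlap), the fetch of instruction
    i+1 proceeds while instruction i executes, so only the first fetch
    is exposed.  Assumes fetch and execute each take one cycle.
    """
    if overlap:
        return n + 1      # one fetch up front, then one completion per cycle
    return 2 * n          # fetch and execute strictly in sequence

print(two_stage_cycles(8, overlap=False), two_stage_cycles(8))  # 16 9
```

In practice the gain is smaller, for exactly the two reasons given on the next slide.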
Inefficiency in two-stage
instruction pipelining
There are two reasons:
• The execution time will generally be longer than the
fetch time. Thus the fetch stage may have to wait for
some time before it can empty its buffer.
• When a conditional branch occurs, the address of the
next instruction to be fetched becomes unknown. The
execution stage then has to wait while the next
instruction is fetched.
Two stage instruction pipelining
[Figure: two-stage instruction pipelining. Simplified view: instructions flow through Fetch and Execute stages, with the result leaving Execute. Expanded view: Fetch waits when its buffer is full; on a branch, Execute supplies a new address, the buffered instruction is discarded, and Fetch waits.]
Use the Idea of Pipelining in a
Computer
[Figure 8.1. Basic idea of instruction pipelining (Fetch + Execution): (a) sequential execution of I1, I2, I3 (F1 E1, F2 E2, F3 E3 one after another); (b) hardware organization, with an instruction fetch unit and an execution unit separated by interstage buffer B1; (c) pipelined execution, where fetch and execute of successive instructions overlap across clock cycles 1 to 4.]
Decomposition of instruction
processing
To gain further speedup, the pipeline is given
more stages (six stages):
 Fetch instruction(FI)
 Decode instruction(DI)
 Calculate operands (i.e. EAs)(CO)
 Fetch operands(FO)
 Execute instructions(EI)
 Write operand(WO)
Use the Idea of Pipelining in a
Computer
[Figure 8.2. A 4-stage pipeline (Fetch + Decode + Execute + Write): (a) instruction execution divided into four steps: F, fetch instruction; D, decode instruction and fetch operands; E, execute operation; W, write results; instructions I1 to I4 flow through the stages over clock cycles 1 to 7; (b) hardware organization, with interstage buffers B1, B2, B3. Textbook page: 457.]
SIX STAGES OF INSTRUCTION PIPELINING
 Fetch Instruction(FI)
Read the next expected instruction into a buffer.
 Decode Instruction(DI)
Determine the opcode and the operand specifiers.
 Calculate Operands(CO)
Calculate the effective address of each source operand.
 Fetch Operands(FO)
Fetch each operand from memory. Operands in registers
need not be fetched.
 Execute Instruction(EI)
Perform the indicated operation and store the result.
 Write Operand(WO)
Store the result in memory.
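Under the ideal assumptions of the timing diagram (equal stage durations, no conflicts), the diagram itself can be generated: instruction i simply occupies stage s at cycle i + s. A small sketch:

```python
STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

def timing(n_instructions):
    """Ideal 6-stage timing diagram as a dict: (instruction, stage) -> cycle.

    Assumes equal stage durations, no conflicts, and full parallelism
    between stages, as on the slides.
    """
    return {(i + 1, stage): i + k + 1
            for i in range(n_instructions)
            for k, stage in enumerate(STAGES)}

t = timing(9)
print(t[(1, "WO")], t[(9, "WO")])  # instruction 1 done at cycle 6, instruction 9 at cycle 14
```

With n instructions and 6 stages, the last instruction completes at cycle n + 5, versus 6n cycles without pipelining.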
Timing diagram for instruction pipeline
operation
High efficiency of instruction pipelining
Assume all of the following in the diagram:
• All stages are of equal duration.
• Each instruction goes through all six stages
of the pipeline.
• All the stages can be performed in parallel.
• No memory conflicts.
• All the accesses occur simultaneously.
 Under these assumptions, the instruction
pipeline works very efficiently and gives high
performance.
Limits to performance enhancement
The factors affecting the performance are:
1. If the six stages are not of equal duration, there will be
some waiting time at various stages.
2. Conditional branch instructions, which can invalidate
several instruction fetches.
3. Interrupts, which are unpredictable events.
4. Register and memory conflicts.
5. The CO stage may depend on the contents of a register that
could be altered by a previous instruction that is still in the
pipeline.
Effect of conditional branch on
instruction pipeline operation
Conditional branch instructions
 Assume that instruction 3 is a conditional branch
to instruction 15.
 Until the branch is executed, there is no way of
knowing which instruction will come next.
 The pipeline simply loads the next instruction in
sequence and executes it.
 The branch is not determined until the end of time unit 7.
 During time unit 8, instruction 15 enters the
pipeline.
 No instruction completes during time units 9 through
12.
 This is the performance penalty incurred because we
could not anticipate the branch.
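These time units can be reproduced with a small calculation (a sketch, assuming the six-stage pipeline above, one cycle per stage, and the branch resolved at the end of its EI stage):

```python
STAGES = 6  # FI, DI, CO, FO, EI, WO

def completion(fetch_cycle):
    """Cycle in which an instruction completes (WO), given its fetch cycle,
    in an ideal six-stage pipeline with one cycle per stage."""
    return fetch_cycle + STAGES - 1

# I1..I3 are fetched in cycles 1..3.  I3 is a branch resolved in EI
# (end of cycle 7), so the speculatively fetched I4..I7 are discarded
# and instruction 15 is fetched in cycle 8.
done = {f"I{i}": completion(i) for i in (1, 2, 3)}
done["I15"] = completion(8)
print(done)  # I1: 6, I2: 7, I3: 8, I15: 13 -> nothing completes in cycles 9-12
```

The gap between I3 finishing (cycle 8) and I15 finishing (cycle 13) is exactly the four empty time units 9 through 12 described above.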
Six-stage CPU instruction pipeline
Role of Cache Memory
 Each pipeline stage is expected to complete in one
clock cycle.
 The clock period should be long enough to let the
slowest pipeline stage complete.
 Faster stages have to wait for the slowest one to
complete.
 Since main memory is very slow compared to the
processor, if each instruction had to be fetched
from main memory, the pipeline would be almost useless.
 Fortunately, we have caches.
Pipeline Performance
 The potential increase in performance
resulting from pipelining is proportional to the
number of pipeline stages.
 However, this increase would be achieved
only if all pipeline stages require the same
time to complete, and there is no interruption
throughout program execution.
 Unfortunately, this is not true.
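The "proportional to the number of pipeline stages" claim can be made precise with the standard ideal-pipeline formula (a sketch; it assumes equal stage times and no stalls):

```python
def speedup(k, n):
    """Ideal speedup of a k-stage pipeline over sequential execution
    of n instructions, assuming equal stage times and no interruptions:

        sequential time = n * k        cycles
        pipelined  time = k + (n - 1)  cycles
    """
    return (n * k) / (k + n - 1)

print(round(speedup(6, 9), 2))        # modest for a short instruction stream
print(round(speedup(6, 10_000), 3))   # approaches k = 6 for a long one
```

As n grows, the speedup approaches k, which is why the potential gain is said to be proportional to the number of stages; hazards and unequal stage times keep real pipelines below this bound.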
Pipeline Performance
[Figure 8.3. Effect of an execution operation taking more than one clock cycle: I2's execute step occupies the E stage for several cycles, delaying the decode, execute, and write steps of the following instructions; I1 to I5 complete over clock cycles 1 to 9.]
Quiz
 Four instructions; I2 takes two clock
cycles to execute. Please draw the figure for a 4-
stage pipeline, and work out the total cycles
needed for the four instructions to complete.
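One way to check an answer to such quizzes is a small in-order pipeline simulator (a sketch; it assumes a stage is entered only when the instruction has left the previous stage and the previous instruction has vacated it):

```python
def pipeline_finish(durations):
    """Completion cycle of each instruction in an in-order pipeline.

    durations[i][s] = number of cycles instruction i spends in stage s.
    An instruction enters a stage only when it has finished the previous
    stage and the previous instruction has vacated this stage.
    """
    n, k = len(durations), len(durations[0])
    start = [[0] * k for _ in range(n)]
    for i in range(n):
        for s in range(k):
            after_prev_stage = start[i][s - 1] + durations[i][s - 1] if s else 1
            after_prev_instr = start[i - 1][s] + durations[i - 1][s] if i else 1
            start[i][s] = max(after_prev_stage, after_prev_instr)
    return [start[i][-1] + durations[i][-1] - 1 for i in range(n)]

# The quiz: 4-stage pipeline (F, D, E, W); I2's E stage takes two cycles.
dur = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
print(pipeline_finish(dur))  # [4, 6, 7, 8] -> all four done after 8 cycles
```

The extra execute cycle of I2 turns into one stall cycle for every instruction behind it, so the total grows from the ideal 7 cycles to 8.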
Pipeline Performance
 The previous pipeline is said to have been stalled for two clock
cycles.
 Any condition that causes a pipeline to stall is called a hazard.
 Data hazard –
any condition in which either the source or the destination operands of an
instruction are not available at the time expected in the pipeline. Some
operation then has to be delayed, and the pipeline stalls.
 Instruction (control) hazard –
a delay in the availability of an instruction causes the pipeline to stall.
 Structural hazard –
the situation when two instructions require the use of a given hardware
resource at the same time.
Pipeline Performance
[Figure 8.4. Pipeline stall caused by a cache miss in F2: (a) instruction execution steps in successive clock cycles 1 to 9, with the fetch of I2 taking three cycles; (b) function performed by each processor stage in successive clock cycles, showing the idle periods (stalls, or bubbles) in the D, E, and W stages while F2 repeats. This is an instruction hazard.]
Pipeline Performance
[Figure 8.5. Effect of a Load instruction on pipeline timing: I2 is Load X(R1), R2, whose memory access step M2 occupies an extra cycle and delays the instructions behind it, an example of a structural hazard (instructions I1 to I5, clock cycles 1 to 7).]
Pipeline Performance
 Again, pipelining does not result in individual
instructions being executed faster; rather, it is the
throughput that increases.
 Throughput is measured by the rate at which
instruction execution is completed.
 Pipeline stall causes degradation in pipeline
performance.
 We need to identify all hazards that may cause the
pipeline to stall and to find ways to minimize their
impact.
Data Hazards Example
 We must ensure that the results obtained when instructions are
executed in a pipelined processor are identical to those obtained
when the same instructions are executed sequentially.
 Hazard occurs
A ← 3 + A
B ← 4 × A
 No hazard
A ← 5 × C
B ← 20 + C
 When two operations depend on each other, they must be
executed sequentially in the correct order.
 Another example:
Mul R2, R3, R4
Add R5, R4, R6
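The first pair above can be made concrete with a toy register model (a sketch; the initial value A = 5 is chosen for illustration and is not from the slide):

```python
# Hypothetical register values; only the ordering of reads and writes matters.
regs = {"A": 5}

# Correct, sequential order: B reads the updated A.
regs["A"] = 3 + regs["A"]       # A <- 3 + A  = 8
b_correct = 4 * regs["A"]       # B <- 4 * A  = 32

# What happens if the second instruction reads A too early, before the
# first instruction has written its result back (the data hazard):
regs = {"A": 5}
stale_a = regs["A"]             # operand fetched before the write-back
regs["A"] = 3 + regs["A"]
b_wrong = 4 * stale_a           # 20, not 32

print(b_correct, b_wrong)       # 32 20
```

The second pair (A and B both computed from C) has no such ordering constraint, which is why it can overlap freely in the pipeline.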
Data Hazards
[Figure 8.6. Pipeline stalled by data dependency between D2 and W1: the decode/operand-fetch step of the Add (D2) cannot complete until the Mul writes R4 in W1, so D2 is stretched over several cycles and I2 to I4 are delayed (clock cycles 1 to 9).]
Handling Data Hazards in
Software
 Let the compiler detect and handle the
hazard:
I1: Mul R2, R3, R4
NOP
NOP
I2: Add R5, R4, R6
 The compiler can reorder the instructions to
perform some useful work during the NOP
slots.
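Such a compiler pass can be sketched as a scan that pads with NOPs whenever an instruction reads a register written by a recent predecessor (a toy model; it assumes, as in the example above, that two intervening instructions are enough to resolve the hazard, and uses the destination-last operand order of the slides):

```python
def insert_nops(program, distance=2):
    """Naive compiler pass: insert NOPs so that no instruction reads a
    register written by one of the previous `distance` instructions.

    Instructions are (text, dest, sources) triples; `distance` models how
    many instructions must separate the writer and the reader.
    """
    out = []
    for text, dest, srcs in program:
        recent = [d for (_, d, _) in out[-distance:] if d is not None]
        while any(s in recent for s in srcs):
            out.append(("NOP", None, ()))
            recent = [d for (_, d, _) in out[-distance:] if d is not None]
        out.append((text, dest, srcs))
    return [text for text, _, _ in out]

# The example above: Mul writes R4, which Add reads immediately after.
prog = [("Mul R2,R3,R4", "R4", ("R2", "R3")),
        ("Add R5,R4,R6", "R6", ("R5", "R4"))]
print(insert_nops(prog))  # ['Mul R2,R3,R4', 'NOP', 'NOP', 'Add R5,R4,R6']
```

A real compiler would then try to fill those NOP slots with independent instructions instead of leaving them empty.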
Thank You