Chapter 12, William Stallings, Computer Organization and Architecture, 7th Edition
CPU Structure and Function
CPU Function
• CPU must:
– Fetch instructions
– Interpret/decode instructions
– Fetch data
– Process data
– Write data
CPU With System Bus
Registers
• CPU must have some working space
(temporary storage) - registers
• Number and function vary between
processor designs - one of the major
design decisions
• Top level of memory hierarchy
User Visible Registers
• General Purpose
• Data
• Address
• Condition Codes
General Purpose Registers (1)
• May be true general purpose
• May be restricted
• May be used for data or addressing
• Data: accumulator (AC)
• Addressing: segment (cf. virtual memory),
stack (points to top of stack, cf. implicit
addressing)
General Purpose Registers (2)
• Make them general purpose
– Increased flexibility and programmer options
– Increased instruction size & complexity,
addressing
• Make them specialized
– Smaller (faster) but more instructions
– Less flexibility, addresses implicit in opcode
How Many GP Registers?
• Typically between 8 and 32
• Fewer registers mean more memory references
• More registers take up processor real estate
• See also RISC
How Big?
• Large enough to hold full address
• Large enough to hold full data types
• But often possible to combine two data
registers or two address registers by using
more complex addressing (e.g., page and
offset)
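As a minimal sketch of the page-and-offset idea above (the 16-bit register widths and the field split are assumptions for illustration, not from the slides), two narrower registers can be combined into one full address:

    #include <stdint.h>

    #define OFFSET_BITS 16

    /* Build a full address from a page register and an offset register,
       so neither register alone needs to be as wide as a full address. */
    uint32_t effective_address(uint16_t page, uint16_t offset)
    {
        return ((uint32_t)page << OFFSET_BITS) | offset;
    }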
Condition Code Registers – Flags
• Sets of individual bits, flags
– e.g., result of last operation was zero
• Can be read by programs
– e.g., Jump if zero – simplifies branch taking
• Cannot (usually) be set by programs
Control & Status Registers
• Program Counter (PC)
• Instruction Register (IR)
• Memory Address Register (MAR) –
connects to address bus
• Memory Buffer Register (MBR) – connects
to data bus, feeds other registers
Program Status Word
• A set of bits
• Condition Codes:
– Sign (of last result)
– Zero (last result)
– Carry (multiword arithmetic)
– Equal (result of last comparison)
– Overflow
• Interrupts enabled/disabled
• Supervisor/user mode
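A minimal C sketch of how such status bits might be packed into a single word (the bit positions and names are illustrative assumptions, not any real PSW layout):

    #include <stdint.h>

    enum {
        FLAG_SIGN     = 1u << 0,   /* sign of last result */
        FLAG_ZERO     = 1u << 1,   /* last result was zero */
        FLAG_CARRY    = 1u << 2,   /* carry out, for multiword arithmetic */
        FLAG_OVERFLOW = 1u << 3,   /* signed overflow */
        FLAG_INT_EN   = 1u << 4,   /* interrupts enabled */
        FLAG_SUPER    = 1u << 5    /* supervisor/user mode */
    };

    /* Update the condition-code bits after an ALU result. */
    uint32_t set_condition_codes(uint32_t psw, int32_t result)
    {
        psw &= ~(FLAG_SIGN | FLAG_ZERO);
        if (result < 0)  psw |= FLAG_SIGN;
        if (result == 0) psw |= FLAG_ZERO;
        return psw;
    }

    /* A "jump if zero" then only has to test one bit:
       if (psw & FLAG_ZERO) pc = target; */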
Supervisor Mode
• Intel ring zero
• Kernel mode
• Allows privileged instructions to execute
• Used by operating system
• Not available to user programs
Other Registers
• May have registers pointing to:
– Process control blocks (see OS)
– Interrupt Vectors (see OS)
• N.B. CPU design and operating system
design are closely linked
MC68000 and Intel registers
• Motorola:
– Largely general purpose registers – explicit
addressing
– Data registers also for indexing
– A7 and A7’ for user and kernel stacks
• Intel
– Largely specific purpose registers – implicit
addressing
– Segment, Pointer & Index, Data/General
purpose
– Pentium II – backward compatibility
Indirect Cycle
• The same address can refer to different operands (by changing the content of the location the address points to)
• Indirect addressing requires more memory accesses to fetch operands
• Can be thought of as an additional instruction subcycle
Instruction Cycle with Indirect
Instruction Cycle State Diagram
Data Flow (Instruction Fetch)
• PC contains address of next instruction
• Address moved to MAR
• Address placed on address bus
• Control unit requests memory read
• Result placed on data bus, copied to
MBR, then to IR
• Meanwhile PC incremented by 1
Data Flow (Fetch Diagram)
Data Flow (Data Fetch)
• IR is examined
• If indirect addressing, indirect cycle is
performed
– Rightmost n bits of MBR (address part of
instruction) transferred to MAR
– Control unit requests memory read
– Result (address of operand) moved to MBR
Data Flow (Indirect Diagram)
Data Flow (Execute)
• May take many forms, depends on
instruction being executed
• May include
– Memory read/write
– Input/Output
– Register transfers
– ALU operations
Data Flow (Interrupt)
• Current PC saved to allow resumption
after interrupt
• Contents of PC copied to MBR
• Special memory location (e.g., stack
pointer) loaded to MAR
• MBR written to memory according to
content of MAR
• PC loaded with address of interrupt
handling routine
• Next instruction (first of interrupt handler)
can be fetched
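The three data flows above (instruction fetch, data fetch with the indirect cycle, and interrupt) can be summarized as micro-operations on PC, IR, MAR, and MBR. The sketch below is only illustrative: the register widths, memory size, address mask, and the fixed PC-save and handler locations are assumptions, not details from the slides.

    #include <stdint.h>

    #define MEM_WORDS   4096
    #define ADDR_MASK   0x0FFF             /* the "rightmost n bits" of an instruction */
    #define PC_SAVE_LOC 0x0002             /* assumed location where PC is saved      */
    #define ISR_ADDR    0x0100             /* assumed interrupt-handler entry point   */

    static uint16_t memory[MEM_WORDS];
    static uint16_t pc, ir, mar, mbr;      /* control and status registers */

    void fetch_cycle(void)                 /* Data Flow (Instruction Fetch) */
    {
        mar = pc;                          /* address of next instruction to MAR  */
        mbr = memory[mar];                 /* control unit requests a memory read */
        ir  = mbr;                         /* instruction copied from MBR to IR   */
        pc  = pc + 1;                      /* meanwhile the PC is incremented     */
    }

    void indirect_cycle(void)              /* Data Flow (Data Fetch) */
    {
        mar = mbr & ADDR_MASK;             /* address part of instruction to MAR  */
        mbr = memory[mar];                 /* MBR now holds the operand's address */
    }

    void interrupt_cycle(void)             /* Data Flow (Interrupt) */
    {
        mbr = pc;                          /* contents of PC copied to MBR        */
        mar = PC_SAVE_LOC;                 /* special save location loaded to MAR */
        memory[mar] = mbr;                 /* MBR written to memory at MAR        */
        pc  = ISR_ADDR;                    /* PC loaded with handler address      */
    }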
Data Flow (Interrupt Diagram)
Prefetch
• Fetch involves accessing main memory
• Execution of ALU operations does not access main memory
• Can fetch next instruction during execution
of current instruction, cf. assembly line
• Called instruction prefetch
Improved Performance
• But not doubled:
– Fetch usually shorter than execution (cf.
reading and storing operands)
• Prefetch more than one instruction?
– Any jump or branch means that prefetched
instructions are not the required instructions
• Add more stages to improve performance
Two Stage Instruction Pipeline
Pipelining (six stages)
1. Fetch instruction
2. Decode instruction
3. Calculate operands (i.e., effective addresses)
4. Fetch operands
5. Execute instruction
6. Write result
• Overlap these operations
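A small, idealized sketch of that overlap (the stage abbreviations and the choice of 9 instructions are assumptions; every stage is taken to need exactly one cycle and no branches occur): instruction i enters stage s in cycle i + s, so n instructions finish in n + k - 1 cycles rather than n*k.

    #include <stdio.h>

    int main(void)
    {
        const char *stage[] = {"FI", "DI", "CO", "FO", "EI", "WO"};
        const int k = 6, n = 9;                 /* pipeline stages, instructions */

        for (int i = 0; i < n; i++) {
            printf("I%-2d:", i + 1);
            for (int cycle = 0; cycle < n + k - 1; cycle++) {
                int s = cycle - i;              /* stage occupied in this cycle */
                printf(" %s", (s >= 0 && s < k) ? stage[s] : "--");
            }
            printf("\n");                       /* 14 cycles total instead of 54 */
        }
        return 0;
    }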
Timing Diagram for Instruction Pipeline
Operation (assuming independence)
The Effect of a Conditional Branch/Interrupt
on Instruction Pipeline Operation
Six Stage Instruction Pipeline
Speedup Factors with Instruction Pipelining: nk/(n+k-1) (ideally), for n instructions through a k-stage pipeline
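For example, with k = 6 stages and n = 9 independent instructions, the ideal speedup is 9·6/(9+6-1) = 54/14 ≈ 3.9; as n grows large, the speedup approaches k, the number of stages.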
Dealing with Branches
1. Prefetch Branch Target
2. Loop buffer
3. Branch prediction
4. Delayed branching (see RISC)
Prefetch Branch Target
• Target of branch is prefetched in addition
to instructions following branch
• Keep target until branch is executed
• Used by IBM 360/91
Loop Buffer
• Very fast memory
• Maintained by fetch stage of pipeline
• Check buffer before fetching from memory
• Very good for small loops or jumps
• cf. cache
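A minimal sketch of that check (the buffer size and names are assumptions): the fetch stage first looks in the loop buffer and only goes to memory on a miss.

    #include <stddef.h>

    #define LOOP_BUF_SIZE 256                  /* bytes of very fast memory */

    static unsigned char loop_buf[LOOP_BUF_SIZE];
    static unsigned int  buf_base;             /* address of first byte held */
    static unsigned int  buf_len;              /* number of valid bytes      */

    /* Returns a pointer into the buffer on a hit, or NULL on a miss
       (in which case the fetch stage reads memory and refills the buffer). */
    unsigned char *loop_buffer_lookup(unsigned int addr)
    {
        if (addr >= buf_base && addr < buf_base + buf_len)
            return &loop_buf[addr - buf_base];
        return NULL;
    }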
Branch Prediction (1)
• Predict never taken
– Assume that jump will not happen
– Always (almost) fetch next instruction
– VAX will not prefetch after a branch if a page fault would result (OS vs. CPU design)
• Predict always taken
– Assume that jump will happen (at least 50%)
– Always fetch target instruction
Branch Prediction (2)
• Predict by Opcode
– Some instructions are more likely to result in a
jump than others
– Can get up to 75% success
• Taken/Not taken switch
– Based on previous history
– Good for loops
• Delayed branch – rearrange instructions
(see RISC)
Branch Prediction State Diagram (two bits)
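A minimal C sketch of the two-bit (saturating) scheme behind that state diagram; the state names and encoding here are assumptions:

    #include <stdbool.h>

    typedef enum {                      /* two bits of history per branch */
        STRONG_NOT_TAKEN, WEAK_NOT_TAKEN, WEAK_TAKEN, STRONG_TAKEN
    } pred_state_t;

    /* Predict "taken" in either of the taken states. */
    bool predict_taken(pred_state_t s)
    {
        return s >= WEAK_TAKEN;
    }

    /* After the branch resolves, move one step toward the actual outcome,
       saturating at the strong states, so one atypical outcome (e.g. the
       final iteration of a loop) does not immediately flip the prediction. */
    pred_state_t update(pred_state_t s, bool taken)
    {
        if (taken)
            return (s == STRONG_TAKEN) ? s : (pred_state_t)(s + 1);
        return (s == STRONG_NOT_TAKEN) ? s : (pred_state_t)(s - 1);
    }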
Branch Prediction Flowchart
Intel 80486 Pipelining
1. Fetch
– Put in one of two 16-byte prefetch buffers
– Fill buffer with new data as soon as old data consumed
– Average 5 instructions fetched per load (variable size)
– Independent of other stages to keep buffers full
2. Decode stage 1
– Opcode & address-mode info
– At most first 3 bytes of instruction needed for this
– Can direct D2 stage to get rest of instruction
3. Decode stage 2
– Expand opcode into control signals
– Computation of complex addressing modes
4. Execute
– ALU operations, cache access, register update
5. Writeback
– Update registers & flags
– Results sent to cache
Pentium 4 Registers
EFLAGS Register
Control Registers
Pentium Interrupt Processing
• Interrupts (hardware): (non-)maskable
• Exceptions (software): processor detected
(error) or programmed (exception)
• Interrupt vector table
– Each interrupt type assigned a number
– Index to vector table
– 256 * 32 bit interrupt vectors (address of ISR)
• 5 priority classes:
– 1. Exception generated by the previous instruction
– 2. External interrupt
– 3.-5. Faults from fetching, decoding, or executing the current instruction
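A minimal sketch of vectored dispatch (the names here are assumptions, and in the real processor the table holds gate descriptors rather than bare function pointers):

    #include <stdint.h>

    typedef void (*isr_t)(void);

    static isr_t interrupt_vector_table[256];   /* one entry per interrupt number */

    void dispatch_interrupt(uint8_t vector)
    {
        isr_t handler = interrupt_vector_table[vector];
        if (handler)                            /* vector number indexes the table */
            handler();                          /* transfer control to the ISR     */
    }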