Chapter 4 The Processor Cheng-Jung Tsai  Assistant Professor Department of Mathematics, National Changhua University of Education, Taiwan, R.O.C. Email:  [email_address] Modified from the instructor’s teaching material provided by Elsevier inc. Copyright 2009
Outline 4.1  Introduction 4.2  Logic design conventions 4.3  Building the datapath 4.4  A simple implementation scheme 4.5  An overview of pipelining 4.6  Pipelined datapath and control 4.7  Data hazards: forwarding versus stalling 4.8  Control hazards 4.9  Exceptions 4.10  Parallelism and advanced instruction-level parallelism 4.11 Real stuff: the AMD Opteron X4 pipeline 4.12 Advanced topic 4.13 Fallacies and pitfalls 4.14 Concluding remarks 4.15 Historical perspective and further reading §4.1 Introduction
4.1  Introduction CPU performance factors introduced in ch1: Instruction count — determined by ISA and compiler. CPI and cycle time — determined by CPU hardware. This chapter explains the principles and techniques used in implementing a processor. We will examine two MIPS implementations: a simplified version and a more realistic pipelined version. §4.1 Introduction
A basic MIPS implementation We will examine an implementation that includes a subset of the core MIPS instruction set. Memory reference: lw, sw. Arithmetic/logical: add, sub, and, or, slt. Control transfer: beq, j. You will see how the instruction set architecture determines many aspects of the implementation.
An overview of the implementation: Instruction Execution. Generic implementation — for each instruction: Use the program counter (PC) to supply the instruction address. Get the instruction from memory. Read one or two registers. All instructions (except jump) use the ALU after reading the registers; depending on the instruction class, the ALU calculates an arithmetic result, a memory address for load/store, or a comparison for branch. Access data memory for load/store. PC = PC+4 or the target address (for branch and jump) to get the address of the next instruction.
FIGURE 4.1 An abstract view of the implementation of the MIPS subset. Branch on equal: beq $1,$2,25 means if ($1 == $2) go to PC+4+100. This figure shows the major functional units and the major connections between them. All instructions start by using the program counter to supply the instruction address to the instruction memory. After the instruction is fetched, the register operands used by the instruction are specified by fields of that instruction. Once the register operands have been fetched, they can be operated on by the ALU to compute a memory address (for a load or store), to compute an arithmetic result (for an integer arithmetic-logical instruction), or to compare (for a branch). If the instruction is an arithmetic-logical instruction, the result from the ALU must be written to a register. If the operation is a load or store, the ALU result is used as an address to either store a value from the registers or load a value from memory into the registers. The result from the ALU or memory is written back into the register file. Branches require the use of the ALU output to determine the next instruction address, which comes either from the ALU (where the PC and branch offset are summed) or from an adder that increments the current PC by 4. The thick lines interconnecting the functional units represent buses, which consist of multiple signals.
CPU Overview & Multiplexers. Wires cannot simply be joined together. Use a multiplexer: it selects from among several inputs based on the setting of its control lines.
Multiplexors (in Appendix C). A multiplexor would more properly be called a selector: its output is the one input that is selected by a control value. With a single select line S, the output C is one input if S is true (1) and the other if it is false (0): S = 1 → C = B; S = 0 → C = A (buildable from AND and OR gates). If there are n data inputs, there need to be ⌈log2 n⌉ selector inputs.
FIGURE 4.2 The basic implementation of the MIPS subset, including the necessary  multiplexors  and  control lines .   The  top multiplexor (“Mux”)  controls what value replaces the PC ( PC + 4 or the branch destination address ); the multiplexor  is controlled by the  gate that “ANDs ” together the  Zero output of the ALU  and  a control signal  that indicates that the instruction is a branch .  The middle multiplexor , whose output returns to the register file, is used to  steer the output of the ALU (in the case of an arithmetic-logical instruction) or the output of the data memory (in the case of a load) for writing into the register file . Finally, the  bottommost multiplexor  is used to  determine whether the second ALU input is from the registers (for an arithmetic-logical instruction OR a branch) or from the offset field of the instruction (for a load or store).  The added control lines are straightforward and determine the operation performed at the ALU, whether the data memory should read or write, and whether the registers should perform a write operation. The control lines are shown in color to make them easier to see.
4.2 Logic Design Conventions This section reviews a few key ideas in digital logic that we will use in this chapter. Read Appendix C if you are interested in digital logic. Information is encoded in binary: low voltage = 0, high voltage = 1, one wire per bit; multi-bit data is encoded on multi-wire buses. Combinational elements operate on data: the output is a function of the input. State (sequential) elements store information, such as the instruction memory and data memory. §4.2 Logic Design Conventions
Recall in Computer Science: Logic operations at the bit level. Truth table: a truth table defines the values of the output for each possible combination of inputs (e.g., the exclusive-or function).
Combinational Elements. AND gate: Y = A & B. Multiplexer: Y = S ? I1 : I0. Adder: Y = A + B. Arithmetic/Logic Unit: Y = F(A, B).
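The four combinational elements above can be sketched as pure functions whose output depends only on the current inputs. This is a minimal illustration of my own, not code from the textbook; the 32-bit wrap-around masking and the string-named ALU functions are assumptions made for the sketch.

```python
def and_gate(a: int, b: int) -> int:
    return a & b                      # Y = A & B

def mux(s: int, i0: int, i1: int) -> int:
    return i1 if s else i0            # Y = S ? I1 : I0

def adder(a: int, b: int) -> int:
    return (a + b) & 0xFFFFFFFF       # Y = A + B, wrapped to 32 bits

def to_signed(x: int) -> int:
    # Interpret a 32-bit pattern as a 2's-complement value
    return x - (1 << 32) if x & 0x80000000 else x

def alu(f: str, a: int, b: int) -> int:
    # Y = F(A, B); 'f' stands in for the ALU control lines
    ops = {
        "and": a & b,
        "or":  a | b,
        "add": (a + b) & 0xFFFFFFFF,
        "sub": (a - b) & 0xFFFFFFFF,
        "slt": int(to_signed(a) < to_signed(b)),
    }
    return ops[f]
```

Because each function is stateless, evaluating it twice with the same inputs gives the same output — exactly the property that distinguishes combinational from sequential elements.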
Sequential Elements. Register: stores data in a circuit. Uses a clock signal to determine when to update the stored value. Edge-triggered: update when Clock changes from 0 to 1 (inputs D and Clk, output Q).
Clocking Methodology Combinational logic   transforms data during clock cycles Between clock edges Input from state elements, output to state element Longest delay determines clock period An edge-triggered methodology  allows a state element to be read and written in the same clock cycle  without creating a race that could lead to indeterminate data values.
4.3  Building a Datapath. Datapath elements: units used to operate on or hold data within a CPU. In the MIPS implementation these are the instruction and data memories, the register file, the ALU, multiplexers, adders, etc. We will build a MIPS datapath incrementally. Remember the instruction set for implementation in this subsection — memory-reference instructions: lw, sw; arithmetic-logical instructions: add, sub, and, or, slt; control flow instructions: beq, j. §4.3 Building a Datapath
Datapath elements for each instruction. First, we need to access instructions: two state (sequential) elements are needed to store and access instructions, and one combinational element is needed to compute the next instruction address. The instruction memory stores the instructions of a program and supplies an instruction given its address. The program counter (PC) keeps the address of the current instruction. An adder increments the PC to the address of the next instruction.
FIGURE 4.6 A portion of the datapath used for fetching instructions and incrementing the program counter. The PC is a 32-bit register, incremented by 4 for the next instruction. Combining the three elements from the previous slide forms a datapath that fetches instructions and increments the PC to obtain the address of the next sequential instruction.
Two elements needed to implement R-type ALU operations. Let's consider R-type instructions. Arithmetic-logical: add, sub, and, or, slt. E.g.: add $t1, $t2, $t3  # reads $t2 and $t3 and writes $t1 — perform an ALU operation on the contents of the registers. Instruction fields: op: operation code (opcode); rs: first source register number; rt: second source register number; rd: destination register number; shamt: shift amount (00000 for now); funct: function code (extends the opcode); it selects the specific variant of the opcode. Layout: op (6 bits), rs (5 bits), rt (5 bits), rd (5 bits), shamt (5 bits), funct (6 bits).
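The field layout above can be checked with a small decoder. This is my own illustration, not part of the textbook datapath; the register numbers follow the standard MIPS convention ($t1 = 9, $t2 = 10, $t3 = 11) and funct 0x20 is the standard encoding of add.

```python
def decode_rtype(instr: int) -> dict:
    """Split a 32-bit word into the six R-type fields:
    op[31:26] rs[25:21] rt[20:16] rd[15:11] shamt[10:6] funct[5:0]."""
    return {
        "op":    (instr >> 26) & 0x3F,
        "rs":    (instr >> 21) & 0x1F,
        "rt":    (instr >> 16) & 0x1F,
        "rd":    (instr >> 11) & 0x1F,
        "shamt": (instr >> 6)  & 0x1F,
        "funct": instr & 0x3F,
    }

# add $t1, $t2, $t3: op=0, rs=10, rt=11, rd=9, shamt=0, funct=0x20
word = (0 << 26) | (10 << 21) | (11 << 16) | (9 << 11) | (0 << 6) | 0x20
fields = decode_rtype(word)
```

Note how the widths sum to 32: 6 + 5 + 5 + 5 + 5 + 6.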
Two datapath elements are needed to implement R-type ALU operations. Register file (sequential logic): a collection of 32 registers, any of which can be read or written by specifying its number in the file; it reads two data words and writes one data word per instruction; register specifiers are 5 bits wide to select one of the 32 registers. ALU (combinational circuit): operates on the values read from the registers; the operation to be performed is controlled by the ALU operation signal, which is 4 bits wide (ALU control). The Zero detection output of the ALU will be used shortly to implement branches in this simple implementation.
Two extra units needed to implement Load/Store. Read register operands. Calculate the address using the 16-bit offset: use the ALU, but sign-extend the offset (in ch2; see the next slide for a recall). Load: read memory and update a register. Store: write a register value to memory. I-type format: op (6 bits), rs (5 bits), rt (5 bits), constant or address (16 bits). The sign-extension unit has a 16-bit input that is sign-extended into a 32-bit result appearing on the output.
Recall: Useful shortcuts for 2's-complement numbers. Sign extension shortcut: representing a number using more bits while preserving the numeric value — replicate the sign bit to fill the new bits of the larger quantity (c.f. unsigned values: extend with 0s). Examples in p.93, 8-bit to 16-bit: +2: 0000 0010 → 0000 0000 0000 0010; –2: 1111 1110 → 1111 1111 1111 1110.
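The shortcut can be expressed directly in code; a small sketch of my own (the function name is an assumption), reproducing the +2 and –2 examples above.

```python
def sign_extend(value: int, from_bits: int, to_bits: int) -> int:
    """Widen an unsigned bit pattern, replicating the sign bit."""
    if value & (1 << (from_bits - 1)):
        # Sign bit set: fill all new upper bits with 1s
        value |= ((1 << to_bits) - 1) & ~((1 << from_bits) - 1)
    return value                       # otherwise extend with 0s

assert sign_extend(0b00000010, 8, 16) == 0b0000000000000010   # +2
assert sign_extend(0b11111110, 8, 16) == 0b1111111111111110   # -2
```

The same routine extends the 16-bit immediate of lw/sw/beq to the 32 bits the ALU needs.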
Datapath for  Branch Instructions beq $t1, $t2, offset:  I-type Read  two  register operands Compare operands Use ALU, subtract and check  Zero output Calculate target address Sign-extend Shift left 2 places (word displacement) Add to PC + 4 Already calculated by instruction fetch Jump Addressing:  J-type Replacing the lower 28 bits of PC with the lower 26 bits of the instruction shifted by 2 bits op address 6 bits 26 bits
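The branch-target arithmetic described above (sign-extend the 16-bit offset, shift left 2 for word displacement, add to PC+4) can be sketched as follows; the function name and example PC value are mine.

```python
def branch_target(pc: int, imm16: int) -> int:
    # Sign-extend the 16-bit offset to a full integer
    offset = imm16 - (1 << 16) if imm16 & 0x8000 else imm16
    # Word displacement: shift left 2, then add to PC + 4
    return (pc + 4 + (offset << 2)) & 0xFFFFFFFF

# beq $1, $2, 25 from Figure 4.1: go to PC + 4 + 100
target = branch_target(0x1000, 25)
```

A negative offset (e.g. 0xFFFF = –1) branches backward, which is how loops are encoded.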
FIGURE 4.9: The datapath for a branch. It just re-routes wires: the sign-bit wire is replicated (sign extension), and the next PC is PC+4 or the branch target.
Creating a Single datapath:  Composing the Elements The simplest  datapath  does an instruction in one clock cycle Each datapath element can only do one function at a time Hence, we need separate instruction and data memories Use  multiplexers  where alternate data sources are used for different instructions
Example in p.313 : building a datapath:  R-Type/Load/Store Datapath Building a datapath for the operational portion of the  memory reference  and  R-type instructions  that uses a single register file and a single ALU  Support  two different sources for the ALU input , as well as two  different sources for the data stored into the register file. Two  multiplexors  are added
Figure 4.11 in p.315: Full Datapath. The next PC is PC+4 or the branch target; the ALU performs arithmetic or address calculation; the value written back is the load data or the ALU result. Take 5 minutes to become familiar with this datapath, and please try the "Check Yourself" in p.315. What is missing in this figure? The control unit.
4.4 A Simple Implementation Scheme: The ALU Control. The simplest implementation scheme of the MIPS subset (lw, sw, beq, add, sub, and, or, slt, j). ALU usage — Load/Store: add; R-type: AND, OR, subtract, add, or set on less than; Branch: subtract. §4.4 A Simple Implementation Scheme. How is the 4-bit ALU Control input generated? See the next slide. ALU control lines | Function: 0000 | AND; 0001 | OR; 0010 | add; 0110 | subtract; 0111 | set-on-less-than; 1100 | NOR.
Generate the 4-bit ALU Control input from the 2-bit ALUOp (derived from the opcode) + the 6-bit function code. The opcode in the first column determines the setting of the ALUOp bits. XXXXXX means "don't care": when the ALUOp code is 00 or 01, the desired ALU action does not depend on the function code field. When the ALUOp value is 10, the function code is used to set the ALU control input.
FIGURE 4.13: The truth table for the 4 ALU control bits. Some don't-care entries have been added. The ALUOp does not use the encoding 11, so the truth table can contain entries 1X and X1. When the function field is used, the first 2 bits (F5 and F4) of these instructions are always 10, so they are don't-care terms. Once the truth table has been constructed, it can be optimized and then turned into gates. This process can be completely mechanical (see Appendix C).
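The two-level decode in the truth table can be sketched in code (a sketch of my own; the function-code values are the standard MIPS encodings, and the structure mirrors Figure 4.13).

```python
# 4-bit ALU control encodings from the table above
ALU_CTRL = {"and": 0b0000, "or": 0b0001, "add": 0b0010,
            "sub": 0b0110, "slt": 0b0111, "nor": 0b1100}

def alu_control(alu_op: int, funct: int) -> int:
    if alu_op == 0b00:            # lw/sw: always add (funct is don't-care)
        return ALU_CTRL["add"]
    if alu_op == 0b01:            # beq: always subtract
        return ALU_CTRL["sub"]
    # alu_op == 0b10: R-type, decode the 6-bit function code
    funct_map = {0b100000: "add", 0b100010: "sub", 0b100100: "and",
                 0b100101: "or",  0b101010: "slt"}
    return ALU_CTRL[funct_map[funct & 0x3F]]
```

The dictionary lookups play the role of the optimized gate network the book derives mechanically.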
Designing the Main Control Unit. Some observations: the opcode (Op[5-0]) is always in bits 31-26; the two registers to be read are always rs (bits 25-21) and rt (bits 20-16) (for R-type, beq, sw); the base register for lw and sw is always rs (bits 25-21); the 16-bit offset for beq, lw, and sw is always in bits 15-0; the destination register is in one of two positions — lw: bits 20-16 (rt); R-type: bits 15-11 (rd) — so a multiplexer is needed to select the address of the register to be written.
FIGURE 4.15 The datapath with all necessary multiplexors and all control lines identified. See Figure 4.16 in the next slide for the detail of each control line. The PC does not require a write control, since it is written once at the end of every clock cycle. The ALU control is used to decide which operation is performed.
FIGURE 4.16 & 4.18: The effect of each of the seven control signals.
FIGURE 4.17: The simple datapath with the control unit. The PCSrc control line should be set if the instruction is branch-on-equal and the Zero output of the ALU is true. The ALU control decides which operation is performed; PCSrc decides if a branch is taken.
FIGURE 4.22: Implementing the truth table. The input is the 6-bit opcode and the outputs are the control lines. This truth table is then used to implement the logic gates.
Implementing Main Control in Appendix D
Operation of the Datapath for an R-type instruction: add rd, rs, rt. Although everything occurs in one clock cycle, we can think of four steps to execute the instruction: 1. Fetch the instruction from memory (mem[PC]) and increment the PC (PC+4). 2. Instruction decode and read operands (R[rs], R[rt]). 3. Execute the actual operation (R[rs] + R[rt]). 4. Write back to the target register (R[rd] <- ALU); PC <- PC+4 updates the PC. Fields: op [31:26], rs [25:21], rt [20:16], rd [15:11], shamt [10:6], funct [5:0] — 6, 5, 5, 5, 5, and 6 bits.
Fig. 4.19 in p.324 Operation of Datapath: add. Instruction fetch at the start of add: instruction <- mem[PC]; PC + 4.
Operation of Datapath: add. Instruction decode of add: fetch the two operands and decode the instruction. Fields: op [31:26], rs [25:21], rt [20:16], rd [15:11], shamt [10:6], funct [5:0] (for I-type, bits 15-0 hold the address; for R-type, shamt + f-code).
Operation of Datapath:  add ALU Operation during Add R[rs]  +  R[rt]
Operation of Datapath:  add Write Back at the End of   Add R[rd] <- ALU;  PC <- PC + 4
Figure 4.19: Operation of Datapath in textbook: add
Figure 4.20: Datapath Operation for  I-type  instruction:  lw R[rt]  <-  Memory {R[rs] + SignExt[imm16]}
Figure 4.21: Datapath Operation for I-type instruction: beq. if (R[rs] - R[rt] == 0) then Zero <- 1 else Zero <- 0; if (Zero == 1) then PC = PC + 4 + SignExt[imm16]*4 else PC = PC + 4.
Implementing Jumps. Jump uses a word address. Update the PC with the concatenation of: the top 4 bits of the old PC (PC+4), the 26-bit jump address, and 00 as the low-order 2 bits — like a branch, the low-order 2 bits of a jump target are always 00 (the 26-bit field is shifted left by 2). We need an extra control signal for the additional multiplexor; this control signal, called Jump, is asserted only when the instruction is a jump (decoded from the opcode). J-type format: opcode 000010 in bits 31:26, address (26-bit immediate) in bits 25:0.
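The concatenation above can be sketched as a one-liner (my own illustration; the example addresses are arbitrary): 4 bits of PC+4, 26 bits of the instruction, and two zero bits give 32 bits.

```python
def jump_target(pc_plus4: int, addr26: int) -> int:
    # Top 4 bits of PC+4 | 26-bit address field << 2
    # (4 + 26 + 2 = 32 bits)
    return (pc_plus4 & 0xF0000000) | ((addr26 & 0x03FFFFFF) << 2)

# A jump field of 0x00100000 with PC+4 in the low 256 MB segment
# lands at byte address 0x00400000
target = jump_target(0x00400004, 0x00100000)
```

Because only the top 4 bits of the PC survive, a jump can reach anywhere within the current 256 MB segment.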
Figure 4.24:   Datapath With  Jumps Added
Why a Single-Cycle Implementation Is Not Used Today. We have learned how to implement a single-cycle CPU. It is inefficient in both performance and hardware cost. The clock cycle is the same for all instructions in this design, so the longest delay determines the clock period. Critical path: the load instruction, which uses all five functional units in series: instruction memory → register file → ALU → data memory → register file. This violates the design principle of making the common case fast. We will improve performance by pipelining.
Performance of Single-Cycle Machine — Example in p.315 of the 3rd ed. Assume the operation times for the major functional units in this implementation are: memory units: 2 ns; ALU and adders: 2 ns; register file (read or write): 1 ns. Assuming that the multiplexers, control unit, PC accesses, sign-extension unit, and wires have no delay, which of the following implementations would be faster, and by how much? (1) An implementation in which every instruction operates in one clock cycle of a fixed length. (2) An implementation where every instruction executes in one clock cycle using a variable-length clock, which for each instruction is only as long as it needs to be. (Such an approach is not terribly practical.) Use the instruction mix: 24% loads, 12% stores, 44% R-format instructions, 18% branches, and 2% jumps.
Answer: Recall CPU time = Instruction count × CPI × Clock cycle time = Instruction count × 1 × Clock cycle time = Instruction count × Clock cycle time. The critical path of the different instruction types:
Instruction class | Functional units used
R-format | Instruction fetch, register access, ALU, register access
Load word | Instruction fetch, register access, ALU, memory access, register access
Store word | Instruction fetch, register access, ALU, memory access
Branch | Instruction fetch, register access, ALU
Jump | Instruction fetch
Performance of Single-Cycle Machine — Example in p.315 of the 3rd ed. The required length for each instruction type (memory units: 2 ns; ALU and adders: 2 ns; register file read or write: 1 ns):
Instruction class | Instr. memory | Register read | ALU op | Data memory | Register write | Total
R-format | 2 | 1 | 2 | 0 | 1 | 6 ns
Load word | 2 | 1 | 2 | 2 | 1 | 8 ns
Store word | 2 | 1 | 2 | 2 | – | 7 ns
Branch | 2 | 1 | 2 | – | – | 5 ns
Jump | 2 | – | – | – | – | 2 ns
Performance of Single-Cycle Machine-  Example  in p.315 of the 3-ed. The clock cycle : For a machine with a single fixed-length clock for all instructions:  8 ns For a machine with a variable-length clock: The average time per instruction with a variable clock = 8 x 24% + 7 x 12% + 6 x 44% + 5 x 18% + 2 x 2% =  6.3 ns The performance ratio: 8/6.3 = 1.27 The variable clock implementation is 1.27 times faster. Note:  Implementing a variable-speed clock is extremely difficult.
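The example's arithmetic can be reproduced directly. Note the exact weighted sum is 6.34 ns; the text rounds it to 6.3 ns, which gives the quoted 1.27 ratio.

```python
# Critical-path times (ns) and instruction mix from the example
times = {"lw": 8, "sw": 7, "R": 6, "beq": 5, "j": 2}
mix   = {"lw": 0.24, "sw": 0.12, "R": 0.44, "beq": 0.18, "j": 0.02}

fixed_clock  = max(times.values())                    # 8 ns, set by lw
variable_avg = sum(times[k] * mix[k] for k in times)  # 6.34 ns, ~6.3 ns
speedup      = fixed_clock / variable_avg             # ~1.27 after rounding
```

The fixed clock must accommodate the slowest instruction (lw), which is exactly why the single-cycle design penalizes every other instruction class.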
4.5 An Overview of Pipelining. Pipelined laundry: overlapping execution; parallelism improves performance. Assume that each step needs 30 mins. §4.5 An Overview of Pipelining. Four loads: speedup = 8/3.5 ≈ 2.3. Non-stop: speedup = 2n/(0.5n + 1.5) ≈ 4 = number of stages (for large n).
Pipelining Lessons. Pipelining doesn't help latency (the time for each task), but it improves the throughput of the entire workload. The pipeline rate is limited by the slowest stage. Multiple tasks work at the same time using different resources. Ideal speedup = number of pipe stages. Unbalanced stage lengths reduce the speedup. Stalls occur for dependences.
MIPS Pipeline Five stages, one step per stage IF :  Instruction fetch from memory ID :  Instruction decode & register read EX :  Execute operation or calculate   address MEM :  Access memory operand WB :  Write result back to register
Pipeline Performance: Example in p.333. Assume the time for stages is 100 ps for register read or write and 200 ps for other stages. Compare the pipelined datapath with the single-cycle datapath:
Instr | Instr fetch | Register read | ALU op | Memory access | Register write | Total
lw | 200 ps | 100 ps | 200 ps | 200 ps | 100 ps | 800 ps
sw | 200 ps | 100 ps | 200 ps | 200 ps | – | 700 ps
R-format | 200 ps | 100 ps | 200 ps | – | 100 ps | 600 ps
beq | 200 ps | 100 ps | 200 ps | – | – | 500 ps
Pipeline Performance: Fig. 4.27 in p.333. Single-cycle (Tc = 800 ps); pipelined (Tc = 200 ps). Unbalanced stage lengths reduce the speedup of the pipeline.
Pipeline Speedup. If all stages are balanced (i.e., all take the same time): Time between instructions (pipelined) = Time between instructions (nonpipelined) / Number of stages. If the stages are not balanced, the speedup is less. The speedup is due to increased throughput; latency (the time for each instruction) does not decrease.
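Using the stage times from the example on p.333, the formula shows why the unbalanced stages cap the speedup at 4 rather than the ideal 5 (a small check of my own):

```python
stage_times = {"IF": 200, "ID": 100, "EX": 200, "MEM": 200, "WB": 100}  # ps

nonpipelined = sum(stage_times.values())        # 800 ps per instruction
pipelined    = max(stage_times.values())        # 200 ps: the slowest stage
balanced     = nonpipelined / len(stage_times)  # 160 ps if stages were equal

speedup       = nonpipelined / pipelined        # 4.0 in practice
ideal_speedup = len(stage_times)                # 5 if perfectly balanced
```

The 100 ps register stages idle for half of each 200 ps cycle, which is exactly the "unbalanced stage length" penalty.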
Pipelining and ISA Design. The MIPS ISA was designed for pipelining. All instructions are 32 bits: easier to fetch and decode in one cycle (c.f. x86: 1- to 17-byte instructions). Few and regular instruction formats: can decode and read registers in one step. Load/store addressing: can calculate the address in the 3rd stage and access memory in the 4th stage. Alignment of memory operands: a memory access takes only one cycle.
Hazards: situations that prevent starting the next instruction in the next cycle. Structural hazards: a required resource is busy. Data hazards: need to wait for a previous instruction to complete its data read/write. Control hazards (branch hazards): deciding on a control action depends on a previous instruction; need to worry about branch instructions.
Structural Hazards. A conflict for use of a resource. Suppose that we had only a single memory instead of two (instruction and data). In the MIPS design, a load/store requires data access, so an instruction fetch would have to stall for that cycle — e.g., the fetch after lw $4, 400($s0) would stall, causing a pipeline "bubble". Hence, pipelined datapaths require separate instruction/data memories.
Structural Hazard: Single Memory. [Multi-cycle pipeline diagram: with one shared memory, a later instruction's fetch conflicts with the load's memory-access stage in the same cycle.]
Data Hazards. An instruction depends on completion of data access by a previous instruction: add $s0, $t0, $t1 followed by sub $t2, $s0, $t3. sub's source $s0 depends on add's destination $s0: sub needs $s0 before add has written it back, costing three bubbles without forwarding.
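The dependence above is a read-after-write (RAW) hazard, and spotting it is a simple set-membership test. A toy check of my own (the helper name and tuple encoding are assumptions, not book notation):

```python
def raw_hazard(producer_dest: str, consumer_srcs: tuple) -> bool:
    # Read-after-write: the consumer reads a register that the
    # producer has not yet written back.
    return producer_dest in consumer_srcs

# add $s0, $t0, $t1   (writes $s0)
# sub $t2, $s0, $t3   (reads  $s0)
hazard = raw_hazard("$s0", ("$s0", "$t3"))
```

Real forwarding hardware performs exactly this comparison, but on the 5-bit register numbers held in the pipeline registers.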
Solutions for data hazards. Software: compiler optimization (assembler). Hardware: forwarding (or bypassing) — use the result when it is computed; don't wait for it to be stored in a register. This requires extra connections in the datapath.
Load-Use Data Hazard. Stalls can't always be avoided by forwarding: if the value has not been computed when it is needed, it can't be forwarded backward in time! Even with forwarding, the pipeline needs a stall when an R-format instruction immediately follows a load whose result it uses.
Compiler: Code Scheduling to Avoid Stalls — example in p.338. Reorder code to avoid use of a load result in the next instruction. C code: A = B + E; C = B + F;
Original (13 cycles — even with forwarding, each load-use pair needs a stall):
lw $t1, 0($t0)
lw $t2, 4($t0)
add $t3, $t1, $t2   # stall
sw $t3, 12($t0)
lw $t4, 8($t0)
add $t5, $t1, $t4   # stall
sw $t5, 16($t0)
Reordered (11 cycles, no stalls):
lw $t1, 0($t0)
lw $t2, 4($t0)
lw $t4, 8($t0)
add $t3, $t1, $t2
sw $t3, 12($t0)
add $t5, $t1, $t4
sw $t5, 16($t0)
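The 13-vs-11 cycle counts for the two sequences above can be verified with a toy cycle counter. This is my own sketch, not the book's method: with forwarding, the only remaining stall is a load followed immediately by an instruction reading the loaded register, and a 5-stage pipeline runs n instructions in n + 4 cycles plus one cycle per stall.

```python
def cycles(prog):
    """prog: list of tuples (opcode, dest, src...); count pipeline cycles."""
    stalls = 0
    for prev, cur in zip(prog, prog[1:]):
        # Load-use stall: lw's destination read by the very next instruction
        if prev[0] == "lw" and prev[1] in cur[2:]:
            stalls += 1
    return len(prog) + 4 + stalls   # n instructions + 4 to drain + stalls

orig = [("lw", "$t1", "0($t0)"),
        ("lw", "$t2", "4($t0)"),
        ("add", "$t3", "$t1", "$t2"),
        ("sw", "$t3", "12($t0)"),
        ("lw", "$t4", "8($t0)"),
        ("add", "$t5", "$t1", "$t4"),
        ("sw", "$t5", "16($t0)")]

# Reordered: hoist the third load above the first add
sched = [orig[0], orig[1], orig[4], orig[2], orig[3], orig[5], orig[6]]
```

cycles(orig) finds the two load-use stalls; cycles(sched) finds none, matching the slide's 13 and 11 cycles.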
Control Hazards (Branch Hazards). A branch determines the flow of control: fetching the next instruction (PC+4 or the branch target) depends on the branch outcome, so the pipeline can't always fetch the correct instruction. The next instruction would stall in the IF stage to wait for the result of a branch instruction obtained in the EX stage — two stalls (see the illustration). In the MIPS pipeline, beq needs to compare registers and compute the target early in the pipeline; hardware is added to do it in the ID stage to avoid stalls. There are 3 solutions.
Solution 1: Stall on Branch. Wait until the branch outcome is determined before fetching the next instruction. Assuming we put in enough extra hardware that we can test the registers, calculate the branch target, and update the PC during the ID stage (see section 4.8 for details), there is still one stall.
Solution 2: Branch Prediction. Longer pipelines can't readily determine the branch outcome early, and the stall penalty becomes unacceptable. Instead, predict the outcome of the branch and only stall if the prediction is wrong. The simplest approach is to always predict branches as not taken, backing up if wrong: fetch the instruction after the branch with no delay. A more sophisticated version predicts that some branches will be taken and some will not (see the next slide).
More-Realistic Branch Prediction Static branch prediction Based on  typical branch behavior Example: loop and if-statement branches Predict backward branches  taken Predict forward branches  not taken Dynamic branch prediction Hardware  measures  actual branch behavior e.g., record recent history of each branch Assume future behavior will continue the trend When wrong, stall while re-fetching, and update history More popular approach will be introduced in Section 4.8
MIPS with  Predict Not Taken Prediction correct Prediction incorrect
Solution 3: Delayed Branch. Reorganize (reorder) code to make use of the stall slot; done by the assembler. Example — original code: add $4, $5, $6; beq $1, $2, 40; lw $3, 300($0). The bubble is removed by moving the add into the delay slot between beq and lw.
Pipeline Summary (The BIG Picture). Pipelining improves performance by increasing instruction throughput: it executes multiple instructions in parallel, while each instruction keeps the same latency. What makes it easy: all instructions are the same length; there are just a few instruction formats; memory operands appear only in loads and stores. What makes it hard: structural hazards (suppose we had only one memory); control hazards (need to worry about branch instructions); data hazards (an instruction depends on a previous instruction). Instruction set design affects the complexity of the pipeline implementation. Please try the Check Yourself in p.343.
Partition datapath into   5 stages : IF   ( instruction fetch ),  ID  ( instruction decode and register file read ),  EX  ( execution or address calculation ),  MEM  ( data memory access ),  WB  ( write back to register) Associate resources  with  stages Ensure that flows  do not conflict , or figure out how to resolve Assert control in appropriate stage 4.6 Pipelined Datapath and Control §4.6 Pipelined Datapath and Control
We have a structural hazard: two instructions try to write to the register file at the same time! This is why all instructions have the same number of stages: if a load (5 stages) overlapped R-type instructions that finished in 4, their register-file writes would collide in the same cycle. Oops — we have a problem!
Important observation for pipelining: each functional unit can only be used once per instruction, and each functional unit must be used at the same stage for all instructions. If instructions had different numbers of stages, load would use the register file's write port during its 5th stage while an R-type instruction used it during its 4th stage — and the two would conflict.
MIPS Pipelined Datapath. FIGURE 4.33: Each step of the instruction can be mapped onto the datapath from left to right. The only exceptions are the right-to-left flows: the update of the PC (which can lead to control hazards) and the write-back step, which sends either the ALU result or the data from memory to the left to be written into the register file (which can lead to data hazards).
FIGURE 4.35 Pipeline registers. To solve the problem in the previous slide, we need registers (pipeline registers) between stages to hold information produced in the previous cycle. The pipeline registers are labeled by the stages that they separate; for example, the first is labeled IF/ID. The registers must be wide enough to store all the data corresponding to the lines that go through them. For example, the IF/ID register must be 64 bits wide, because it must hold both the 32-bit instruction fetched from memory and the incremented 32-bit PC address. We will expand these registers over the course of this chapter, but for now the other three pipeline registers contain 128, 97, and 64 bits, respectively.
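The widths quoted in the caption can be checked field by field — every field is a 32-bit word except the 1-bit Zero flag (the field breakdown follows the figure; writing it as arithmetic is my own illustration):

```python
IF_ID  = 32 + 32                 # instruction + incremented PC        = 64
ID_EX  = 32 + 32 + 32 + 32       # PC, two register reads, sign-ext imm = 128
EX_MEM = 32 + 1 + 32 + 32        # branch target, Zero, ALU result,
                                 # store data                           = 97
MEM_WB = 32 + 32                 # memory data + ALU result             = 64
```

The odd 97-bit width of EX/MEM comes entirely from the single-bit Zero flag riding alongside three full words.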
Graphically representing pipelines. Two basic styles of pipeline figures. "Single-clock-cycle" pipeline diagram: shows pipeline usage in a single cycle, highlighting the resources used; used to show the details of what is happening within the pipeline during each clock cycle (e.g., Figures 4.36~4.38). "Multi-clock-cycle" diagram: a graph of operation over time; used to give overviews of pipelining situations (e.g., Figure 4.34); more illustrations later. We'll first look at "single-clock-cycle" diagrams for load & store.
Considering load, the 5 stages are: IF (instruction fetch): fetch the instruction from the instruction memory. ID (instruction decode): register fetch and instruction decode. EX: calculate the memory address. MEM: read the data from the data memory. WB: write the data back to the register file. The 5 functional units in the pipeline datapath are: the instruction memory for the IF stage; the register file's read ports (busA and busB) for the ID stage; the ALU for the EX stage; the data memory for the MEM stage; and the register file's write port (busW) for the WB stage.
Single-clock-cycle diagrams. Figure 4.36: IF for load, store, … — instruction read, PC+4 (highlighting on a unit means it is being read).
ID for Load, Store, … FIGURE 4.36  Although the load  needs only the top register in stage 2, the processor  doesn’t know what instruction is being decoded , so it  sign-extends the 16-bit constant  and  reads both registers  into the ID/EX pipeline register.
EX E  for Load FIGURE 4.37  The register is added to the sign-extended immediate, and the sum is placed in the EX/MEM pipeline register.
MEM for Load FIGURE 4.38  Data memory is read using the address in the EX/MEM pipeline registers, and the data is placed in the MEM/WB pipeline register.
WB for Load FIGURE 4.38  Next, data is read from the MEM/WB pipeline register and written into the register file in the middle of the datapath.  Note: there is a bug in this design that is repaired in Figure 4.41. Who will supply this address?
FIGURE  4 . 41  Corrected Datapath for Load The write register number now comes from the MEM/WB pipeline register along with the data.  The register number is passed from the ID pipe stage until it reaches the MEM/WB pipeline register, adding  5  more bits to the last three pipeline registers . I-type
FIGURE 4.39 EX for Store. Unlike the third stage of the load instruction, the second register value is loaded into the EX/MEM pipeline register to be used in the next stage. The IF and ID stages of store are the same as those of load. I-type.
FIGURE 4.40  MEM for Store In the fourth stage, the data is  written into data memory for the store .  Note that the data comes from the EX/MEM pipeline register and that nothing is changed in the MEM/WB pipeline register.
FIGURE 4.40  WB for Store Once the data is written in memory,  there is nothing left for the store instruction to do, so   nothing happens in stage 5 .
Figure 4.43 in p.357 Multi-Cycle Pipeline  Diagram Form showing  resource usage
Figure 4.44 in p.357 Multi-Cycle Pipeline Diagram. Traditional form; commonly used when you take an exam.
Figure 4.45 in p.358 Single-Cycle Pipeline Diagram. The single-cycle pipeline diagram corresponding to Figures 4.43 and 4.44: the state of the pipeline in a given cycle (c.f. Figures 4.36~4.38, which only show the detail of a single instruction). It takes more space. Please try the Check Yourself in p.358.
FIGURE 4.46  Pipelined Control (Simplified version) The pipelined datapath of Figure 4.41 with the control signals identified.  This datapath borrows the control logic for PC source, register destination number, and ALU control from Section 4.4. Note that we now need the 6-bit function code of the instruction in the EX stage as input to ALU control, so these bits must also be included in the ID/EX pipeline register.  Just as we added control to the single-cycle datapath in Section 4.4 (recalled on the next slide), we now add control to the pipelined datapath
Recall:  FIGURE 4.17: The simple datapath with the control unit.  The PCSrc control line should be set if the instruction is branch-on-equal and the Zero output of the ALU is true. Used to decide which operation is performed rs rt rd op address or f-code Used to decide if a branch is taken
Pipelined Control Control signals derived from instruction As in the single-cycle implementation For the IF and ID stages, there is nothing special for control to do in this pipelined design However, the EX, MEM and WB stages have to pass control signals along, just like the data, using the pipeline registers Main control generates all control signals during ID   FIGURE 4.50 The control lines for the final three stages.  Note that four of the nine control lines are used in the EX phase, with the remaining five control lines passed on to the EX/MEM pipeline register extended to hold the control lines; three are used during the MEM stage, and the last two are passed to MEM/WB for use in the WB stage.
Pipelined Control (cont.) Signals for EX (ExtOp, ALUSrc, ...) are used 1 cycle later Signals for MEM (MemWr, Branch) are used 2 cycles later Signals for WB (MemtoReg, RegWr) are used 3 cycles later (Figure: the Main Control in ID writes ExtOp, ALUOp, RegDst, ALUSrc, Branch, MemWr, MemtoReg, and RegWr into the ID/EX register; the Branch/MemWr and MemtoReg/RegWr bundles ride through the EX/MEM and MEM/WB registers until the stage that uses them)
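The propagation described above can be sketched in Python. This is a simplified model, not the actual hardware: signal names follow the slide, and ALUOp is treated as a single field.

```python
# Control bundles generated in ID and carried along the pipeline registers.
# Each stage consumes its own fields; the rest ride on to the next register.
EX_SIGNALS = ("ExtOp", "ALUOp", "RegDst", "ALUSrc")   # used 1 cycle after ID
MEM_SIGNALS = ("Branch", "MemWr")                     # used 2 cycles after ID
WB_SIGNALS = ("MemtoReg", "RegWr")                    # used 3 cycles after ID

def advance(id_ex, ex_mem, new_controls):
    """One clock edge: drop the fields consumed in the stage just finished,
    pass the remainder on, and latch the new instruction's controls."""
    mem_wb = {k: ex_mem[k] for k in WB_SIGNALS}               # into MEM/WB
    ex_mem = {k: id_ex[k] for k in MEM_SIGNALS + WB_SIGNALS}  # into EX/MEM
    id_ex = dict(new_controls)                                # into ID/EX
    return id_ex, ex_mem, mem_wb
```

Pushing a load's controls and advancing three cycles delivers its MemtoReg/RegWr bits to the MEM/WB register just in time for the WB stage.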
FIGURE 4.51  The pipelined datapath of Figure 4.46 with the control signals connected to the control portions of the pipe line registers.   The control values for the last three stages are  created during the   ID  stage and then placed in the  ID/EX pipeline register . The control lines for each pipe stage are used, and remaining control lines are then passed to the next pipeline stage.
Supplement: Pipelined Control (write the control signals), let's try it out. Example: Show these five instructions going through the pipeline: lw $10, 20($1) sub $11, $2, $3 and $12, $4, $5 or $13, $6, $7 add $14, $8, $9 Answer: See the following figures.
Cycle 1
Cycle 2
Cycle 3
Cycle 4
Cycle 5
Cycle 6
Cycle 7 Fig. 6.34
Cycle 8 Fig. 6.34
Cycle 9
4.7  Data Hazards: Forwarding vs. Stalling Data Hazards in ALU Instructions Consider this sequence: sub $2, $1, $3 and $12, $2, $5 or $13, $6, $2 add $14, $2, $2 sw $15, 100($2) We can resolve hazards with forwarding How do we detect when to forward? See next slide §4.7 Data Hazards: Forwarding vs. Stalling
Dependencies & Forwarding If register $2 had the value 10 before and -20 afterwards
Detecting the Need to Forward Pass register numbers along the pipeline e.g., ID/EX.RegisterRs = register number for Rs sitting in the ID/EX pipeline register This notation lets us easily find dependences in data hazards E.g., the ALU operand register numbers used in the EX stage are given by ID/EX.RegisterRs, ID/EX.RegisterRt Data hazards occur when 1a. EX/MEM.RegisterRd = ID/EX.RegisterRs 1b. EX/MEM.RegisterRd = ID/EX.RegisterRt 2a. MEM/WB.RegisterRd = ID/EX.RegisterRs 2b. MEM/WB.RegisterRd = ID/EX.RegisterRt Fwd from EX/MEM pipeline reg Fwd from MEM/WB pipeline reg
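These four comparisons can be written directly as predicates. A minimal Python sketch (function names are mine; the RegWrite and $0 refinements from the following slides are deliberately omitted here):

```python
def ex_hazard(ex_mem_rd, id_ex_rs, id_ex_rt):
    """Conditions 1a/1b: forward from the EX/MEM pipeline register."""
    return ex_mem_rd in (id_ex_rs, id_ex_rt)

def mem_hazard(mem_wb_rd, id_ex_rs, id_ex_rt):
    """Conditions 2a/2b: forward from the MEM/WB pipeline register."""
    return mem_wb_rd in (id_ex_rs, id_ex_rt)
```

For sub $2, $1, $3 followed by and $12, $2, $5, when the and is in EX we get ex_hazard(2, 2, 5) == True, which is condition 1a.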
Recall:  the example in slide 108 Detecting the Need to Forward Data hazard in sub-and:  1a .  EX/MEM.RegisterRd = ID/EX.RegisterRs  = $2 sub-or:  2b .  MEM/WB.RegisterRd = ID/EX.RegisterRt  = $2
Detecting the Need to Forward But only instructions that write to a register need forwarding! Some instructions, such as branches, do not write registers The policy mentioned above (in slide 109) is therefore not accurate in all conditions One solution is to check if the RegWrite signal will be active And forward only if Rd for that instruction is not $zero MIPS allows $0 as a destination when we want to discard a possibly nonzero result, e.g. sll $0, $1, 2, and that result must not be forwarded Add EX/MEM.RegisterRd ≠ 0 to the first hazard condition and MEM/WB.RegisterRd ≠ 0 to the second
FIGURE 4.54  Forwarding Paths hardware On the top are the ALU and pipeline registers before adding forwarding.  On the bottom, the multiplexors have been expanded to add the forwarding paths, and we show the forwarding unit. This figure is a stylized drawing, however, leaving out details from the full datapath such as the sign extension hardware. Note that the ID/EX.RegisterRt field is shown twice, once to connect to the mux and once to the forwarding unit, but it is a single signal.
FIGURE  4 . 55  The control values for the forwarding multiplexors
Forwarding Conditions Now we can combine  the conditions for detecting data hazard  and  the control signals  learned above: EX hazard if (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0)   and (EX/MEM.RegisterRd = ID/EX.RegisterRs))   ForwardA =   10 if (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0)   and (EX/MEM.RegisterRd = ID/EX.RegisterRt))   ForwardB =   10 MEM hazard if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0)   and (MEM/WB.RegisterRd = ID/EX.RegisterRs))   ForwardA =   01 if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0)   and (MEM/WB.RegisterRd = ID/EX.RegisterRt))   ForwardB =   01
More complicated Data Hazard: Double Data Hazard Consider the sequence: add $1, $1, $2 add $1, $1, $3 add $1, $1, $4 Potential data hazards between the result of the instruction in WB, the result of the instruction in MEM, and the source operand of the instruction in the ALU Both hazards (EX and MEM) occur We want to use the most recent result The result is forwarded from the MEM stage (the EX/MEM register) because it holds the more recent result
Revised Forwarding Condition for double data hazard Continuing with the problem mentioned on the previous slide, we have to revise the MEM hazard condition Only forward from MEM/WB if the EX hazard condition isn't true Revised MEM hazard condition if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0)   and not (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0)    and (EX/MEM.RegisterRd = ID/EX.RegisterRs))   and (MEM/WB.RegisterRd = ID/EX.RegisterRs))   ForwardA = 01 if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0)   and not (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0)    and (EX/MEM.RegisterRd = ID/EX.RegisterRt))   and (MEM/WB.RegisterRd = ID/EX.RegisterRt))   ForwardB = 01
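Putting the EX and MEM conditions together with their priority gives a small forwarding-unit model. This is a sketch, not the actual hardware: pipeline-register fields are modeled as dicts with invented keys.

```python
def forward(ex_mem, mem_wb, id_ex_rs, id_ex_rt):
    """Return (ForwardA, ForwardB) mux selects:
    '00' = register file, '10' = EX/MEM result, '01' = MEM/WB result."""
    fa = fb = "00"
    # EX hazard has priority: the EX/MEM result is the most recent one.
    if ex_mem["RegWrite"] and ex_mem["Rd"] != 0:
        if ex_mem["Rd"] == id_ex_rs:
            fa = "10"
        if ex_mem["Rd"] == id_ex_rt:
            fb = "10"
    # MEM hazard: only if the EX hazard did not already claim the operand.
    if mem_wb["RegWrite"] and mem_wb["Rd"] != 0:
        if fa == "00" and mem_wb["Rd"] == id_ex_rs:
            fa = "01"
        if fb == "00" and mem_wb["Rd"] == id_ex_rt:
            fb = "01"
    return fa, fb
```

For the double-hazard sequence add $1,$1,$2; add $1,$1,$3; add $1,$1,$4: when the third add is in EX, both older adds target $1, and the unit picks the EX/MEM copy, returning ("10", "00").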
FIGURE  4 . 56  The datapath modified to resolve   hazards via forwarding
Load-Use Data Hazard Need to stall for one cycle
Load-Use Hazard Detection Check when using instruction is decoded in ID stage ALU operand register numbers in ID stage are given by IF/ID.RegisterRs, IF/ID.RegisterRt Load-use hazard when ID/EX.MemRead and   ((ID/EX.RegisterRt = IF/ID.RegisterRs) or   (ID/EX.RegisterRt = IF/ID.RegisterRt)) If detected, stall and insert bubble
How to Stall the Pipeline Force control values in ID/EX register to 0 EX, MEM and WB do  nop  (no-operation) Prevent update of PC and IF/ID register Using instruction is decoded again Following instruction is fetched again 1-cycle stall allows MEM to read data for  lw Can subsequently forward to EX stage
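The detection condition and the stall actions above can be sketched together (a simplified model; field names mirror the slide's notation):

```python
def load_use_stall(id_ex_memread, id_ex_rt, if_id_rs, if_id_rt):
    """Stall when the instruction in EX is a load whose destination (Rt)
    matches a source register of the instruction being decoded in ID."""
    return bool(id_ex_memread) and id_ex_rt in (if_id_rs, if_id_rt)

def stall_actions():
    """What the hazard unit does on a stall: a bubble enters EX while
    the PC and IF/ID register are held, so ID and IF repeat next cycle."""
    return {"zero_id_ex_controls": True,   # EX, MEM, WB become a nop
            "write_pc": False,             # refetch the same instruction
            "write_if_id": False}          # re-decode the same instruction
```

For lw $2, 20($1) followed by and $4, $2, $5, load_use_stall(True, 2, 2, 5) is True; after the one-cycle stall the same pair is handled by forwarding.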
Stall/Bubble in the Pipeline Stall inserted here
Stall/Bubble in the Pipeline Or, more accurately…
Datapath with Hazard Detection
Stalls and Performance Stalls reduce performance But are required to get correct results Compiler can arrange code to avoid hazards and stalls Requires knowledge of the pipeline structure The BIG Picture
Branch Hazards §4.8 Control Hazards If the branch outcome is determined in MEM: flush these instructions (set control values to 0)
Reducing Branch Delay Move hardware to determine outcome to ID stage Target address adder Register comparator Example: branch taken 36:  sub  $10, $4, $8 40:  beq  $1,  $3, 7 44:  and  $12, $2, $5 48:  or  $13, $2, $6 52:  add  $14, $4, $2 56:  slt  $15, $6, $7   ... 72:  lw  $4, 50($7)
Example: Branch Taken
Example: Branch Taken
Data Hazards for Branches If a comparison register is a destination of the 2nd or 3rd preceding ALU instruction … add $4, $5, $6 add $1, $2, $3 beq $1, $4, target Can resolve using forwarding IF ID EX MEM WB IF ID EX MEM WB IF ID EX MEM WB IF ID EX MEM WB
Data Hazards for Branches If a comparison register is a destination of the preceding ALU instruction or the 2nd preceding load instruction Need 1 stall cycle beq stalled IF ID ID EX MEM WB add $4, $5, $6 lw $1, addr beq $1, $4, target IF ID EX MEM WB IF ID EX MEM WB
Data Hazards for Branches If a comparison register is a destination of immediately preceding load instruction Need 2 stall cycles beq  stalled IF ID ID ID EX MEM WB beq  stalled lw  $1 , addr beq  $1 ,  $0 , target IF ID EX MEM WB
Dynamic Branch Prediction In deeper and superscalar pipelines, branch penalty is more significant Use dynamic prediction Branch prediction buffer (aka branch history table) Indexed by recent branch instruction addresses Stores outcome (taken/not taken) To execute a branch Check table, expect the same outcome Start fetching from fall-through or target If wrong, flush pipeline and flip prediction
1-Bit Predictor: Shortcoming Inner loop branches mispredicted twice! outer: …   … inner: … … beq …, …, inner   …   beq …, …, outer Mispredict as taken on last iteration of inner loop Then mispredict as not taken on first iteration of inner loop next time around
2-Bit Predictor Only change prediction on two successive mispredictions
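Both predictors can be simulated on the nested-loop pattern from the previous slide. This is a toy model for a single branch (True = taken); a real branch history table would be indexed by the branch address.

```python
def mispredicts(outcomes, two_bit=False):
    """Count mispredictions. 1-bit: predict the last outcome.
    2-bit: saturating counter 0..3, predict taken when state >= 2."""
    state = 3 if two_bit else 1          # start out predicting "taken"
    wrong = 0
    for taken in outcomes:
        predict_taken = (state >= 2) if two_bit else (state == 1)
        if predict_taken != taken:
            wrong += 1
        if two_bit:
            state = min(3, state + 1) if taken else max(0, state - 1)
        else:
            state = 1 if taken else 0
    return wrong

# Inner-loop branch: taken 4 times, then falls out; outer loop runs 3 times.
pattern = ([True] * 4 + [False]) * 3
```

On this pattern the 1-bit predictor misses twice per inner-loop execution after the first (5 misses total), while the 2-bit predictor misses only on each loop exit (3 misses), matching the slide's claim.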
Calculating the Branch Target Even with predictor, still need to calculate the target address 1-cycle penalty for a taken branch Branch target buffer Cache of target addresses Indexed by PC when instruction fetched If hit and instruction is branch predicted taken, can fetch target immediately
Exceptions and Interrupts “ Unexpected” events requiring change in flow of control Different ISAs use the terms differently Exception Arises within the CPU e.g., undefined opcode, overflow, syscall, … Interrupt From an external I/O controller Dealing with them without sacrificing performance is hard §4.9 Exceptions
Handling Exceptions In MIPS, exceptions are managed by a System Control Coprocessor (CP0) Save PC of offending (or interrupted) instruction In MIPS: Exception Program Counter (EPC) Save indication of the problem In MIPS: Cause register We'll assume 1 bit: 0 for undefined opcode, 1 for overflow Jump to handler at 8000 0180
An Alternate Mechanism Vectored Interrupts Handler address determined by the cause Example: Undefined opcode: C000 0000 Overflow: C000 0020 … : C000 0040 Instructions either Deal with the interrupt, or Jump to real handler
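The dispatch difference can be sketched as follows (handler addresses are the ones on the slides; the table keys are illustrative):

```python
# Vectored: the cause selects the handler address directly.
VECTORS = {"undefined_opcode": 0xC0000000, "overflow": 0xC0000020}
SINGLE_ENTRY = 0x80000180   # non-vectored MIPS: one fixed handler entry

def handler_address(cause, vectored):
    """Non-vectored: jump to one entry point and decode the Cause register
    in software. Vectored: dispatch on the cause itself."""
    return VECTORS[cause] if vectored else SINGLE_ENTRY
```

Either way, the instructions at the chosen address then deal with the event or jump to the real handler.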
Handler Actions Read cause, and transfer to relevant handler Determine action required If restartable Take corrective action use EPC to return to program Otherwise Terminate program Report error using EPC, cause, …
Exceptions in a Pipeline Another form of control hazard Consider overflow on add in EX stage add $1, $2, $1 Prevent $1 from being clobbered Complete previous instructions Flush  add  and subsequent instructions Set Cause and EPC register values Transfer control to handler Similar to mispredicted branch Use much of the same hardware
Pipeline with Exceptions
Exception Properties Restartable exceptions Pipeline can flush the instruction Handler executes, then returns to the instruction Refetched and executed from scratch PC saved in EPC register Identifies causing instruction Actually PC + 4 is saved Handler must adjust
Exception Example Exception on  add  in 40 sub  $11, $2, $4 44 and  $12, $2, $5 48 or  $13, $2, $6 4C add  $1,  $2, $1 50 slt  $15, $6, $7 54 lw  $16, 50($7) … Handler 80000180 sw  $25, 1000($0) 80000184 sw  $26, 1004($0) …
Exception Example
Exception Example
Multiple Exceptions Pipelining overlaps multiple instructions Could have multiple exceptions at once Simple approach: deal with exception from earliest instruction Flush subsequent instructions “ Precise” exceptions In complex pipelines Multiple instructions issued per cycle Out-of-order completion Maintaining precise exceptions is difficult!
Imprecise Exceptions Just stop pipeline and save state Including exception cause(s) Let the handler work out Which instruction(s) had exceptions Which to complete or flush May require “manual” completion Simplifies hardware, but more complex handler software Not feasible for complex multiple-issue out-of-order pipelines
Instruction-Level Parallelism (ILP) Pipelining: executing multiple instructions in parallel To increase ILP Deeper pipeline Less work per stage    shorter clock cycle Multiple issue Replicate pipeline stages    multiple pipelines Start multiple instructions per clock cycle CPI < 1, so use Instructions Per Cycle (IPC) E.g., 4GHz 4-way multiple-issue 16 BIPS, peak CPI = 0.25, peak IPC = 4 But dependencies reduce this in practice §4.10  Parallelism and Advanced Instruction Level Parallelism
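The slide's arithmetic, spelled out (numbers taken from the example; peak figures ignore the dependencies that reduce throughput in practice):

```python
clock_hz = 4e9       # 4 GHz clock
issue_width = 4      # 4-way multiple issue

peak_ipc = issue_width                # instructions per cycle
peak_cpi = 1 / issue_width            # 0.25 cycles per instruction
peak_ips = clock_hz * issue_width     # 16 billion instructions per second
```

This is why multiple-issue machines are quoted in IPC (or BIPS) rather than CPI: the interesting values are at or above one instruction per cycle.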
Multiple Issue Static multiple issue Compiler groups instructions to be issued together Packages them into “issue slots” Compiler detects and avoids hazards Dynamic multiple issue CPU examines instruction stream and chooses instructions to issue each cycle Compiler can help by reordering instructions CPU resolves hazards using advanced techniques at runtime
Speculation “ Guess” what to do with an instruction Start operation as soon as possible Check whether guess was right If so, complete the operation If not, roll-back and do the right thing Common to static and dynamic multiple issue Examples Speculate on branch outcome Roll back if path taken is different Speculate on load Roll back if location is updated
Compiler/Hardware Speculation Compiler can reorder instructions e.g., move load before branch Can include “fix-up” instructions to recover from incorrect guess Hardware can look ahead for instructions to execute Buffer results until it determines they are actually needed Flush buffers on incorrect speculation
Speculation and Exceptions What if exception occurs on a speculatively executed instruction? e.g., speculative load before null-pointer check Static speculation Can add ISA support for deferring exceptions Dynamic speculation Can buffer exceptions until instruction completion (which may not occur)
Static Multiple Issue Compiler groups instructions into “issue packets” Group of instructions that can be issued on a single cycle Determined by pipeline resources required Think of an issue packet as a very long instruction Specifies multiple concurrent operations    Very Long Instruction Word (VLIW)
Scheduling Static Multiple Issue Compiler must remove some/all hazards Reorder instructions into issue packets No dependencies within a packet Possibly some dependencies between packets Varies between ISAs; compiler must know! Pad with nop if necessary
MIPS with Static Dual Issue Two-issue packets One ALU/branch instruction One load/store instruction 64-bit aligned ALU/branch, then load/store Pad an unused instruction with nop n + 20 n + 16 n + 12 n + 8 n + 4 n Address WB MEM EX ID IF Load/store WB MEM EX ID IF ALU/branch WB MEM EX ID IF Load/store WB MEM EX ID IF ALU/branch WB MEM EX ID IF Load/store WB MEM EX ID IF ALU/branch Pipeline Stages Instruction type
MIPS with Static Dual Issue
Hazards in the Dual-Issue MIPS More instructions executing in parallel EX data hazard Forwarding avoided stalls with single-issue Now can't use ALU result in load/store in same packet add $t0, $s0, $s1 lw $s2, 0($t0) Split into two packets, effectively a stall Load-use hazard Still one cycle use latency, but now two instructions More aggressive scheduling required
Scheduling Example Schedule this for dual-issue MIPS Loop: lw $t0, 0($s1)  # $t0=array element   addu $t0, $t0, $s2  # add scalar in $s2   sw $t0, 0($s1)  # store result   addi $s1, $s1,–4  # decrement pointer   bne $s1, $zero, Loop # branch $s1!=0
Schedule (ALU/branch slot | Load/store slot | cycle):
Loop: nop | lw $t0, 0($s1) | 1
addi $s1, $s1,–4 | nop | 2
addu $t0, $t0, $s2 | nop | 3
bne $s1, $zero, Loop | sw $t0, 4($s1) | 4
IPC = 5/4 = 1.25 (c.f. peak IPC = 2)
Loop Unrolling Replicate loop body to expose more parallelism Reduces loop-control overhead Use different registers per replication Called “register renaming” Avoid loop-carried “anti-dependencies” Store followed by a load of the same register Aka “name dependence”   Reuse of a register name
Loop Unrolling Example IPC = 14/8 = 1.75 Closer to 2, but at cost of registers and code size
Schedule (ALU/branch slot | Load/store slot | cycle):
Loop: addi $s1, $s1,–16 | lw $t0, 0($s1) | 1
nop | lw $t1, 12($s1) | 2
addu $t0, $t0, $s2 | lw $t2, 8($s1) | 3
addu $t1, $t1, $s2 | lw $t3, 4($s1) | 4
addu $t2, $t2, $s2 | sw $t0, 16($s1) | 5
addu $t3, $t3, $s2 | sw $t1, 12($s1) | 6
nop | sw $t2, 8($s1) | 7
bne $s1, $zero, Loop | sw $t3, 4($s1) | 8
Dynamic Multiple Issue “ Superscalar” processors CPU decides whether to issue 0, 1, 2, … each cycle Avoiding structural and data hazards Avoids the need for compiler scheduling Though it may still help Code semantics ensured by the CPU
Dynamic Pipeline Scheduling Allow the CPU to execute instructions out of order to avoid stalls But commit result to registers in order Example lw  $t0 , 20($s2) addu  $t1,  $t0 , $t2 sub  $s4, $s4, $t3 slti  $t5, $s4, 20 Can start  sub  while  addu  is waiting for lw
Dynamically Scheduled CPU Results are also sent to any waiting reservation stations Reorder buffer for register writes Can supply operands for issued instructions Preserves dependencies Holds pending operands
Register Renaming Reservation stations and reorder buffer effectively provide register renaming On instruction issue to reservation station If operand is available in register file or reorder buffer Copied to reservation station No longer required in the register; can be overwritten If operand is not yet available It will be provided to the reservation station by a function unit Register update may not be required
Speculation Predict branch and continue issuing Don’t commit until branch outcome determined Load speculation Avoid load and cache miss delay Predict the effective address Predict loaded value Load before completing outstanding stores Bypass stored values to load unit Don’t commit load until speculation cleared
Why Do Dynamic Scheduling? Why not just let the compiler schedule code? Not all stalls are predictable e.g., cache misses Can't always schedule around branches Branch outcome is dynamically determined Different implementations of an ISA have different latencies and hazards
Does Multiple Issue Work? Yes, but not as much as we’d like Programs have real dependencies that limit ILP Some dependencies are hard to eliminate e.g., pointer aliasing Some parallelism is hard to expose Limited window size during instruction issue Memory delays and limited bandwidth Hard to keep pipelines full Speculation can help if done well The BIG Picture
Power Efficiency Complexity of dynamic scheduling and speculation requires power Multiple simpler cores may be better
Microprocessor | Year | Clock Rate | Pipeline Stages | Issue Width | Out-of-order/Speculation | Cores | Power
i486 | 1989 | 25MHz | 5 | 1 | No | 1 | 5W
Pentium | 1993 | 66MHz | 5 | 2 | No | 1 | 10W
Pentium Pro | 1997 | 200MHz | 10 | 3 | Yes | 1 | 29W
P4 Willamette | 2001 | 2000MHz | 22 | 3 | Yes | 1 | 75W
P4 Prescott | 2004 | 3600MHz | 31 | 3 | Yes | 1 | 103W
Core | 2006 | 2930MHz | 14 | 4 | Yes | 2 | 75W
UltraSparc III | 2003 | 1950MHz | 14 | 4 | No | 1 | 90W
UltraSparc T1 | 2005 | 1200MHz | 6 | 1 | No | 8 | 70W
The Opteron X4 Microarchitecture §4.11 Real Stuff: The AMD Opteron X4 (Barcelona) Pipeline 72 physical registers
The Opteron X4 Pipeline Flow For integer operations FP is 5 stages longer Up to 106 RISC-ops in progress Bottlenecks Complex instructions with long dependencies Branch mispredictions Memory access delays
Fallacies Pipelining is easy (!) The basic idea is easy The devil is in the details e.g., detecting data hazards Pipelining is independent of technology So why haven’t we always done pipelining? More transistors make more advanced techniques feasible Pipeline-related ISA design needs to take account of technology trends e.g., predicated instructions §4.13 Fallacies and Pitfalls
Pitfalls Poor ISA design can make pipelining harder e.g., complex instruction sets (VAX, IA-32) Significant overhead to make pipelining work IA-32 micro-op approach e.g., complex addressing modes Register update side effects, memory indirection e.g., delayed branches Advanced pipelines have long delay slots
Concluding Remarks ISA influences design of datapath and control Datapath and control influence design of ISA Pipelining improves instruction throughput using parallelism More instructions completed per second Latency for each instruction not reduced Hazards: structural, data, control Multiple issue and dynamic scheduling (ILP) Dependencies limit achievable parallelism Complexity leads to the power wall §4.14 Concluding Remarks

Chapter 4 the processor

  • 6. FIGURE 4.1 An abstract view of the implementation of the MIPS subset branch on equal beq $1,$2,25 if ($1 == $2) go to PC+4+100 This figure shows the major functional units and the major connections between them. All instructions start by using the program counter to supply the instruction address to the instruction memory. After the instruction is fetched, the register operands used by an instruction are specified by fields of that instruction. Once the register operands have been fetched, they can be operated on by the ALU to compute a memory address (for a load or store), to compute an arithmetic result (for an integer arithmetic-logical instruction), or to compare (for a branch). If the instruction is an arithmetic-logical instruction, the result from the ALU must be written to a register. If the operation is a load or store, the ALU result is used as an address to either store a value from the registers or load a value from memory into the registers. The result from the ALU or memory is written back into the register file. Branches require the use of the ALU output to determine the next instruction address, which comes either from the ALU (where the PC and branch offset are summed) or from an adder that increments the current PC by 4. The thick lines interconnecting the functional units represent buses, which consist of multiple signals.
  • 7. CPU Overview & Multiplexers Can't just join wires together Use a multiplexer: it selects from among several inputs based on the setting of its control lines
  • 8. Multiplexors in Appendix C A multiplexor would more properly be called a selector: its output is one of the inputs, chosen by a control The control selects one of the inputs if it is true (1) and the other if it is false (0) If there are n data inputs, there will need to be ⌈log2 n⌉ selector inputs S = 1 → C = B; S = 0 → C = A
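A gate-level sketch of the 2-to-1 case described above, with Python booleans standing in for wires:

```python
from math import ceil, log2

def mux2(a, b, s):
    """2-to-1 multiplexer built from AND/OR/NOT gates:
    C = (A AND NOT S) OR (B AND S)."""
    return (a and not s) or (b and s)

def selector_bits(n):
    """An n-input mux needs ceil(log2(n)) selector inputs."""
    return ceil(log2(n))
```

So mux2(A, B, 1) passes B and mux2(A, B, 0) passes A, and an 8-input mux needs 3 selector bits.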
  • 9. FIGURE 4.2 The basic implementation of the MIPS subset, including the necessary multiplexors and control lines . The top multiplexor (“Mux”) controls what value replaces the PC ( PC + 4 or the branch destination address ); the multiplexor is controlled by the gate that “ANDs ” together the Zero output of the ALU and a control signal that indicates that the instruction is a branch . The middle multiplexor , whose output returns to the register file, is used to steer the output of the ALU (in the case of an arithmetic-logical instruction) or the output of the data memory (in the case of a load) for writing into the register file . Finally, the bottommost multiplexor is used to determine whether the second ALU input is from the registers (for an arithmetic-logical instruction OR a branch) or from the offset field of the instruction (for a load or store). The added control lines are straightforward and determine the operation performed at the ALU, whether the data memory should read or write, and whether the registers should perform a write operation. The control lines are shown in color to make them easier to see.
  • 10. 4.2 Logic Design Conventions. This section reviews a few key ideas in digital logic that we will use in this chapter; read Appendix C if you are interested in digital logic. Information is encoded in binary: low voltage = 0, high voltage = 1; one wire per bit; multi-bit data is encoded on multi-wire buses. Combinational elements operate on data: the output is a function of the input. State (sequential) elements store information, such as the instruction memory and data memory. §4.2 Logic Design Conventions
  • 11. Recall in Computer Science: logic operations at the bit level. Truth table: a truth table defines the value of the output for each possible combination of inputs. exclusive-or
  • 12. Combinational Elements AND-gate Y = A & B Multiplexer Y = S ? I1 : I0 Adder Y = A + B Arithmetic/Logic Unit Y = F(A, B) A B Y I0 I1 Y M u x S A B Y + A B Y ALU F
  • 13. Sequential Elements Register : stores data in a circuit Uses a clock signal to determine when to update the stored value Edge-triggered : update when Clock changes from 0 to 1 D Clk Q Clk D Q
  • 14. Clocking Methodology Combinational logic transforms data during clock cycles Between clock edges Input from state elements, output to state element Longest delay determines clock period An edge-triggered methodology allows a state element to be read and written in the same clock cycle without creating a race that could lead to indeterminate data values.
  • 15. 4.3 Building a Datapath. Datapath elements: units used to operate on or hold data within a CPU. In the MIPS implementation, these are the instruction and data memories , the register file , the ALU , multiplexers , adders … We will build a MIPS datapath incrementally. Remember the instruction set for implementation in this subsection: Memory-reference instructions: lw, sw Arithmetic-logical instructions: add, sub, and, or, slt Control flow instructions: beq, j §4.3 Building a Datapath
  • 16. Datapath element for each instruction First, we need to access instructions: two state (sequential) elements are needed to store and access instructions, and one combinational element is needed to compute the next instruction address. The instruction memory is used to store the instructions of a program and supply an instruction given an address. A program counter (PC) keeps the address of the current instruction. An adder increments the PC to the address of the next instruction
  • 17. FIGURE 4.6 A portion of the datapath used for fetching instructions and incrementing the program counter. 32-bit register; increment by 4 for the next instruction. Combine the three elements from the previous slide to form a datapath that fetches instructions and increments the PC to obtain the address of the next sequential instruction
  • 18. Two elements needed to implement R-type ALU operations Let’s consider R-type instructions Arithmetic-logical: add, sub, and, or, slt e.g.: add $t1, $t2, $t3 # reads $t2 and $t3 and writes $t1 Perform an ALU operation on the contents of the registers Instruction fields op: operation code (opcode) rs: first source register number rt: second source register number rd: destination register number shamt: shift amount (00000 for now) funct: function code (extends the opcode); it selects the specific variant of the opcode. op rs rt rd shamt funct 6 bits 5 bits 5 bits 5 bits 5 bits 6 bits
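Extracting the six fields from the 32-bit word can be sketched in Python (a hypothetical helper for illustration, using the field widths above, not part of the hardware):

```python
def decode_r_type(word):
    """Split a 32-bit MIPS R-type instruction into its six fields.
    Widths: op=6, rs=5, rt=5, rd=5, shamt=5, funct=6 bits."""
    return {
        "op":    (word >> 26) & 0x3F,
        "rs":    (word >> 21) & 0x1F,
        "rt":    (word >> 16) & 0x1F,
        "rd":    (word >> 11) & 0x1F,
        "shamt": (word >> 6)  & 0x1F,
        "funct": word & 0x3F,
    }

# add $t1, $t2, $t3: op=0, rs=10 ($t2), rt=11 ($t3), rd=9 ($t1), funct=0x20
word = (10 << 21) | (11 << 16) | (9 << 11) | 0x20
f = decode_r_type(word)
assert (f["rs"], f["rt"], f["rd"], f["funct"]) == (10, 11, 9, 0x20)
```

The same shift-and-mask pattern is what the datapath wiring does for free by routing bit ranges of the instruction to the register file and ALU control.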
  • 19. Two elements needed to implement R-type ALU operations Two datapath elements are needed to implement R-type ALU operations: Register file (sequential logic): a collection of 32 registers in which any register can be read or written by specifying the number of the register in the file; two data words are read from the register file and one data word is written into it per instruction, so each register-number input is 5 bits wide to specify one of the 32 registers. ALU (combinational circuit): operates on the values read from the registers. Control signals: the operation to be performed by the ALU is controlled with the ALU operation signal, which is 4 bits wide. We will use the Zero detection output of the ALU shortly to implement branches in this simple implementation .
  • 20. Two extra units needed to implement Load/Store Read register operands Calculate the address using the 16-bit offset Use the ALU, but sign-extend the offset ( introduced in ch2, see the next slide for a recall ) Load: read memory and update a register Store: write a register value to memory I-type format. The sign-extension unit has a 16-bit input that is sign-extended into a 32-bit result appearing on the output op rs rt constant or address 6 bits 5 bits 5 bits 16 bits
  • 21. Recall: 3 useful shortcuts for 2’s complement numbers Sign extension shortcut Representing a number using more bits Preserve the numeric value Replicate the sign bit to fill the new bits of the larger quantity c.f. unsigned values: extend with 0s Examples in p.93 (8-bit to 16-bit): +2: 0000 0010  0000 0000 0000 0010 – 2: 1111 1110  1111 1111 1111 1110
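The sign-extension shortcut can be checked with a small Python sketch (bit widths passed explicitly; an illustration, not the hardware unit):

```python
def sign_extend(value, from_bits, to_bits):
    """Replicate the sign bit to fill the new bits of the wider value."""
    if (value >> (from_bits - 1)) & 1:   # sign bit set: negative number
        value |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return value

# The 8-bit to 16-bit examples from p.93:
assert sign_extend(0b00000010, 8, 16) == 0b0000000000000010   # +2
assert sign_extend(0b11111110, 8, 16) == 0b1111111111111110   # -2
```

Unsigned extension, by contrast, would simply leave the upper bits 0.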
  • 22. Datapath for Branch Instructions beq $t1, $t2, offset: I-type Read two register operands Compare operands Use ALU, subtract and check Zero output Calculate target address Sign-extend Shift left 2 places (word displacement) Add to PC + 4 Already calculated by instruction fetch Jump Addressing: J-type Replacing the lower 28 bits of PC with the lower 26 bits of the instruction shifted by 2 bits op address 6 bits 26 bits
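The branch-target arithmetic above can be sketched directly (Python; the PC value used in the example is hypothetical):

```python
def branch_target(pc, imm16):
    """beq target: sign-extend the 16-bit offset, shift left 2
    (word displacement), then add to PC + 4."""
    if imm16 & 0x8000:          # sign-extend the 16-bit field
        imm16 -= 1 << 16
    return (pc + 4 + (imm16 << 2)) & 0xFFFFFFFF

# beq $1, $2, 25: target = PC + 4 + 100, as in Figure 4.1
assert branch_target(0x00400000, 25) == 0x00400004 + 100
# A negative offset of -1 branches back to the beq itself
assert branch_target(0x00400000, 0xFFFF) == 0x00400000
```

The shift-left-2 is why the offset counts words, not bytes: the two low-order bits of any instruction address are always 00.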
  • 23. FIGURE 4.9: The datapath for a branch Just re-routes wires Sign-bit wire replicated PC+4 or branch target
  • 24. Creating a Single datapath: Composing the Elements The simplest datapath does an instruction in one clock cycle Each datapath element can only do one function at a time Hence, we need separate instruction and data memories Use multiplexers where alternate data sources are used for different instructions
  • 25. Example in p.313 : building a datapath: R-Type/Load/Store Datapath Building a datapath for the operational portion of the memory reference and R-type instructions that uses a single register file and a single ALU Support two different sources for the ALU input , as well as two different sources for the data stored into the register file. Two multiplexors are added
  • 26. Figure 4.11 in p.315 Full Datapath PC+4 or branch target Arithmetic or address calculation load or ALU result Give you 5 minutes to be familiar with this datapath & Please try the “Check Yourself” in P.315 What is missing in this figure? The Control Unit
  • 27. 4.4 A Simple Implementation Scheme: The ALU Control A simplest implementation scheme of the MIPS subset (lw, sw, beq, add, sub, and, or, slt, j). ALU used for Load/Store: add R-type: AND, OR, subtract, add, or set on less than Branch: subtract §4.4 A Simple Implementation Scheme How do we generate the 4-bit ALU control input below? See the next slide......
ALU control lines – Function:
0000 AND
0001 OR
0010 add
0110 subtract
0111 set-on-less-than
1100 NOR
  • 28. Generate the 4-bit ALU control input 2-bit ALUOp derived from the opcode + 6-bit function code The opcode in the first column determines the setting of the ALUOp bits. XXXXXX means “don’t care”: when the ALUOp code is 00 or 01 , the desired ALU action does not depend on the function code field; when the ALUOp value is 10 , the function code is used to set the ALU control input.
  • 29. FIGURE 4.13: The truth table for the 4 ALU control bits Some don’t-care entries have been added. The ALUOp does not use the encoding 11, so the truth table can contain entries 1X and X1. When the function field is used, the first 2 bits (F5 and F4) of these instructions are always 10, so they are don’t-care terms Once the truth table has been constructed, it can be optimized and then turned into gates. This process can be completely mechanical. (in Appendix C)
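The two-level decode (ALUOp first, then the function field) can be written directly from the truth table — a Python sketch of the Figure 4.12/4.13 logic, not the optimized gate network:

```python
def alu_control(alu_op, funct):
    """4-bit ALU control from the 2-bit ALUOp and 6-bit function field."""
    if alu_op == 0b00:              # lw/sw: always add (funct is don't-care)
        return 0b0010
    if alu_op == 0b01:              # beq: always subtract
        return 0b0110
    # ALUOp == 10: R-type, the function field selects the operation
    return {0b100000: 0b0010,       # add
            0b100010: 0b0110,       # subtract
            0b100100: 0b0000,       # AND
            0b100101: 0b0001,       # OR
            0b101010: 0b0111}[funct]  # set on less than

assert alu_control(0b00, 0) == 0b0010         # lw/sw -> add
assert alu_control(0b01, 0) == 0b0110         # beq -> subtract
assert alu_control(0b10, 0b101010) == 0b0111  # slt
```

Note how the don't-care entries of the truth table show up as the ignored `funct` argument when ALUOp is 00 or 01.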
  • 30. Designing the Main Control Unit Some observations: the opcode (Op[5-0]) is always in bits 31-26 and is always read; the two registers to be read are always in rs (bits 25-21) and rt (bits 20-16) (for R-type, beq, sw); the base register for lw and sw is always in rs (25-21); the 16-bit offset for beq, lw, sw is always in bits 15-0; the destination register is in one of two positions: lw: bits 20-16 (rt) R-type: bits 15-11 (rd) => we need a multiplexer to select the address of the written register. The register fields are read for every instruction, except that for load the rt field names the register to be written; the register file is written for R-type and load.
  • 31. FIGURE 4.15 The datapath with all necessary multiplexors and all control lines identified See F.4.16 in the next slide for the detail of each control line The PC does not require a write control, since it is written once at the end of every clock cycle Used to decide which operation is performed rs rt rd
  • 32. FIGURE 4.16 & 4.18: The effect of each of the seven control signals 0 1
  • 33. FIGURE 4.17: The simple datapath with the control unit. The PCsrc control line should be set if the instruction is branch on equal and the Zero output of ALU is true. Used to decide which operation is performed Used to decide if a branch is taken rs rt rd op address or f-code
  • 34. FIGURE 4.22: implementing the truth table The input is the 6-bit opcode and the outputs are the control lines This truth table is then used to implement the logic gates
  • 35. Implementing Main Control in Appendix D
  • 36. Operation of the Datapath for an R-type instruction: add add rd, rs, rt Although everything occurs in one clock cycle , we can think of four steps to execute the instruction: 1. Fetch the instruction from memory (instruction <- mem[PC]) and increment the PC (PC+4) 2. Instruction decode and read operands (R[rs], R[rt]) 3. Execute the actual operation (R[rs] + R[rt]) 4. Write back to the target register (R[rd] <- ALU) and update the PC (PC <- PC+4) op rs rt rd shamt funct — field boundaries at bits 0 6 11 16 21 26 31 — 6 bits 5 bits 5 bits 5 bits 5 bits 6 bits
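The four steps can be mimicked by a toy interpreter (Python; the dict-based machine state and the pre-decoded instruction tuple are simplifications for illustration, not the hardware):

```python
def step_add(state):
    """Single-cycle execution of add rd, rs, rt as four conceptual steps.
    state: {'pc': int, 'imem': {addr: (rs, rt, rd)}, 'reg': list of 32}."""
    rs, rt, rd = state["imem"][state["pc"]]    # 1. fetch (and decode)
    a, b = state["reg"][rs], state["reg"][rt]  # 2. read the two operands
    result = (a + b) & 0xFFFFFFFF              # 3. ALU performs the add
    state["reg"][rd] = result                  # 4. write back to rd
    state["pc"] += 4                           #    and PC <- PC + 4

state = {"pc": 0, "imem": {0: (2, 3, 1)}, "reg": [0] * 32}
state["reg"][2], state["reg"][3] = 7, 5
step_add(state)                                # behaves like add $1, $2, $3
assert state["reg"][1] == 12 and state["pc"] == 4
```

In the real datapath all four steps happen within one clock cycle; the sequencing here is only conceptual.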
  • 37. Fig. 4. 19 in p.324 Operation of Datapath: add Instruction Fetch at Start of Add instruction <- mem[PC]; PC + 4
  • 38. Operation of Datapath: add Instruction Decode of Add Fetch the two operands and decode the instruction: rs rt rd op address or shamt + f-code op rs rt rd shamt funct — field boundaries at bits 0 6 11 16 21 26 31 — 6 bits 5 bits 5 bits 5 bits 5 bits 6 bits
  • 39. Operation of Datapath: add ALU Operation during Add R[rs] + R[rt]
  • 40. Operation of Datapath: add Write Back at the End of Add R[rd] <- ALU; PC <- PC + 4
  • 41. Figure 4.19: Operation of Datapath in textbook: add
  • 42. Figure 4.20: Datapath Operation for I-type instruction: lw R[rt] <- Memory {R[rs] + SignExt[imm16]}
  • 43. Figure 4.21: Datapath Operation for I-type instruction: beq if (R[rs]-R[rt]==0) then Zero  1 else Zero  0 if (Zero==1) then PC=PC+4+signExt[imm16]*4; else PC = PC + 4
  • 44. Implementing Jumps A jump uses a word address Update the PC with the concatenation of: the top 4 bits of the old PC (PC+4) and the 26-bit jump address Like a branch, the low-order 2 bits of a jump target are always 00 (shift left by 2, i.e., multiply by 4) We need an extra control signal for the additional multiplexor. This control signal, called Jump, is asserted only when the instruction is a jump (decoded from the opcode) J-type 000010 address (26-bit immediate) 31:26 25:0
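The concatenation can be sketched in Python (the PC values in the examples are hypothetical):

```python
def jump_target(pc, addr26):
    """j target: top 4 bits of PC + 4 concatenated with the 26-bit
    address field shifted left 2 (low-order bits are always 00)."""
    return ((pc + 4) & 0xF0000000) | ((addr26 << 2) & 0x0FFFFFFF)

# Top nibble of PC + 4 (0x8) is kept; the 26-bit field supplies the rest
assert jump_target(0x80001000, 0x100) == 0x80000400
# The shifted field never spills into the top 4 bits
assert jump_target(0x00400000, 0x3FFFFFF) == 0x0FFFFFFC
```

Because only the top 4 bits of the PC survive, a jump can reach any word within the current 256 MB region.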
  • 45. Figure 4.24: Datapath With Jumps Added
  • 46. Why a Single-Cycle Implementation Is Not Used Today We have learned how to implement a single-cycle CPU; it is inefficient both in performance and hardware cost. The clock cycle is the same for all instructions in this design, and the longest delay determines the clock period. Critical path : the load instruction uses all five functional units in series: instruction memory  register file  ALU  data memory  register file. This violates the design principle of making the common case fast. We will improve performance by pipelining
  • 47. Performance of Single-Cycle Machine- Example in p.315 of the 3-ed. Assume that the operation time for the major functional units in this implementation are: Memory units: 2 ns, ALU and adders: 2 ns, Register file (read or write): 1 ns. Assuming that the multiplexers, control unit, PC accesses, sign-extension unit, and wires have no delay , which of the following implementations would be faster and by how much? An implementation in which every instruction operates in one clock cycle of a fixed length. An implementation where every instruction executes in one clock cycle using a variable-length clock , which for each instruction is only as long as it needed to be.(Such an approach is not terribly practical.) Use the instruction mix: 24% loads, 12% stores, 44% R-format instructions, 18% branches, and 2% jump.
  • 48. Answer: Recall CPU time = Instruction count x CPI x Clock cycle time = Instruction count x 1 x Clock cycle time = Instruction count x Clock cycle time The critical path of the different instruction types is: Performance of Single-Cycle Machine- Example in p.315 of the 3-ed.
Instruction class – Functional units used:
R-Format: Instruction Fetch, Register Access, ALU, Register Access
Load Word: Instruction Fetch, Register Access, ALU, Memory Access, Register Access
Store Word: Instruction Fetch, Register Access, ALU, Memory Access
Branch: Instruction Fetch, Register Access, ALU
Jump: Instruction Fetch
  • 49. Performance of Single-Cycle Machine- Example in p.315 of the 3-ed. The required length for each instruction type (Memory units: 2 ns, ALU and adders: 2 ns, Register file read or write: 1 ns):
Instruction class: Instruction Memory + Register Read + ALU Operation + Data Memory + Register Write = Total
R-Format: 2 + 1 + 2 + 0 + 1 = 6 ns
Load Word: 2 + 1 + 2 + 2 + 1 = 8 ns
Store Word: 2 + 1 + 2 + 2 = 7 ns
Branch: 2 + 1 + 2 = 5 ns
Jump: 2 = 2 ns
  • 50. Performance of Single-Cycle Machine- Example in p.315 of the 3-ed. The clock cycle : For a machine with a single fixed-length clock for all instructions: 8 ns For a machine with a variable-length clock: The average time per instruction with a variable clock = 8 x 24% + 7 x 12% + 6 x 44% + 5 x 18% + 2 x 2% = 6.3 ns The performance ratio: 8/6.3 = 1.27 The variable clock implementation is 1.27 times faster. Note: Implementing a variable-speed clock is extremely difficult.
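The whole calculation can be reproduced in a few lines of Python (unit delays and instruction mix taken from the example; note the textbook rounds 6.34 ns to 6.3 ns before forming the ratio 8/6.3 ≈ 1.27):

```python
delay = {"imem": 2, "regread": 1, "alu": 2, "dmem": 2, "regwrite": 1}  # ns
path = {
    "R-format": ["imem", "regread", "alu", "regwrite"],
    "lw":       ["imem", "regread", "alu", "dmem", "regwrite"],
    "sw":       ["imem", "regread", "alu", "dmem"],
    "beq":      ["imem", "regread", "alu"],
    "j":        ["imem"],
}
length = {i: sum(delay[u] for u in p) for i, p in path.items()}
mix = {"lw": 0.24, "sw": 0.12, "R-format": 0.44, "beq": 0.18, "j": 0.02}

fixed = max(length.values())                           # 8 ns (set by lw)
variable = sum(length[i] * f for i, f in mix.items())  # 6.34 ns, ~6.3 ns
assert fixed == 8 and abs(variable - 6.34) < 1e-6
assert round(fixed / 6.3, 2) == 1.27   # variable clock is ~1.27x faster
```

The fixed clock is set by the slowest instruction even though loads are only 24% of the mix, which is exactly the inefficiency pipelining attacks.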
  • 51. 4.5 An Overview of Pipelining Pipelined laundry: overlapping execution Parallelism improves performance Assume that each step needs 30 mins §4.5 An Overview of Pipelining Four loads: Speedup = 8/3.5 = 2.3 Non-stop: Speedup = 2n/(0.5n + 1.5) ≈ 4 = number of stages
  • 52. Pipelining Lessons Pipelining doesn’t help latency (the time for each task), but it improves the throughput of the entire workload. The pipeline rate is limited by the slowest stage. Multiple tasks work at the same time using different resources. Ideal speedup = number of pipe stages. Unbalanced stage lengths reduce the speedup. Stalls occur for dependences. (Laundry figure: tasks A–D in order on a timeline from 6 PM to 9 PM; intervals 30, 40, 40, 40, 40, 20 minutes.)
  • 53. MIPS Pipeline Five stages, one step per stage IF : Instruction fetch from memory ID : Instruction decode & register read EX : Execute operation or calculate address MEM : Access memory operand WB : Write result back to register
  • 54. Pipeline Performance : Example in P.333 Assume the time for stages is 100ps for register read or write and 200ps for other stages. Compare the pipelined datapath with the single-cycle datapath:
Instr: Instr fetch + Register read + ALU op + Memory access + Register write = Total time
lw: 200ps + 100ps + 200ps + 200ps + 100ps = 800ps
sw: 200ps + 100ps + 200ps + 200ps = 700ps
R-format: 200ps + 100ps + 200ps + 100ps = 600ps
beq: 200ps + 100ps + 200ps = 500ps
  • 55. Pipeline Performance : Fig. 4.27 in P.333 Single-cycle (T c = 800ps) Pipelined (T c = 200ps) Unbalanced stage lengths reduce the speedup of the pipeline
  • 56. Pipeline Speedup If all stages are balanced (i.e., all take the same time): Time between instructions (pipelined) = Time between instructions (nonpipelined) / Number of stages If not balanced, the speedup is less The speedup is due to increased throughput; latency (time for each instruction) does not decrease
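With the stage times from the example two slides back, the effect of unbalanced stages is easy to compute (a Python sketch):

```python
# IF, ID, EX, MEM, WB stage times from the example (ps)
stage_ps = [200, 100, 200, 200, 100]

single_cycle_tc = sum(stage_ps)   # 800 ps: the longest instruction (lw)
pipelined_tc = max(stage_ps)      # 200 ps: the slowest stage sets the clock

# Perfectly balanced stages would give a speedup equal to the stage
# count (5); the unbalanced stages reduce it to 800/200 = 4.
assert single_cycle_tc == 800 and pipelined_tc == 200
assert single_cycle_tc / pipelined_tc == 4
```

If every stage took 160 ps instead, the same 800 ps of work would pipeline at a 160 ps clock, giving the ideal speedup of 5.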
  • 57. Pipelining and ISA Design MIPS ISA designed for pipelining All instructions are 32-bits Easier to fetch and decode in one cycle c.f. x86: 1- to 17-byte instructions Few and regular instruction formats Can decode and read registers in one step Load/store addressing Can calculate the address in the 3rd stage, access memory in the 4th stage Alignment of memory operands Memory access takes only one cycle
  • 58. Hazards Situations that prevent starting the next instruction in the next cycle Structural hazards A required resource is busy Data hazards Need to wait for a previous instruction to complete its data read/write Control hazards (branch hazards) Deciding on a control action depends on a previous instruction Need to worry about branch instructions
  • 59. Structural Hazards Conflict for use of a resource Suppose that we had only a single memory instead of two memories (instruction and data) In the MIPS design Load/store requires a data access An instruction fetch would have to stall for that cycle Would cause a pipeline “ bubble ” Hence, pipelined datapaths require separate instruction/data memories lw $4, 400($s0) stall
  • 60. Structural Hazard: Single Memory Mem I n s t r. O r d e r Time Load Instr 1 Instr 2 Instr 3 Instr 4 Reg Mem Reg Reg Mem Reg ALU Mem ALU Mem Reg Mem Reg ALU Mem Reg Mem Reg ALU ALU Mem Reg Mem Reg
  • 61. Data Hazards An instruction depends on completion of a data access by a previous instruction add $s0 , $t0, $t1 sub $t2, $s0 , $t3 sub’s source $s0 depends on add’s destination $s0 (the arrow runs from the write of $s0 to the later read). Without forwarding, three bubbles are needed before sub can read $s0
  • 62. Solutions for data hazards Solution Software: compiler optimization (assembler) Hardware: forwarding (or bypassing) Forwarding (or bypassing) Use a result as soon as it is computed Don’t wait for it to be stored in a register Requires extra connections in the datapath
  • 63. Load-Use Data Hazard Can’t always avoid stalls by forwarding If the value has not been computed when it is needed Can’t forward backward in time! Even with forwarding, a stall is needed when an R-format instruction immediately follows a load instruction
  • 64. Compiler: Code Scheduling to Avoid Stalls – example in p.338 Reorder code to avoid use of a load result in the next instruction C code for A = B + E; C = B + F; Even with forwarding, a stall is needed when an R-format instruction immediately follows a load. Original (13 cycles, two stalls): lw $t1, 0($t0) lw $t2 , 4($t0) stall add $t3, $t1, $t2 sw $t3, 12($t0) lw $t4 , 8($t0) stall add $t5, $t1, $t4 sw $t5, 16($t0) Reordered (11 cycles): lw $t1, 0($t0) lw $t2 , 4($t0) lw $t4 , 8($t0) add $t3, $t1, $t2 sw $t3, 12($t0) add $t5, $t1, $t4 sw $t5, 16($t0)
  • 65. Control Hazards (Branch Hazards ) A branch determines the flow of control Fetching the next instruction (PC+4 or branch target) depends on the branch outcome The pipeline can’t always fetch the correct instruction The next instruction would stall in the IF stage to wait for the result of the branch, which is obtained in the EX stage  two stalls In the MIPS pipeline Beq needs to compare registers and compute the target early in the pipeline Add hardware to do it in the ID stage to reduce the stall 3 solutions
  • 67. Solution 1: Stall on Branch Wait until the branch outcome is determined before fetching the next instruction Assuming that we put in enough extra hardware so that we can test registers, calculate the branch target, and update the PC during the ID stage (see section 4.8 for details), there is still one stall
  • 68. Solution 2: Branch Prediction Longer pipelines can’t readily determine the branch outcome early Stall penalty becomes unacceptable Predict the outcome of the branch Only stall if the prediction is wrong The simplest approach is to predict that branches will always be not taken, and back up if wrong: fetch the instruction after the branch, with no delay A more sophisticated version is to predict that some branches will be taken and some will not be taken See the next slide
  • 69. More-Realistic Branch Prediction Static branch prediction Based on typical branch behavior Example: loop and if-statement branches Predict backward branches taken Predict forward branches not taken Dynamic branch prediction Hardware measures actual branch behavior e.g., record recent history of each branch Assume future behavior will continue the trend When wrong, stall while re-fetching, and update history More popular approach will be introduced in Section 4.8
  • 70. MIPS with Predict Not Taken Prediction correct Prediction incorrect
  • 71. Solution 3: Delayed Branch Solution Reorganize (Reorder) code to make use of the stall slot Done by assembler Example Original code add $4, $5, $6 beq $1, $2, 40 lw $3, 300($0) Bubble is removed by inserting add between beq and lw
  • 72. Pipeline Summary Pipelining improves performance by increasing instruction throughput Executes multiple instructions in parallel Each instruction has the same latency What makes it easy all instructions are the same length just a few instruction formats memory operands appear only in loads and stores What makes it hard? structural hazards : suppose we had only one memory control hazards : need to worry about branch instructions data hazards : an instruction depends on a previous instruction Instruction set design affects complexity of pipeline implementation Please try the Check Yourself in P.343 The BIG Picture
  • 73. Partition datapath into 5 stages : IF ( instruction fetch ), ID ( instruction decode and register file read ), EX ( execution or address calculation ), MEM ( data memory access ), WB ( write back to register) Associate resources with stages Ensure that flows do not conflict , or figure out how to resolve Assert control in appropriate stage 4.6 Pipelined Datapath and Control §4.6 Pipelined Datapath and Control
  • 74. We have a structural hazard: Two instructions try to write to the register file at the same time! This is why all instructions must have the same number of stages. Clock Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Cycle 6 Cycle 7 Cycle 8 Cycle 9 R-type R-type Load R-type R-type Ifetch Reg/Dec Exec Wr Ifetch Reg/Dec Exec Wr Ifetch Reg/Dec Exec Mem Wr Ifetch Reg/Dec Exec Wr Ifetch Reg/Dec Exec Wr Oops! We have a problem!
  • 75. Important observations for the pipeline: Each functional unit can only be used once per instruction, and each functional unit must be used at the same stage for all instructions. If instructions had different numbers of stages: Load would use the Register File’s write port during its 5th stage (Ifetch Reg/Dec Exec Mem Wr — 1 2 3 4 5), while R-type would use the Register File’s write port during its 4th stage (Ifetch Reg/Dec Exec Wr — 1 2 3 4)
  • 76. MIPS Pipelined Datapath WB MEM Right-to-left flow leads to hazards FIGURE 4.33 Each step of the instruction can be mapped onto the datapath from left to right . The only exceptions (right-to-left) are the update of the PC and the write-back step, which sends either the ALU result (can lead to a control hazard) or the data from memory to the left to be written into the register file (can lead to a data hazard) Control hazard data hazard
  • 77. FIGURE 4.35 Pipeline registers To solve the problem in the previous slide, we need registers ( pipeline registers ) between stages To hold information produced in the previous cycle FIGURE 4.35 The pipeline registers are labeled by the stages that they separate; for example, the first is labeled IF/ID . The registers must be wide enough to store all the data corresponding to the lines that go through them . For example, the IF/ID register must be 64 bits wide, because it must hold both the 32-bit instruction fetched from memory and the incremented 32-bit PC address . We will expand these registers over the course of this chapter, but for now the other three pipeline registers contain 128, 97, and 64 bits, respectively .
  • 78. Graphically representing pipelines Two basic styles of pipeline figures “ Single-clock-cycle” pipeline diagram Shows pipeline usage in a single cycle Highlights the resources used Used to show the details of what is happening within the pipeline during each clock cycle E.g. Figures 4.36~4.38 “ Multi-clock-cycle” diagram Graph of operation over time Used to give overviews of pipelining situations E.g. Figure 4.34 Will give more illustrations later We’ll first look at “ single-clock-cycle” diagrams for load & store
  • 79. 5 stages: IF : Instruction Fetch Fetch the instruction from the Instruction Memory ID : Instruction Decode Registers fetch and instruction decode EX : Calculate the memory address MEM : Read the data from the Data Memory WB : Write the data back to the register file 5 functional units in the pipeline datapath are: Instruction Memory for the IF stage Register File’s Read ports (busA and busB) for the ID stage ALU for the Exe stage Data Memory for the MEM stage Register File’s Write port (busW) for the WB stage Considering load Cycle 1 Cycle 2 Cycle 3 Cycle 4 Cycle 5 Load IF ID Exe Mem Wr
  • 80. single-clock-cycle diagrams Figure 4.36 IF for Load, Store, … instruction Read, PC+4 Means read
  • 81. ID for Load, Store, … FIGURE 4.36 Although the load needs only the top register in stage 2, the processor doesn’t know what instruction is being decoded , so it sign-extends the 16-bit constant and reads both registers into the ID/EX pipeline register.
  • 82. EX E for Load FIGURE 4.37 The register is added to the sign-extended immediate, and the sum is placed in the EX/MEM pipeline register.
  • 83. MEM for Load FIGURE 4.38 Data memory is read using the address in the EX/MEM pipeline registers, and the data is placed in the MEM/WB pipeline register.
  • 84. WB for Load FIGURE 4.38 Next, data is read from the MEM/WB pipeline register and written into the register file in the middle of the datapath. Note: there is a bug in this design that is repaired in Figure 4.41. Who will supply this address?
  • 85. FIGURE 4.41 Corrected Datapath for Load The write register number now comes from the MEM/WB pipeline register along with the data. The register number is passed from the ID pipe stage until it reaches the MEM/WB pipeline register, adding 5 more bits to the last three pipeline registers . I-type
  • 86. FIGURE 4.39 EX for Store Unlike the third stage of the load instruction, the second register value is loaded into the EX/MEM pipeline register to be used in the next stage . The IF and ID stages of store are the same as those of load I-type
  • 87. FIGURE 4.40 MEM for Store In the fourth stage, the data is written into data memory for the store . Note that the data comes from the EX/MEM pipeline register and that nothing is changed in the MEM/WB pipeline register.
  • 88. FIGURE 4.40 WB for Store Once the data is written in memory, there is nothing left for the store instruction to do, so nothing happens in stage 5 .
  • 89. Figure 4.43 in p.357 Multi-Cycle Pipeline Diagram Form showing resource usage
  • 90. Figure 4.44 in p.357 Multi-Cycle Pipeline Diagram Traditional form Commonly used when you take an exam
  • 91. Figure 4.45 in p.358 Single-Cycle Pipeline Diagram The single-cycle pipeline diagram corresponding to Figures 4.43 and 4.44 State of the pipeline in a given cycle (c.f. Figures 4.36~4.38, which only show the detail of one instruction) Takes more space Please try the Check Yourself in P.358
  • 92. FIGURE 4.46 Pipelined Control (Simplified version ) FIGURE 4.46 The pipelined datapath of Figure 4.41 with the control signals identified. This datapath borrows the control logic for PC source, register destination number, and ALU control from Section 4.4. Note that we now need the 6-bit function code of the instruction in the EX stage as input to ALU control , so these bits must also be included in the ID/EX pipeline register . Just as we added control to the single-cycle datapath in Section 4.4 (see the next slide for a recall ), we now add control to the pipelined datapath
  • 93. Recall: FIGURE 4.17: The simple datapath with the control unit. The PCsrc control line should be set if the instruction is branch on equal and the Zero output of ALU is true. Used to decide which operation is performed rs rt rd op address or f-code Used to decide if a branch is taken
  • 94. Pipelined Control Control signals derived from instruction As in single-cycle implementation For IF and ID stage, there is nothing special to the control in this pipeline control However, EXE , MEM and WB stages have to pass control signals along just like the data by using pipeline register Main control generates control signals during ID FIGURE 4.50 The control lines for the final three stages. Note that four of the nine control lines are used in the EX phase, with the remaining five control lines passed on to the EX/MEM pipeline register extended to hold the control lines; three are used during the MEM stage, and the last two are passed to MEM/WB for use in the WB stage.
  • 95. Pipelined Control (cont.) Signals for EX (ExtOp, ALUSrc, ...) are used 1 cycle later Signals for MEM (MemWr, Branch) are used 2 cycles later Signals for WB (MemtoReg, RegWr) are used 3 cycles later IF/ID Register ID/EX Register EX/MEM Register MEM/WB Register ID EX MEM WB The Main Control generates ExtOp, ALUOp, RegDst, ALUSrc (used in EX), Branch, MemWr (used in MEM), and MemtoReg, RegWr (used in WB); each group is carried along through the pipeline registers until its stage
  • 96. FIGURE 4.51 The pipelined datapath of Figure 4.46 with the control signals connected to the control portions of the pipe line registers. The control values for the last three stages are created during the ID stage and then placed in the ID/EX pipeline register . The control lines for each pipe stage are used, and remaining control lines are then passed to the next pipeline stage.
  • 97. Example: Show these five instructions going through the pipeline: lw $10, 20($1) sub $11, $2, $3 and $12, $4, $5 or $13, $6, $7 add $14, $8, $9 Answer: See the following figures. Supplement: Pipelined control (Write the control signals) -- Let’s Try it Out
  • 104. Cycle 7 Fig. 6.34
  • 105. Cycle 8 Fig. 6.34
  • 107. 4.7 Data Hazards : Forwarding vs. Stalling Data Hazards in ALU Instructions Consider this sequence: sub $2 , $1,$3 and $12, $2 ,$5 or $13,$6, $2 add $14, $2 , $2 sw $15,100( $2 ) We can resolve hazards with forwarding How do we detect when to forward? See the next slide §4.7 Data Hazards: Forwarding vs. Stalling
  • 108. Dependencies & Forwarding If register $2 had the value 10 before and -20 afterwards
  • 109. Detecting the Need to Forward Pass register numbers along the pipeline e.g., ID/EX.RegisterRs = the register number for Rs sitting in the ID/EX pipeline register This notation allows us to easily find dependences in a data hazard E.g. the ALU operand register numbers used in the EX stage are given by ID/EX.RegisterRs , ID/EX.RegisterRt Data hazards when 1a. EX/MEM.RegisterRd = ID/EX.RegisterRs 1b. EX/MEM.RegisterRd = ID/EX.RegisterRt (forward from the EX/MEM pipeline register) 2a. MEM/WB.RegisterRd = ID/EX.RegisterRs 2b. MEM/WB.RegisterRd = ID/EX.RegisterRt (forward from the MEM/WB pipeline register)
  • 110. Recall: the example in slide 108 Detecting the Need to Forward Data hazard in sub-and: 1a . EX/MEM.RegisterRd = ID/EX.RegisterRs = $2 sub-or: 2b . MEM/WB.RegisterRd = ID/EX.RegisterRt = $2
  • 111. Detecting the Need to Forward But only instructions that write to a register need forwarding! Some instructions, such as branches, do not write registers, so the policy mentioned above (in slide 109) is not accurate in all conditions One solution is to check whether the RegWrite signal will be active And forward only if Rd for that instruction is not $zero MIPS allows $0 as a destination when we want a result to be discarded, e.g. sll $0, $1, 2 (the shifted value would be written to $0, but $0 is hardwired to zero, so its possibly nonzero result must not be forwarded) Add EX/MEM.RegisterRd ≠ 0 to the first hazard condition and MEM/WB.RegisterRd ≠ 0 to the second
  • 112. FIGURE 4.54 Forwarding Paths hardware On the top are the ALU and pipeline registers before adding forwarding. On the bottom, the multiplexors have been expanded to add the forwarding paths , and we show the forwarding unit. This figure is a stylized drawing, however, leaving out details from the full datapath such as the sign extension hardware. Note that the ID/EX.RegisterRt field is shown twice, once to connect to the mux and once to the forwarding unit, but it is a single signal.
  • 113. FIGURE 4.55 The control values for the forwarding multiplexors
  • 114. Forwarding Conditions Now we can combine the conditions for detecting data hazards with the control signals learned above: EX hazard if (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0) and (EX/MEM.RegisterRd = ID/EX.RegisterRs)) ForwardA = 10 if (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0) and (EX/MEM.RegisterRd = ID/EX.RegisterRt)) ForwardB = 10 MEM hazard if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0) and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) ForwardA = 01 if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0) and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) ForwardB = 01
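As an illustration (not part of the original slides), the conditions above can be sketched in Python. Each pipeline register is modeled as a plain dict, and the field names (`RegWrite`, `Rd`, `Rs`, `Rt`) are assumptions of this sketch, not MIPS hardware:

```python
def forward_controls(ex_mem, mem_wb, id_ex):
    """Literal translation of the slide's forwarding conditions.

    ForwardA/B select the ALU inputs:
    "00" = register file, "10" = EX/MEM forward, "01" = MEM/WB forward.
    """
    forward_a = forward_b = "00"
    # EX hazard: result in the EX/MEM pipeline register
    if ex_mem["RegWrite"] and ex_mem["Rd"] != 0:
        if ex_mem["Rd"] == id_ex["Rs"]:
            forward_a = "10"
        if ex_mem["Rd"] == id_ex["Rt"]:
            forward_b = "10"
    # MEM hazard: result in the MEM/WB pipeline register.
    # Note: in a double data hazard this clause also fires and wrongly
    # overwrites the fresher EX/MEM forward; the revised condition on
    # slide 116 fixes that.
    if mem_wb["RegWrite"] and mem_wb["Rd"] != 0:
        if mem_wb["Rd"] == id_ex["Rs"]:
            forward_a = "01"
        if mem_wb["Rd"] == id_ex["Rt"]:
            forward_b = "01"
    return forward_a, forward_b
```

For the sub/and pair from slide 107 (sub $2,... now in EX/MEM, and $12,$2,$5 in EX), this yields ForwardA = 10.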
  • 115. More Complicated Data Hazard: Double Data Hazard Consider the sequence: add $1, $1, $2 add $1, $1, $3 add $1, $1, $4 Potential data hazards between the result of the instruction in WB, the result of the instruction in MEM, and the source operand of the instruction in the ALU Both hazards (EX and MEM) occur Want to use the most recent result The result is forwarded from the MEM stage (the EX/MEM register) because it is the more recent result
  • 116. Revised Forwarding Condition for the Double Data Hazard Continuing with the problem on the previous slide, we have to revise the MEM hazard condition Forward from MEM/WB only if the EX hazard condition isn't true Revised MEM hazard condition if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0) and not (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0) and (EX/MEM.RegisterRd = ID/EX.RegisterRs)) and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) ForwardA = 01 if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0) and not (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0) and (EX/MEM.RegisterRd = ID/EX.RegisterRt)) and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) ForwardB = 01
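A hedged Python sketch of the revised MEM-hazard check (as before, modeling pipeline registers as dicts is an assumption of this sketch): the MEM/WB forward now fires only when the corresponding EX hazard does not.

```python
def mem_forward_controls(ex_mem, mem_wb, id_ex):
    """Revised MEM-hazard conditions from slide 116.

    Returns the MEM/WB contribution to ForwardA/ForwardB:
    "01" = forward from MEM/WB, "00" = no MEM/WB forward.
    """
    forward_a = forward_b = "00"

    def ex_hazard(src_reg):
        # True when the fresher EX/MEM result already covers this source
        return (ex_mem["RegWrite"] and ex_mem["Rd"] != 0
                and ex_mem["Rd"] == src_reg)

    if mem_wb["RegWrite"] and mem_wb["Rd"] != 0:
        if not ex_hazard(id_ex["Rs"]) and mem_wb["Rd"] == id_ex["Rs"]:
            forward_a = "01"
        if not ex_hazard(id_ex["Rt"]) and mem_wb["Rd"] == id_ex["Rt"]:
            forward_b = "01"
    return forward_a, forward_b
```

In the triple-add sequence of slide 115, both older instructions write $1, but this check yields "00" for ForwardA: the EX-hazard path supplies the more recent value instead.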
  • 117. FIGURE 4.56 The datapath modified to resolve hazards via forwarding
  • 118. Load-Use Data Hazard Need to stall for one cycle
  • 119. Load-Use Hazard Detection Check when the using instruction is decoded in the ID stage Its ALU operand register numbers are given by IF/ID.RegisterRs, IF/ID.RegisterRt Load-use hazard when ID/EX.MemRead and ((ID/EX.RegisterRt = IF/ID.RegisterRs) or (ID/EX.RegisterRt = IF/ID.RegisterRt)) If detected, stall and insert a bubble
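The load-use test above translates directly into a sketch; as with the forwarding examples, the dict field names are assumptions of this illustration:

```python
def load_use_hazard(id_ex, if_id):
    """Detect a load-use hazard in the ID stage.

    id_ex models the ID/EX pipeline register (the load, if any, now in EX);
    if_id models the IF/ID register (the instruction being decoded).
    A load's destination is its Rt field, hence ID/EX.RegisterRt below.
    """
    return (id_ex["MemRead"]
            and (id_ex["Rt"] == if_id["Rs"] or id_ex["Rt"] == if_id["Rt"]))
```

E.g., lw $2, 20($1) followed immediately by and $4, $2, $5 triggers the check, so the pipeline stalls one cycle.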
  • 120. How to Stall the Pipeline Force control values in the ID/EX register to 0 EX, MEM and WB then do nops (no-operations) Prevent update of the PC and IF/ID register The using instruction is decoded again The following instruction is fetched again The 1-cycle stall allows MEM to read data for lw Can subsequently forward to the EX stage
  • 121. Stall/Bubble in the Pipeline Stall inserted here
  • 122. Stall/Bubble in the Pipeline Or, more accurately…
  • 123. Datapath with Hazard Detection
  • 124. Stalls and Performance Stalls reduce performance But are required to get correct results Compiler can arrange code to avoid hazards and stalls Requires knowledge of the pipeline structure The BIG Picture
  • 125. Branch Hazards §4.8 Control Hazards If the branch outcome is determined in MEM, the PC is updated there and the three instructions already fetched must be flushed (set their control values to 0)
  • 126. Reducing Branch Delay Move hardware to determine outcome to ID stage Target address adder Register comparator Example: branch taken 36: sub $10, $4, $8 40: beq $1, $3, 7 44: and $12, $2, $5 48: or $13, $2, $6 52: add $14, $4, $2 56: slt $15, $6, $7 ... 72: lw $4, 50($7)
  • 129. Data Hazards for Branches If a comparison register is a destination of the 2nd or 3rd preceding ALU instruction … add $4, $5, $6 add $1, $2, $3 beq $1, $4, target Can resolve using forwarding IF ID EX MEM WB IF ID EX MEM WB IF ID EX MEM WB IF ID EX MEM WB
  • 130. Data Hazards for Branches If a comparison register is a destination of the preceding ALU instruction or the 2nd preceding load instruction Need 1 stall cycle beq stalled IF ID ID EX MEM WB add $4, $5, $6 lw $1, addr beq $1, $4, target IF ID EX MEM WB IF ID EX MEM WB
  • 131. Data Hazards for Branches If a comparison register is a destination of the immediately preceding load instruction Need 2 stall cycles beq stalled IF ID ID ID EX MEM WB beq stalled lw $1, addr beq $1, $0, target IF ID EX MEM WB
  • 132. Dynamic Branch Prediction In deeper and superscalar pipelines, branch penalty is more significant Use dynamic prediction Branch prediction buffer (aka branch history table) Indexed by recent branch instruction addresses Stores outcome (taken/not taken) To execute a branch Check table, expect the same outcome Start fetching from fall-through or target If wrong, flush pipeline and flip prediction
  • 133. 1-Bit Predictor: Shortcoming Inner loop branches mispredicted twice! outer: … … inner: … … beq …, …, inner … beq …, …, outer Mispredict as taken on last iteration of inner loop Then mispredict as not taken on first iteration of inner loop next time around
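A small simulation (illustrative, not from the slides) makes the cost concrete. Model the inner-loop branch of the nested loop above as taken 9 times then not taken, repeated for 4 outer iterations:

```python
def one_bit_mispredictions(outcomes):
    """Count mispredictions of a 1-bit predictor on one branch.

    outcomes is a sequence of actual results, True = taken. The predictor
    starts out predicting taken (an arbitrary assumption of this sketch)
    and always flips to the last actual outcome.
    """
    prediction, misses = True, 0
    for taken in outcomes:
        if prediction != taken:
            misses += 1
        prediction = taken  # remember only the most recent outcome
    return misses

# Inner-loop branch: taken 9 times, then not taken, for 4 outer iterations.
trace = ([True] * 9 + [False]) * 4
print(one_bit_mispredictions(trace))  # prints 7
```

After the first outer iteration the predictor misses twice per pass: once at the final not-taken branch, and once more when the inner loop is re-entered predicting not taken.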
  • 134. 2-Bit Predictor Only change prediction on two successive mispredictions
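The same trace run through a saturating 2-bit counter shows the improvement; the starting state and the 0–3 encoding here are assumptions of this sketch:

```python
def two_bit_mispredictions(outcomes):
    """2-bit saturating counter: states 0,1 predict not taken; 2,3 taken.

    Starts at 3 (strongly taken). Prediction changes only after two
    successive mispredictions, as the slide describes.
    """
    state, misses = 3, 0
    for taken in outcomes:
        if (state >= 2) != taken:
            misses += 1
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return misses

# Same inner-loop branch trace as the 1-bit example on the previous slide.
trace = ([True] * 9 + [False]) * 4
print(two_bit_mispredictions(trace))  # prints 4
```

One miss per pass through the inner loop (at the final not-taken branch) instead of two: the single not-taken outcome only nudges the counter from 3 to 2, so the predictor still says taken when the loop is re-entered.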
  • 135. Calculating the Branch Target Even with predictor, still need to calculate the target address 1-cycle penalty for a taken branch Branch target buffer Cache of target addresses Indexed by PC when instruction fetched If hit and instruction is branch predicted taken, can fetch target immediately
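A branch target buffer can be sketched as a small direct-mapped cache keyed by the fetch PC; the entry count and indexing here are illustrative assumptions, not a real design:

```python
class BranchTargetBuffer:
    """Toy direct-mapped branch target buffer."""

    def __init__(self, entries=16):
        self.entries = entries
        self.table = {}  # index -> (pc tag, target address)

    def lookup(self, pc):
        """Return the cached target on a hit, else None (fetch pc + 4)."""
        index = (pc >> 2) % self.entries  # word-aligned PCs: drop low bits
        entry = self.table.get(index)
        if entry is not None and entry[0] == pc:
            return entry[1]  # hit: can fetch the target next cycle
        return None

    def update(self, pc, target):
        """Record a taken branch's target, evicting any colliding entry."""
        self.table[(pc >> 2) % self.entries] = (pc, target)
```

A miss (or a not-taken prediction) simply falls through to PC + 4; on a hit for a branch predicted taken, the target can be fetched with no penalty, as the slide notes.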
  • 136. Exceptions and Interrupts “Unexpected” events requiring a change in flow of control Different ISAs use the terms differently Exception Arises within the CPU e.g., undefined opcode, overflow, syscall, … Interrupt From an external I/O controller Dealing with them without sacrificing performance is hard §4.9 Exceptions
  • 137. Handling Exceptions In MIPS, exceptions are managed by a System Control Coprocessor (CP0) Save PC of offending (or interrupted) instruction In MIPS: Exception Program Counter (EPC) Save indication of the problem In MIPS: Cause register We’ll assume 1 bit 0 for undefined opcode, 1 for overflow Jump to handler at 8000 0180
  • 138. An Alternate Mechanism Vectored Interrupts Handler address determined by the cause Example: Undefined opcode: C000 0000 Overflow: C000 0020 … : C000 0040 Instructions either Deal with the interrupt, or Jump to real handler
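The two dispatch schemes can be contrasted in a few lines; the addresses are the ones on these slides, and the 0x20 stride between vectors is read off the example:

```python
EXC_HANDLER_BASE = 0x80000180  # single-entry MIPS scheme (slide 137)
VECTOR_BASE = 0xC0000000       # vectored scheme from this slide
VECTOR_STRIDE = 0x20           # spacing between the example vectors

def handler_address(vectored, cause_code):
    """Address the hardware transfers control to on an exception.

    cause_code: 0 = undefined opcode, 1 = overflow, following the 1-bit
    Cause convention assumed on the previous slide.
    """
    if vectored:
        # Handler address is determined directly by the cause
        return VECTOR_BASE + cause_code * VECTOR_STRIDE
    # Single entry point: the handler itself must read Cause and dispatch
    return EXC_HANDLER_BASE
```

With vectoring, the short routine at each vector either handles the event or jumps to the real handler, so the software dispatch on Cause disappears from the common path.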
  • 139. Handler Actions Read cause, and transfer to relevant handler Determine action required If restartable Take corrective action use EPC to return to program Otherwise Terminate program Report error using EPC, cause, …
  • 140. Exceptions in a Pipeline Another form of control hazard Consider overflow on add in EX stage add $1, $2, $1 Prevent $1 from being clobbered Complete previous instructions Flush add and subsequent instructions Set Cause and EPC register values Transfer control to handler Similar to mispredicted branch Use much of the same hardware
  • 142. Exception Properties Restartable exceptions Pipeline can flush the instruction Handler executes, then returns to the instruction Refetched and executed from scratch PC saved in EPC register Identifies causing instruction Actually PC + 4 is saved Handler must adjust
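Following this slide's convention that EPC holds PC + 4, the handler's adjustment before returning can be sketched in one line:

```python
def restart_address(epc):
    """EPC holds PC + 4 (per the slide), so a handler that wants to
    re-execute the offending instruction subtracts 4 before returning."""
    return epc - 4
```

E.g., if the add at address 4C on the next slide overflows, EPC holds 50, and the handler recovers 4C.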
  • 143. Exception Example Exception on add in 40 sub $11, $2, $4 44 and $12, $2, $5 48 or $13, $2, $6 4C add $1, $2, $1 50 slt $15, $6, $7 54 lw $16, 50($7) … Handler 80000180 sw $25, 1000($0) 80000184 sw $26, 1004($0) …
  • 146. Multiple Exceptions Pipelining overlaps multiple instructions Could have multiple exceptions at once Simple approach: deal with the exception from the earliest instruction Flush subsequent instructions “Precise” exceptions In complex pipelines Multiple instructions issued per cycle Out-of-order completion Maintaining precise exceptions is difficult!
  • 147. Imprecise Exceptions Just stop pipeline and save state Including exception cause(s) Let the handler work out Which instruction(s) had exceptions Which to complete or flush May require “manual” completion Simplifies hardware, but more complex handler software Not feasible for complex multiple-issue out-of-order pipelines
  • 148. Instruction-Level Parallelism (ILP) Pipelining: executing multiple instructions in parallel To increase ILP Deeper pipeline Less work per stage  shorter clock cycle Multiple issue Replicate pipeline stages  multiple pipelines Start multiple instructions per clock cycle CPI < 1, so use Instructions Per Cycle (IPC) E.g., a 4 GHz 4-way multiple-issue processor peaks at 16 BIPS, i.e., peak CPI = 0.25, peak IPC = 4 But dependencies reduce this in practice §4.10 Parallelism and Advanced Instruction Level Parallelism
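The peak figures on this slide check out with a line of arithmetic:

```python
clock_hz = 4e9      # 4 GHz clock
issue_width = 4     # 4-way multiple issue

# Peak rate: one full issue packet every cycle
peak_bips = clock_hz * issue_width / 1e9  # billions of instructions/sec
peak_cpi = 1 / issue_width                # cycles per instruction at peak

print(peak_bips, peak_cpi)  # prints 16.0 0.25
```

Real programs fall well short of this peak because dependencies, branches, and memory stalls prevent filling every issue slot.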
  • 149. Multiple Issue Static multiple issue Compiler groups instructions to be issued together Packages them into “issue slots” Compiler detects and avoids hazards Dynamic multiple issue CPU examines instruction stream and chooses instructions to issue each cycle Compiler can help by reordering instructions CPU resolves hazards using advanced techniques at runtime
  • 150. Speculation “Guess” what to do with an instruction Start the operation as soon as possible Check whether the guess was right If so, complete the operation If not, roll back and do the right thing Common to static and dynamic multiple issue Examples Speculate on branch outcome Roll back if path taken is different Speculate on load Roll back if location is updated
  • 151. Compiler/Hardware Speculation Compiler can reorder instructions e.g., move load before branch Can include “fix-up” instructions to recover from incorrect guess Hardware can look ahead for instructions to execute Buffer results until it determines they are actually needed Flush buffers on incorrect speculation
  • 152. Speculation and Exceptions What if exception occurs on a speculatively executed instruction? e.g., speculative load before null-pointer check Static speculation Can add ISA support for deferring exceptions Dynamic speculation Can buffer exceptions until instruction completion (which may not occur)
  • 153. Static Multiple Issue Compiler groups instructions into “issue packets” Group of instructions that can be issued on a single cycle Determined by pipeline resources required Think of an issue packet as a very long instruction Specifies multiple concurrent operations  Very Long Instruction Word (VLIW)
  • 154. Scheduling Static Multiple Issue Compiler must remove some/all hazards Reorder instructions into issue packets No dependencies within a packet Possibly some dependencies between packets Varies between ISAs; compiler must know! Pad with nop if necessary
  • 155. MIPS with Static Dual Issue Two-issue packets One ALU/branch instruction One load/store instruction 64-bit aligned ALU/branch, then load/store Pad an unused slot with nop Each packet starts one cycle after the previous one:
Address n    (ALU/branch):  IF ID EX MEM WB
Address n+4  (load/store):  IF ID EX MEM WB
Address n+8  (ALU/branch):     IF ID EX MEM WB
Address n+12 (load/store):     IF ID EX MEM WB
Address n+16 (ALU/branch):        IF ID EX MEM WB
Address n+20 (load/store):        IF ID EX MEM WB
  • 156. MIPS with Static Dual Issue
  • 157. Hazards in the Dual-Issue MIPS More instructions executing in parallel EX data hazard Forwarding avoided stalls with single issue Now can’t use an ALU result in a load/store in the same packet add $t0, $s0, $s1 lw $s2, 0($t0) Split into two packets, effectively a stall Load-use hazard Still one cycle of use latency, but now two instructions are delayed More aggressive scheduling required
  • 158. Scheduling Example Schedule this for dual-issue MIPS Loop: lw $t0, 0($s1) # $t0=array element addu $t0, $t0, $s2 # add scalar in $s2 sw $t0, 0($s1) # store result addi $s1, $s1, –4 # decrement pointer bne $s1, $zero, Loop # branch $s1!=0 IPC = 5/4 = 1.25 (c.f. peak IPC = 2)
cycle  ALU/branch              Load/store
1      Loop: nop               lw $t0, 0($s1)
2      addi $s1, $s1, –4       nop
3      addu $t0, $t0, $s2      nop
4      bne $s1, $zero, Loop    sw $t0, 4($s1)
  • 159. Loop Unrolling Replicate loop body to expose more parallelism Reduces loop-control overhead Use different registers per replication Called “register renaming” Avoid loop-carried “anti-dependencies” Store followed by a load of the same register Aka “name dependence” Reuse of a register name
  • 160. Loop Unrolling Example IPC = 14/8 = 1.75 Closer to 2, but at the cost of registers and code size
cycle  ALU/branch                 Load/store
1      Loop: addi $s1, $s1, –16   lw $t0, 0($s1)
2      nop                        lw $t1, 12($s1)
3      addu $t0, $t0, $s2         lw $t2, 8($s1)
4      addu $t1, $t1, $s2         lw $t3, 4($s1)
5      addu $t2, $t2, $s2         sw $t0, 16($s1)
6      addu $t3, $t3, $s2         sw $t1, 12($s1)
7      nop                        sw $t2, 8($s1)
8      bne $s1, $zero, Loop       sw $t3, 4($s1)
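The IPC claims of the two schedules are easy to verify, with counts taken from the tables on slides 158 and 160:

```python
# Original loop: 5 useful instructions issued over 4 cycles
scheduled_ipc = 5 / 4
# 4x-unrolled loop: 14 instructions (nops excluded) over 8 cycles
unrolled_ipc = 14 / 8

print(scheduled_ipc, unrolled_ipc)  # prints 1.25 1.75 (peak is 2.0)
```

Unrolling closes over half of the remaining gap to the dual-issue peak, paid for with four temporaries ($t0–$t3) and a 4x larger loop body.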
  • 161. Dynamic Multiple Issue “Superscalar” processors CPU decides whether to issue 0, 1, 2, … each cycle Avoiding structural and data hazards Avoids the need for compiler scheduling Though it may still help Code semantics ensured by the CPU
  • 162. Dynamic Pipeline Scheduling Allow the CPU to execute instructions out of order to avoid stalls But commit result to registers in order Example lw $t0 , 20($s2) addu $t1, $t0 , $t2 sub $s4, $s4, $t3 slti $t5, $s4, 20 Can start sub while addu is waiting for lw
  • 163. Dynamically Scheduled CPU Results are also sent to any waiting reservation stations Reorder buffer for register writes Can supply operands for issued instructions Preserves dependencies Reservation stations hold pending operands
  • 164. Register Renaming Reservation stations and reorder buffer effectively provide register renaming On instruction issue to reservation station If operand is available in register file or reorder buffer Copied to reservation station No longer required in the register; can be overwritten If operand is not yet available It will be provided to the reservation station by a functional unit Register update may not be required
  • 165. Speculation Predict branch and continue issuing Don’t commit until branch outcome determined Load speculation Avoid load and cache miss delay Predict the effective address Predict loaded value Load before completing outstanding stores Bypass stored values to load unit Don’t commit load until speculation cleared
  • 166. Why Do Dynamic Scheduling? Why not just let the compiler schedule code? Not all stalls are predictable e.g., cache misses Can’t always schedule around branches Branch outcome is dynamically determined Different implementations of an ISA have different latencies and hazards
  • 167. Does Multiple Issue Work? Yes, but not as much as we’d like Programs have real dependencies that limit ILP Some dependencies are hard to eliminate e.g., pointer aliasing Some parallelism is hard to expose Limited window size during instruction issue Memory delays and limited bandwidth Hard to keep pipelines full Speculation can help if done well The BIG Picture
  • 168. Power Efficiency Complexity of dynamic scheduling and speculation requires power Multiple simpler cores may be better
Microprocessor  Year  Clock Rate  Pipeline Stages  Issue width  Out-of-order/Speculation  Cores  Power
i486            1989  25MHz       5                1            No                        1      5W
Pentium         1993  66MHz       5                2            No                        1      10W
Pentium Pro     1997  200MHz      10               3            Yes                       1      29W
P4 Willamette   2001  2000MHz     22               3            Yes                       1      75W
P4 Prescott     2004  3600MHz     31               3            Yes                       1      103W
Core            2006  2930MHz     14               4            Yes                       2      75W
UltraSparc III  2003  1950MHz     14               4            No                        1      90W
UltraSparc T1   2005  1200MHz     6                1            No                        8      70W
  • 169. The Opteron X4 Microarchitecture §4.11 Real Stuff: The AMD Opteron X4 (Barcelona) Pipeline 72 physical registers
  • 170. The Opteron X4 Pipeline Flow For integer operations FP is 5 stages longer Up to 106 RISC-ops in progress Bottlenecks Complex instructions with long dependencies Branch mispredictions Memory access delays
  • 171. Fallacies Pipelining is easy (!) The basic idea is easy The devil is in the details e.g., detecting data hazards Pipelining is independent of technology So why haven’t we always done pipelining? More transistors make more advanced techniques feasible Pipeline-related ISA design needs to take account of technology trends e.g., predicated instructions §4.13 Fallacies and Pitfalls
  • 172. Pitfalls Poor ISA design can make pipelining harder e.g., complex instruction sets (VAX, IA-32) Significant overhead to make pipelining work IA-32 micro-op approach e.g., complex addressing modes Register update side effects, memory indirection e.g., delayed branches Advanced pipelines have long delay slots
  • 173. Concluding Remarks ISA influences design of datapath and control Datapath and control influence design of ISA Pipelining improves instruction throughput using parallelism More instructions completed per second Latency for each instruction not reduced Hazards: structural, data, control Multiple issue and dynamic scheduling (ILP) Dependencies limit achievable parallelism Complexity leads to the power wall §4.14 Concluding Remarks

Editor's Notes

  • #9: 20 May 2010
  • #37: OK, let's get on with today's lecture by looking at the simple add instruction. In terms of Register Transfer Language, this is what the Add instruction needs to do. First, you need to fetch the instruction from Memory. Then you perform the actual add operation. More specifically: (a) You add the contents of the registers specified by the Rs and Rt fields of the instruction. (b) Then you write the result to the register specified by the Rd field. And finally, you need to update the program counter to point to the next instruction. Now, let's take a detailed look at the datapath during the various phases of this instruction. +2 = 10 min. (X:50)
  • #47: Well, the last slide pretty much illustrates one of the biggest disadvantages of the single-cycle implementation: it has a long cycle time. More specifically, the cycle time must be long enough for the load instruction, which has the following components: clock-to-Q time of the PC, .... Having a long cycle time is a big problem, but not the only problem. Another problem of this single-cycle implementation is that this cycle time, which is long enough for the load instruction, is too long for all other instructions. We will show you why this is bad and what we can do about it in the next few lectures. That's all for today. +2 = 79 min (Y:59)
  • #75: What happens if we try to pipeline the R-type instructions with the Load instructions? Well, we have a problem here!!! We end up having two instructions trying to write to the register file at the same time! Why do we have this problem? Well, the reason for this problem is that there is something I have not yet told you. +1 = 16 min. (X:56)
  • #76: I already told you that in order for the pipeline to work perfectly, each functional unit can ONLY be used once per instruction. What I have not told you is that this (1st bullet) is a necessary but NOT sufficient condition for the pipeline to work. The other condition to prevent pipeline hiccups is that each functional unit must be used at the same stage for all instructions. For example here, the load instruction uses the Register File write port during its 5th stage, but the R-type instruction right now will use the Register File port during its 4th stage. This (5 versus 4) is what caused our problem. How do we solve it? We have 2 solutions. +1 = 17 min. (X:57)
  • #80: As shown here, each of these five steps will take one clock cycle to complete. And in pipeline terminology, each step is referred to as one stage of the pipeline. +1 = 8 min. (X:48)
  • #96: The main control here is identical to the one in the single-cycle processor. It generates all the control signals necessary for a given instruction during that instruction's Reg/Decode stage. All these control signals will be saved in the ID/Exec pipeline register at the end of the Reg/Decode cycle. The control signals for the Exec stage (ALUSrc, ... etc.) come from the output of the ID/Exec register. That is, they are delayed ONE cycle from the cycle they are generated. The rest of the control signals that are not used during the Exec stage are passed down the pipeline and saved in the Exec/Mem register. The control signals for the Mem stage (MemWr, Branch) come from the output of the Exec/Mem register. That is, they are delayed two cycles from the cycle they are generated. Finally, the control signals for the Wr stage (MemtoReg & RegWr) come from the output of the Exec/Wr register: they are delayed three cycles from the cycle they are generated. +2 = 45 min. (Y:45)