CPU Performance
Outline
 Response Time and Throughput
 Performance and Execution Time
 Clock Cycles Per Instruction (CPI)
 MIPS as a Performance Measure
 Amdahl’s Law
 Benchmarks
 Performance and Power
What is Performance?
 How can we make intelligent choices about computers?
 Why does some computer hardware perform better on some programs but worse on others?
 How do we measure the performance of a computer?
 Which factors are hardware related? Which are software related?
 How does a machine's instruction set affect performance?
 Understanding performance is key to understanding the underlying organizational motivation
Response Time and Throughput
 Response Time
 Time between start and completion of a task, as observed by end user
 Response Time = CPU Time + Waiting Time (I/O, OS scheduling, etc.)
 Throughput
 Number of tasks the machine can run in a given period of time
 Decreasing execution time improves throughput
 Example: using a faster version of a processor
 Less time to run a task → more tasks can be executed
 Increasing throughput can also improve response time
 Example: increasing number of processors in a multiprocessor
 More tasks can be executed in parallel
 Execution time of individual sequential tasks is not changed
 But less waiting time in scheduling queue reduces response time
Book's Definition of Performance
 For some program running on machine X:
 PerformanceX = 1 / Execution timeX
 X is n times faster than Y when:
 PerformanceX / PerformanceY = Execution timeY / Execution timeX = n
What do we mean by Execution Time?
 Real Elapsed Time
 Counts everything:
 Waiting time, input/output, disk access, OS scheduling, etc.
 Useful number, but often not good for comparison purposes
 Our Focus: CPU Execution Time
 Time spent executing the program instructions
 Doesn't count the waiting time for I/O or OS scheduling
 Can be measured in seconds, or
 Can be related to the number of CPU clock cycles
Clock Cycles
 Clock cycle = Clock period = 1 / Clock rate
 Clock rate = Clock frequency = Cycles per second
 1 Hz = 1 cycle/sec    1 KHz = 10^3 cycles/sec
 1 MHz = 10^6 cycles/sec    1 GHz = 10^9 cycles/sec
 A 2 GHz clock has a cycle time = 1/(2×10^9) = 0.5 nanosecond (ns)
 We often use clock cycles to report CPU execution time
 [Figure: a clock waveform divided into Cycle 1, Cycle 2, Cycle 3]
 CPU Execution Time = CPU cycles × cycle time = CPU cycles / Clock rate
Improving Performance
 To improve performance, we need to
 Reduce number of clock cycles required by a program, or
 Reduce clock cycle time (increase the clock rate)
 Example:
 A program runs in 10 seconds on computer X with 2 GHz clock
 What is the number of CPU cycles on computer X ?
 We want to design computer Y to run same program in 6 seconds
 But computer Y requires 10% more cycles to execute program
 What is the clock rate for computer Y ?
 Solution:
 CPU cycles on computer X = 10 sec × 2×10^9 cycles/s = 20×10^9 cycles
 CPU cycles on computer Y = 1.1 × 20×10^9 = 22×10^9 cycles
 Clock rate for computer Y = 22×10^9 cycles / 6 sec = 3.67 GHz
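As a quick check of the arithmetic, here is a minimal Python sketch of the example above (variable names are illustrative, not from the slides):

```python
# Worked example: find the clock rate computer Y needs to run the same
# program in 6 seconds, given that it needs 10% more cycles than computer X.

time_x = 10.0            # seconds on computer X
clock_rate_x = 2e9       # 2 GHz clock on computer X

cycles_x = time_x * clock_rate_x        # 20e9 cycles
cycles_y = 1.1 * cycles_x               # Y needs 10% more cycles: 22e9
target_time_y = 6.0                     # desired runtime on Y, in seconds

clock_rate_y = cycles_y / target_time_y
print(f"Clock rate for Y: {clock_rate_y / 1e9:.2f} GHz")   # ~3.67 GHz
```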
Clock Cycles Per Instruction (CPI)
 Instructions take different numbers of cycles to execute
 Multiplication takes more time than addition
 Floating-point operations take longer than integer ones
 Accessing memory takes more time than accessing registers
 CPI is the average number of clock cycles per instruction
 Important point: changing the cycle time often changes the number of cycles required for various instructions (more later)
 [Figure: 7 instructions I1–I7 executing over 14 clock cycles, so CPI = 14/7 = 2]
Performance Equation
 To execute, a given program will require …
 Some number of machine instructions
 Some number of clock cycles
 Some number of seconds
 We can relate CPU clock cycles to instruction count
 Performance Equation (in terms of instruction count):
 CPU cycles = Instruction Count × CPI
 Time = Instruction Count × CPI × cycle time
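A minimal Python sketch of the performance equation, with illustrative names and values:

```python
def cpu_time(instruction_count: float, cpi: float, clock_rate_hz: float) -> float:
    """Time = Instruction Count x CPI x cycle time, where cycle time = 1 / clock rate."""
    return instruction_count * cpi / clock_rate_hz

# Example: 20 billion instructions at CPI 2 on a 2 GHz clock -> 20 seconds
print(cpu_time(20e9, 2.0, 2e9))
```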
Factors Impacting Performance
Time = Instruction Count × CPI × cycle time

Factor         I-Count   CPI   Cycle Time
Program           X       X
Compiler          X       X
ISA               X       X       X
Organization              X       X
Technology                        X
Using the Performance Equation
 Suppose we have two implementations of the same ISA
 For a given program
 Machine A has a clock cycle time of 250 ps and a CPI of 2.2
 Machine B has a clock cycle time of 500 ps and a CPI of 1.0
 Which machine is faster for this program, and by how much?
 Solution:
 Both computers execute the same count of instructions = I
 CPU execution time (A) = I × 2.2 × 250 ps = 550 × I ps
 CPU execution time (B) = I × 1.0 × 500 ps = 500 × I ps
 Computer B is faster than A by a factor of (550 × I) / (500 × I) = 1.1
Determining the CPI
 Different types of instructions have different CPI
Let CPIi = clocks per instruction for class i of instructions
Let Ci = instruction count for class i of instructions
 Designers often obtain CPI by a detailed simulation
 Hardware counters are also used for operational CPUs
 CPU cycles = Σ (CPIi × Ci), summed over i = 1 to n
 CPI = [Σ (CPIi × Ci)] / [Σ Ci], summed over i = 1 to n
Example on Determining the CPI
 Problem
A compiler designer is trying to decide between two code sequences for a
particular machine. Based on the hardware implementation, there are three
different classes of instructions: class A, class B, and class C, and they
require one, two, and three cycles per instruction, respectively.
The first code sequence has 5 instructions: 2 of A, 1 of B, and 2 of C
The second sequence has 6 instructions: 4 of A, 1 of B, and 1 of C
Compute the CPU cycles for each sequence. Which sequence is faster?
What is the CPI for each sequence?
 Solution
CPU cycles (1st sequence) = (2×1) + (1×2) + (2×3) = 2+2+6 = 10 cycles
CPU cycles (2nd sequence) = (4×1) + (1×2) + (1×3) = 4+2+3 = 9 cycles
Second sequence is faster, even though it executes one extra instruction
CPI (1st sequence) = 10/5 = 2    CPI (2nd sequence) = 9/6 = 1.5
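A short Python sketch of the CPI formulas, applied to the two code sequences above (the helper name is illustrative):

```python
def cycles_and_cpi(classes):
    """classes: list of (instruction_count, cpi_for_class) pairs."""
    total_cycles = sum(count * cpi for count, cpi in classes)
    total_instructions = sum(count for count, _ in classes)
    return total_cycles, total_cycles / total_instructions

# Sequence 1: 2 of A (1 cycle), 1 of B (2 cycles), 2 of C (3 cycles)
print(cycles_and_cpi([(2, 1), (1, 2), (2, 3)]))   # (10, 2.0)
# Sequence 2: 4 of A, 1 of B, 1 of C
print(cycles_and_cpi([(4, 1), (1, 2), (1, 3)]))   # (9, 1.5)
```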
Second Example on CPI
Given: instruction mix of a program on a RISC processor
What is the average CPI?
What is the percentage of time used by each instruction class?
How much faster would the machine be if the load CPI were 2 cycles?
What if two ALU instructions could be executed at once?

Class     Freq   CPI   CPI × Freq       % Time
ALU       50%     1    0.5×1 = 0.5    0.5/2.2 = 23%
Load      20%     5    0.2×5 = 1.0    1.0/2.2 = 45%
Store     10%     3    0.1×3 = 0.3    0.3/2.2 = 14%
Branch    20%     2    0.2×2 = 0.4    0.4/2.2 = 18%

Average CPI = 0.5 + 1.0 + 0.3 + 0.4 = 2.2
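A minimal Python sketch of the frequency-weighted CPI above, which also works out the first follow-up question (load CPI reduced from 5 to 2); the names and structure are illustrative:

```python
mix = {"ALU": (0.50, 1), "Load": (0.20, 5), "Store": (0.10, 3), "Branch": (0.20, 2)}

avg_cpi = sum(freq * cpi for freq, cpi in mix.values())       # 2.2

mix["Load"] = (0.20, 2)                                        # faster loads
new_cpi = sum(freq * cpi for freq, cpi in mix.values())        # 1.6

# Instruction count and clock rate are unchanged, so speedup = old CPI / new CPI
print(avg_cpi, new_cpi, avg_cpi / new_cpi)                     # 2.2 1.6 1.375
```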
MIPS as a Performance Measure
 MIPS: Million Instructions Per Second
 Sometimes used as a performance metric
 Faster machine → larger MIPS
 MIPS specifies instruction execution rate
 We can also relate execution time to MIPS

 MIPS = Instruction Count / (Execution Time × 10^6) = Clock Rate / (CPI × 10^6)
 Execution Time = Inst Count / (MIPS × 10^6) = (Inst Count × CPI) / Clock Rate
Drawbacks of MIPS
Three problems using MIPS as a performance metric
1. Does not take into account the capability of instructions
 Cannot use MIPS to compare computers with different
instruction sets because the instruction count will differ
2. MIPS varies between programs on the same computer
 A computer cannot have a single MIPS rating for all programs
3. MIPS can vary inversely with performance
 A higher MIPS rating does not always mean better performance
 Example in next slide shows this anomalous behavior
MIPS Example
 Two different compilers are being tested on the same program for a 4 GHz machine with three different classes of instructions: Class A, Class B, and Class C, which require 1, 2, and 3 cycles, respectively.
 The instruction count produced by the first compiler is 5 billion Class A instructions, 1 billion Class B instructions, and 1 billion Class C instructions.
 The second compiler produces 10 billion Class A instructions, 1 billion Class B instructions, and 1 billion Class C instructions.
 Which compiler produces a higher MIPS?
 Which compiler produces a better execution time?
Solution to MIPS Example
 First, we find the CPU cycles for both compilers
 CPU cycles (compiler 1) = (5×1 + 1×2 + 1×3)×10^9 = 10×10^9
 CPU cycles (compiler 2) = (10×1 + 1×2 + 1×3)×10^9 = 15×10^9
 Next, we find the execution time for both compilers
 Execution time (compiler 1) = 10×10^9 cycles / 4×10^9 Hz = 2.5 sec
 Execution time (compiler 2) = 15×10^9 cycles / 4×10^9 Hz = 3.75 sec
 Compiler 1 generates the faster program (less execution time)
 Now, we compute the MIPS rate for both compilers
 MIPS = Instruction Count / (Execution Time × 10^6)
 MIPS (compiler 1) = (5+1+1) × 10^9 / (2.5 × 10^6) = 2800
 MIPS (compiler 2) = (10+1+1) × 10^9 / (3.75 × 10^6) = 3200
 So, code from compiler 2 has a higher MIPS rating!
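A small Python sketch of this anomaly, assuming illustrative names:

```python
# Compiler 2 earns a higher MIPS rating even though its code runs slower.

CLOCK_RATE = 4e9                          # 4 GHz
CPI_BY_CLASS = {"A": 1, "B": 2, "C": 3}

def time_and_mips(counts):
    """counts: dict mapping instruction class -> instruction count."""
    instructions = sum(counts.values())
    cycles = sum(n * CPI_BY_CLASS[c] for c, n in counts.items())
    exec_time = cycles / CLOCK_RATE
    return exec_time, instructions / (exec_time * 1e6)

print(time_and_mips({"A": 5e9,  "B": 1e9, "C": 1e9}))    # (2.5 s,  2800 MIPS)
print(time_and_mips({"A": 10e9, "B": 1e9, "C": 1e9}))    # (3.75 s, 3200 MIPS)
```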
Amdahl’s Law
 Amdahl's Law is a measure of Speedup
 How a computer performs after an enhancement E
 Relative to how it performed previously
 Enhancement improves a fraction f of execution time by
a factor s and the remaining time is unaffected
 Speedup(E) = Performance with E / Performance before = ExTime before / ExTime with E
 ExTime with E = ExTime before × (f / s + (1 – f))
 Speedup(E) = 1 / (f / s + (1 – f))
Example on Amdahl's Law
 Suppose a program runs in 100 seconds on a machine, with multiply responsible for 80 seconds of this time. How much do we have to improve the speed of multiplication if we want the program to run 4 times faster?
 Solution: suppose we improve multiplication by a factor s
 25 sec (4 times faster) = 80 sec / s + 20 sec
 s = 80 / (25 – 20) = 80 / 5 = 16
 Improve the speed of multiplication by s = 16 times
 How about making the program 5 times faster?
 20 sec (5 times faster) = 80 sec / s + 20 sec
 s = 80 / (20 – 20) = ∞   Impossible to make it 5 times faster!
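A minimal Python sketch of Amdahl's Law applied to this example (the function name is illustrative):

```python
def amdahl_speedup(f: float, s: float) -> float:
    """Speedup when a fraction f of execution time is improved by a factor s."""
    return 1.0 / (f / s + (1.0 - f))

# Multiply is 80% of a 100-second run; speeding it up 16x gives a 4x overall speedup.
print(amdahl_speedup(0.8, 16))      # 4.0

# A 5x overall speedup is unreachable: even as s -> infinity, the limit is 1/(1-f) = 5.
print(amdahl_speedup(0.8, 1e12))    # just under 5.0
```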
Benchmarks
 Performance best obtained by running a real application
 Use programs typical of expected workload
 Representatives of expected classes of applications
 Examples: compilers, editors, scientific applications, graphics, ...
 SPEC (Standard Performance Evaluation Corporation)
 Funded and supported by a number of computer vendors
 Companies have agreed on a set of real programs and inputs
 Various benchmarks for …
 CPU performance, graphics, high-performance computing, client-server models, file systems, Web servers, etc.
 Valuable indicator of performance (and compiler technology)
The SPEC CPU2000 Benchmarks
12 Integer benchmarks (C and C++):
  gzip      Compression
  vpr       FPGA placement and routing
  gcc       GNU C compiler
  mcf       Combinatorial optimization
  crafty    Chess program
  parser    Word processing program
  eon       Computer visualization
  perlbmk   Perl application
  gap       Group theory, interpreter
  vortex    Object-oriented database
  bzip2     Compression
  twolf     Place and route simulator

14 FP benchmarks (Fortran 77, 90, and C):
  wupwise   Quantum chromodynamics
  swim      Shallow water model
  mgrid     Multigrid solver in 3D potential field
  applu     Partial differential equation
  mesa      Three-dimensional graphics library
  galgel    Computational fluid dynamics
  art       Neural networks image recognition
  equake    Seismic wave propagation simulation
  facerec   Image recognition of faces
  ammp      Computational chemistry
  lucas     Primality testing
  fma3d     Crash simulation using finite elements
  sixtrack  High-energy nuclear physics
  apsi      Meteorology: pollutant distribution

 Wall clock time is used as the metric
 These benchmarks measure mostly CPU time, because they do little I/O
SPEC 2000 Ratings (Pentium III & 4)
 SPEC ratio = execution time normalized relative to a Sun Ultra 5 (300 MHz)
 SPEC rating = geometric mean of the SPEC ratios
 [Figure: CINT2000 and CFP2000 ratings versus clock rate (500–3500 MHz) for the Pentium III and Pentium 4]
 Note the relative positions of the CINT2000 and CFP2000 curves for the Pentium III & 4: the Pentium III does better at the integer benchmarks, while the Pentium 4 does better at the floating-point benchmarks due to its advanced SSE2 instructions
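As a sketch of how a SPEC rating is formed from SPEC ratios, here is a minimal Python example; the ratios are made-up numbers, not measured data:

```python
from math import prod

def spec_rating(ratios):
    """Geometric mean of per-benchmark SPEC ratios (reference time / measured time)."""
    return prod(ratios) ** (1.0 / len(ratios))

print(spec_rating([400.0, 520.0, 610.0, 480.0]))   # hypothetical rating, about 497
```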
Performance and Power
 Power is a key limitation
 Battery capacity has improved only slightly over time
 Need to design power-efficient processors
 Reduce power by
 Reducing frequency
 Reducing voltage
 Putting components to sleep
 Energy efficiency
 Important metric for power-limited applications
 Defined as performance divided by power consumption
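A minimal sketch of the energy-efficiency metric, with hypothetical numbers:

```python
def energy_efficiency(performance: float, power_watts: float) -> float:
    """Performance divided by power consumption (e.g., a benchmark rating per watt)."""
    return performance / power_watts

print(energy_efficiency(1000, 25))   # 40.0 performance units per watt
print(energy_efficiency(1400, 60))   # ~23.3 -> faster design, but less power-efficient
```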
Performance and Power
Relative
Performance
0 .0
0 .2
0 .4
0 .6
0 .8
1 .0
1 .2
1 .4
1 .6
SPEC INT2000 SPECFP2000 SPEC INT2000 SPECFP2000 SPEC IN T2000 SPEC FP2000
Pe ntium M @ 1 .6/0.6 G H z
Pe ntium 4-M @ 2 .4 /1.2 G H z
Pe ntium III-M @ 1.2 /0.8 G H z
Always on / maximum clock Laptop mode / adaptive clock Minimum power / min clock
Benchmark and Power Mode
Energy Efficiency
 [Figure: relative energy efficiency on SPECINT2000 and SPECFP2000 for the Pentium M @ 1.6/0.6 GHz, Pentium 4-M @ 2.4/1.2 GHz, and Pentium III-M @ 1.2/0.8 GHz, in the same three power modes]
 Energy efficiency of the Pentium M is highest for the SPEC2000 benchmarks
Things to Remember
 Performance is specific to a particular program
 Any measure of performance should reflect execution time
 Total execution time is a consistent summary of performance
 For a given ISA, performance improvements come from
 Increases in clock rate (without increasing the CPI)
 Improvements in processor organization that lower CPI
 Compiler enhancements that lower CPI and/or instruction count
 Algorithm/language choices that affect instruction count
 Pitfalls (things you should avoid)
 Using a subset of the performance equation as a metric
 Expecting improvement of one aspect of a computer to increase performance in proportion to the size of the improvement
Some Classes of Today's Computer Architectures
CISC – Complex Instruction Set Computer
RISC – Reduced Instruction Set Computer
Superscalar – multiple similar processing units are used to execute instructions in parallel
Multicore – multiple processors executing instructions in a complementary way
Driving force for CISC
Software costs far exceed hardware costs
Increasingly complex high level languages
A “Semantic” gap between HLL & ML
Word size was increasing.
This Leads to:
 Large instruction sets
 More addressing modes
 Hardware implementations of HLL statements
Intention of CISC
 Ease compiler writing
 Improve execution efficiency
 Support more complex HLLs
RISC
Key features:
 Large number of general purpose registers
(or use of compiler technology to optimize register use)
 Limited and simple instruction set
 Emphasis on optimising the instruction pipeline &
memory management, i.e. leverage newer hardware
complexities now potentially available.
RISC Characteristics
 A single instruction size, typically 4 bytes
 A small number of data addressing modes, typically fewer than 5
 No indirect addressing that requires two memory accesses
 No operations that combine load/store with arithmetic
 No more than one memory-addressed operand per instruction
 No arbitrary data alignment for load/store operations
 Large number of instruction bits for integer register addressing, typically at least 5
 Large number of instruction bits for FP register addressing, typically at least 4
Which is better?
 Is the execution of large special-purpose instructions more efficient than the execution of many simple instructions?
 Which programs are really "shorter"?
 Which are really faster?
 What is the impact of having to support many languages?
 What are the legacy challenges?
 What are the cost tradeoffs?
 Can compilers be made to better exploit CISC or RISC? At what complexity?
 Which can better exploit hardware features?
Characteristics of Some Example
Processors