Evaluating Computers:
Bigger, better, faster, more?
1
What do you want in a computer?
2
What do you want in a computer?
• Low latency -- one unit of work in minimum time
• 1/latency = responsiveness
• High throughput -- maximum work per time
• High bandwidth (BW)
• Low cost
• Low power -- minimum joules per time
• Low energy -- minimum joules per work
• Reliability -- Mean time to failure (MTTF)
• Derived metrics
• responsiveness/dollar
• BW/$
• BW/Watt
• Work/Joule
• Energy * latency -- Energy delay product
• MTTF/$
3
Latency
• This is the simplest kind of performance
• How long does it take the computer to perform
a task?
• The task at hand depends on the situation.
• Usually measured in seconds
• Also measured in clock cycles
• Caution: if you are comparing two different systems, you
must ensure that the cycle times are the same.
4
Measuring Latency
• Stopwatch!
• System calls
• gettimeofday()
• System.currentTimeMillis()
• Command line
• time <command>
5
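The system-call approach above can be sketched in Python; `time.perf_counter` is a monotonic high-resolution clock, and the workload here is purely illustrative:

```python
import time

def measure_latency(task, *args):
    """Time one unit of work: record the clock before and after."""
    start = time.perf_counter()
    result = task(*args)
    elapsed = time.perf_counter() - start  # latency in seconds
    return result, elapsed

# Illustrative workload: sum the first million integers.
total, seconds = measure_latency(sum, range(1_000_000))
```

This mirrors `time <command>` on the command line: measure wall-clock time around the whole task.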
Where latency matters
• Application responsiveness
• Any time a person is waiting.
• GUIs
• Games
• Internet services (from the user’s perspective)
• “Real-time” applications
• Tight constraints enforced by the real world
• Anti-lock braking systems
• Manufacturing control
• Multi-media applications
• The cost of poor latency
• If you are selling computer time, latency is money.
6
Latency and Performance
• By definition:
• Performance = 1/Latency
• If Performance(X) > Performance(Y), X is faster.
• If Perf(X)/Perf(Y) = S, X is S times faster than Y.
• Equivalently: Latency(Y)/Latency(X) = S
• When we need to talk about other kinds of
“performance” we must be more specific.
7
The Performance Equation
• We would like to model how architecture impacts
performance (latency)
• This means we need to quantify performance in
terms of architectural parameters.
• Instructions -- the basic unit of work for a
processor
• Cycles and cycle time -- together these give us a
notion of time.
• The first fundamental theorem of computer
architecture:
Latency = Instructions * Cycles/Instruction *
Seconds/Cycle
8
The Performance Equation
• The units work out! Remember your
dimensional analysis!
• Cycles/Instruction == CPI
• Seconds/Cycle == 1/(clock rate in Hz)
• Example:
• 1GHz clock
• 1 billion instructions
• CPI = 4
• What is the latency?
9
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
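A worked instance of the example above, with all numbers taken from the slide:

```python
# Performance equation: Latency = Instructions * CPI * Seconds/Cycle
instructions = 1_000_000_000   # 1 billion instructions
cpi = 4                        # cycles per instruction
clock_hz = 1e9                 # 1 GHz => seconds per cycle = 1e-9

latency = instructions * cpi * (1 / clock_hz)  # -> 4.0 seconds
```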
Examples
• gcc runs in 100 sec on a 1 GHz machine
– How many cycles does it take?
• gcc runs in 75 sec on a 600 MHz machine
– How many cycles does it take?
100G cycles
45G cycles
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
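Checking the two cycle-count answers above (a quick sketch; the helper name is illustrative):

```python
def cycles(latency_sec, clock_hz):
    # Latency = Cycles * Seconds/Cycle  =>  Cycles = Latency * Clock rate
    return latency_sec * clock_hz

c1 = cycles(100, 1e9)    # 1 GHz machine, 100 sec -> 100 G cycles
c2 = cycles(75, 600e6)   # 600 MHz machine, 75 sec -> 45 G cycles
```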
How can this be?
• Different Instruction count?
• Different ISAs ?
• Different compilers ?
• Different CPI?
• underlying machine implementation
• Microarchitecture
• Different cycle time?
• New process technology
• Microarchitecture
11
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
Computing Average CPI
• Instruction execution time depends on instruction
type (we’ll get into why this is so later on)
• Integer +, -, <<, |, & -- 1 cycle
• Integer *, / -- 5-10 cycles
• Floating point +, - -- 3-4 cycles
• Floating point *, /, sqrt() -- 10-30 cycles
• Loads/stores -- variable
• All these values depend on the particular implementation,
not the ISA
• Total CPI depends on the workload’s Instruction mix
-- how many of each type of instruction executes
• What program is running?
• How was it compiled?
12
The Compiler’s Role
• Compilers affect CPI…
• Wise instruction selection
• “Strength reduction”: x*2^n -> x << n
• Use registers to eliminate loads and stores
• More compact code -> less waiting for instructions
• …and instruction count
• Common sub-expression elimination
• Use registers to eliminate loads and stores
13
Stupid Compiler
int i, sum = 0;
for(i=0;i<10;i++)
sum += i;
sw 0($sp), $0 #sum = 0
sw 4($sp), $0 #i = 0
loop:
lw $1, 4($sp)
sub $3, $1, 10
beq $3, $0, end
lw $2, 0($sp)
add $2, $2, $1
sw 0($sp), $2
addi $1, $1, 1
sw 4($sp), $1
b loop
end:
Type   CPI   Static #   Dyn #
mem     5       6         42
int     1       3         30
br      1       2         20
Total  2.8     11         92

CPI = (5*42 + 1*30 + 1*20)/92 ≈ 2.8
Smart Compiler
int i, sum = 0;
for(i=0;i<10;i++)
sum += i;
add $1, $0, $0 # i
add $2, $0, $0 # sum
loop:
sub $3, $1, 10
beq $3, $0, end
add $2, $2, $1
addi $1, $1, 1
b loop
end:
sw 0($sp), $2
Type   CPI   Static #   Dyn #
mem     5       1          1
int     1       5         32
br      1       2         20
Total  1.08     8         53

CPI = (5*1 + 1*32 + 1*20)/53 ≈ 1.08
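The instruction-mix arithmetic in the two tables can be checked with a short weighted average (counts taken from the tables; the helper name is illustrative):

```python
def average_cpi(mix):
    """mix maps instruction type -> (CPI, dynamic count); returns average CPI."""
    total_cycles = sum(cpi * count for cpi, count in mix.values())
    total_insts = sum(count for _, count in mix.values())
    return total_cycles / total_insts

stupid = {"mem": (5, 42), "int": (1, 30), "br": (1, 20)}
smart  = {"mem": (5, 1),  "int": (1, 32), "br": (1, 20)}

cpi_stupid = average_cpi(stupid)  # 260/92 ~= 2.83
cpi_smart  = average_cpi(smart)   # 57/53  ~= 1.08
```

The smart compiler wins twice: fewer dynamic instructions (53 vs 92) and a lower CPI, because the expensive memory operations were almost entirely replaced by 1-cycle register operations.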
Live demo
16
Program inputs affect CPI too!
int rand[1000] = {random 0s and 1s }
for(i=0;i<1000;i++)
if(rand[i]) sum -= i;
else sum *= i;
int ones[1000] = {1, 1, ...}
for(i=0;i<1000;i++)
if(ones[i]) sum -= i;
else sum *= i;
• Data-dependent computation
• Data-dependent micro-architectural behavior
–Processors are faster when the computation is
predictable (more later)
Live demo
18
• Meaningful CPI exists only:
• For a particular program with a particular compiler
• ....with a particular input.
• You MUST consider all 3 to get accurate latency estimations
or machine speed comparisons
• Instruction Set
• Compiler
• Implementation of Instruction Set (386 vs Pentium)
• Processor Freq (600 MHz vs 1 GHz)
• Same high level program with same input
• “wall clock” measurements are always comparable.
• If the workloads (app + inputs) are the same
19
Making Meaningful Comparisons
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
The Performance Equation
• Clock rate =
• Instruction count =
• Latency =
• Find the CPI!
20
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
Today
• DRAM
• Quiz 1 recap
• HW 1 recap
• Questions about ISAs
• More about the project?
• Amdahl’s law
21
Key Points
• Amdahl’s law and how to apply it in a variety of
situations
• Its role in guiding optimization of a system
• Its role in determining the impact of localized
changes on the entire system
22
Limits on Speedup: Amdahl’s Law
• “The fundamental theorem of performance
optimization”
• Coined by Gene Amdahl (one of the designers of the
IBM 360)
• Optimizations do not (generally) uniformly affect the
entire program
– The more widely applicable a technique is, the more valuable it
is
– Conversely, limited applicability can (drastically) reduce the
impact of an optimization.
Always heed Amdahl’s Law!!!
It is central to many, many optimization problems.
Amdahl’s Law in Action
• SuperJPEG-O-Rama2000 ISA extensions**
–Speeds up JPEG decode by 10x!!!
–Act now! While Supplies Last!
** Increases processor cost by 45%
Amdahl’s Law in Action
• SuperJPEG-O-Rama2000 in the wild
• PictoBench spends 33% of its time doing
JPEG decode
• How much does JOR2k help?
[Bar chart: runtime w/o JOR2k = 30s; w/ JOR2k = 21s]
Performance: 30/21 = 1.4x Speedup != 10x
Is this worth the 45% increase in cost?
Amdahl ate our Speedup!
Amdahl’s Law
• The second fundamental theorem of computer
architecture.
• If we can speed up a fraction x of the program by S times
• Amdahl’s Law gives the total speedup, Stot:

Stot = 1 / (x/S + (1-x))

Sanity check: x = 1 => Stot = 1 / (1/S + (1-1)) = 1 / (1/S) = S
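The law and its sanity check, as a minimal sketch (function name is illustrative):

```python
def amdahl(x, s):
    """Total speedup when a fraction x of execution is sped up by s times."""
    return 1 / (x / s + (1 - x))

# JOR2k from the earlier slides: 1/3 of time sped up 10x
jor2k = amdahl(1/3, 10)   # ~1.43, matching 30s/21s
# Sanity check: x = 1 recovers the raw speedup S
full = amdahl(1.0, 10)
```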
Amdahl’s Corollary #1
• Maximum possible speedup (when S = infinity), Smax:

Smax = 1 / (1-x)
Amdahl’s Law Practice
• Protein String Matching Code
–200 hours to run on current machine, spends 20% of
time doing integer instructions
–How much faster must you make the integer unit to
make the code run 10 hours faster?
–How much faster must you make the integer unit to
make the code run 50 hours faster?
A)1.1
B)1.25
C)1.75
D)1.33
E) 10.0
F) 50.0
G) 1 million times
H) Other
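A worked solution sketch, assuming "integer time" scales inversely with the integer-unit speedup:

```python
total, frac = 200.0, 0.20
int_time = total * frac      # 40 hours in the integer unit
other = total - int_time     # 160 hours elsewhere

# 10 hours faster: other + int_time/S = 190  =>  S = 40/30 ~= 1.33 (answer D)
s_10 = int_time / (total - 10 - other)

# 50 hours faster: would need other + int_time/S = 150, but even an
# infinitely fast integer unit leaves 160 hours -- impossible (answer H)
impossible = (total - 50) < other
```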
Amdahl’s Law Practice
• Protein String Matching Code
–4 days ET on current machine
• 20% of time doing integer instructions
• 35% of time doing I/O
–Which is the better economic tradeoff?
• Compiler optimization that reduces number of
integer instructions by 25% (assume each integer
inst takes the same amount of time)
• Hardware optimization that makes I/O run 20%
faster?
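Comparing the two options with Amdahl's law (a sketch, assuming the compiler optimization shrinks integer-instruction time to 75% of its original, i.e. S = 1/0.75):

```python
def amdahl(x, s):
    return 1 / (x / s + (1 - x))

# Compiler: 25% fewer integer instructions in the 20% integer fraction
compiler = amdahl(0.20, 1 / 0.75)   # ~1.053x overall

# Hardware: I/O (35% of time) runs 20% faster
hardware = amdahl(0.35, 1.20)       # ~1.062x overall -- slightly better
```

The I/O optimization wins despite the smaller per-component gain, because it applies to a larger fraction of the execution time.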
Amdahl’s Law Applies All Over
30
• SSDs use 10x less power than HDs
• But they only save you ~50% overall.
Amdahl’s Law in Memory
31
[Diagram: memory device -- high-order address bits feed the row decoder, low-order bits the column decoder; the storage array is read out through the sense amps onto the data bus]
• Storage array 90% of area
• Row decoder 4%
• Column decoder 2%
• Sense amps 4%
• What’s the benefit of
reducing bit size by 10%?
• Reducing column decoder
size by 90%?
Amdahl’s Corollary #2
• Make the common case fast (i.e., x should be
large)!
–Common == “most time consuming” not necessarily
“most frequent”
–The uncommon case doesn’t make much difference
–Be sure of what the common case is
–The common case changes.
• Repeat…
–With optimization, the common becomes uncommon
and vice versa.
Amdahl’s Corollary #2: Example
• Speed up the common case, then repeat on the new common case:
– 7x => 1.4x
– 4x => 1.3x
– 1.3x => 1.1x
– Total = 20/10 = 2x
• In the end, there is no common case!
• Options:
– Global optimizations (faster clock, better compiler)
– Find something common to work on (e.g. memory latency)
– War of attrition
– Total redesign (You are probably well-prepared for this)
Amdahl’s Corollary #3
• Benefits of parallel processing
• p processors
• x% is p-way parallelizable
• Maximum speedup, Spar:

Spar = 1 / (x/p + (1-x))

x is pretty small for desktop applications, even for p = 2
Does Intel’s 80-core processor make much sense?
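A sketch of the corollary with an illustrative half-parallel workload (the x = 0.5 fraction is an assumption, not from the slide):

```python
def parallel_speedup(x, p):
    """Amdahl's Corollary #3: fraction x is p-way parallelizable."""
    return 1 / (x / p + (1 - x))

# Even with 80 processors, a half-parallel program barely doubles:
s80 = parallel_speedup(0.5, 80)   # ~1.98x
# The serial fraction bounds everything: the limit is 1/(1-x) = 2x here.
```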
Amdahl’s Corollary #4
• Amdahl’s law for latency (L):

Lnew = Lbase * 1/Speedup
Lnew = Lbase * (x/S + (1-x))
Lnew = (Lbase/S)*x + Lbase*(1-x)

• If you can speed up a fraction y of the remaining (1-x), you can apply
Amdahl’s law recursively:

Lnew = (Lbase/S1)*x + Lbase*(1-x)*y/S2 + Lbase*(1-x)*(1-y)
• This is how we will analyze memory system performance
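The recursive form can be checked numerically with illustrative values (the x, y, S1, S2 below are assumptions chosen for round numbers):

```python
base = 100.0
x, s1 = 0.6, 2.0   # speed up 60% of execution by 2x
y, s2 = 0.5, 4.0   # then speed up half of the remaining 40% by 4x

# One application of Amdahl's law for latency:
l1 = (base / s1) * x + base * (1 - x)                 # ~30 + 40 = 70
# Recursive application to a fraction y of the remainder:
l2 = (base / s1) * x + base * (1 - x) * y / s2 \
     + base * (1 - x) * (1 - y)                       # ~30 + 5 + 20 = 55
```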
Amdahl’s Non-Corollary
• Amdahl’s law does not bound slowdown
Lnew = (Lbase /S)*x + Lbase*(1-x)
• Lnew is linear in 1/S
• Example: x = 0.01 of execution, Lbase = 1
–S = 0.001: Lnew = 1000*Lbase*0.01 + Lbase*0.99 ≈ 10*Lbase
–S = 0.00001: Lnew = 100000*Lbase*0.01 + Lbase*0.99 ≈ 1000*Lbase
• Things can only get so fast, but they can get
arbitrarily slow.
–Do not hurt the non-common case too much!
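Verifying the slowdown example above with the latency form of the law:

```python
def new_latency(base, x, s):
    # Lnew = (Lbase/S)*x + Lbase*(1-x); S < 1 is a slowdown
    return (base / s) * x + base * (1 - x)

l_slow  = new_latency(1.0, 0.01, 0.001)    # ~11x slower: a 1% slice slowed 1000x
l_slow2 = new_latency(1.0, 0.01, 0.00001)  # ~1001x slower
```

Speedup saturates at 1/(1-x); slowdown has no such bound.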
Benchmarks: Standard Candles for
Performance
• It’s hard to convince manufacturers to run your program
(unless you’re a BIG customer)
• A benchmark is a set of programs that are representative of a
class of problems.
• To increase predictability, collections of benchmark
applications, called benchmark suites, are popular
– “Easy” to set up
– Portable
– Well-understood
– Stand-alone
– Standardized conditions
– These are all things that real software is not.
Classes of benchmarks
• Microbenchmark – measure one feature of system
– e.g. memory accesses or communication speed
• Kernels – most compute-intensive part of applications
– e.g. Linpack and NAS kernel benchmarks (for supercomputers)
• Full application:
– SpecInt / SpecFP (int and float) (for Unix workstations)
– Other suites for databases, web servers, graphics,...
Bandwidth
• The amount of work (or data) per time
• MB/s, GB/s -- network BW, disk BW, etc.
• Frames per second -- Games, video transcoding
• (why are games under both latency and BW?)
• Also called “throughput”
39
Measuring Bandwidth
• Measure how much work is done
• Measure latency
• Divide
40
Latency-BW Trade-offs
• Often, accepting higher latency for one task can
increase BW across many tasks.
• Think of waiting in line for one of 4 bank tellers
• If the line is empty, your response time is minimized, but
throughput is low because utilization is low.
• If there is always a line, you wait longer (your latency
goes up), but there is always work available for tellers.
• Much of computer performance is about
scheduling work onto resources
• Network links.
• Memory ports.
• Processors, functional units, etc.
• IO channels.
• Increasing contention for these resources generally
increases throughput but hurts latency.
41
Stationwagon Digression
• IPv6 Internet 2: 272,400 terabit-meters per second
–585 GB in 30 minutes over 30,000 km
–9.08 Gb/s
• Subaru Outback wagon
– Max load = 408 kg
– 21 MPG
• MHX2 BT 300 laptop drive
– 300 GB/drive
– 0.135 kg
• 906 TB
• Legal speed: 75 MPH (33.3 m/s)
• BW = 8.2 Gb/s
• Latency = 10 days
• 241,535 terabit-meters per second
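The back-of-the-envelope numbers above can be recomputed (taking TB as 10^12 bytes; small differences from the slide's 8.2 Gb/s figure come from rounding):

```python
capacity_bits = 906e12 * 8      # 906 TB of laptop drives, in bits
distance_m = 30_000_000.0       # 30,000 km, matching the Internet2 record route
speed_mps = 33.3                # 75 MPH legal speed

trip_sec = distance_m / speed_mps   # ~10.4 days of latency
bw_bps = capacity_bits / trip_sec   # ~8 Gb/s of bandwidth
```

Enormous bandwidth, terrible latency: the classic throughput/latency trade-off.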
Prius Digression
• IPv6 Internet 2: 272,400 terabit-meters per second
–585 GB in 30 minutes over 30,000 km
–9.08 Gb/s
• My Toyota Prius
– Max load = 374 kg
– 44 MPG (2x power efficiency)
• MHX2 BT 300
– 300 GB/drive
– 0.135 kg
• 831 TB
• Legal speed: 75 MPH (33.3 m/s)
• BW = 7.5 Gb/s
• Latency = 10 days
• 221,407 terabit-meters per second (13% performance hit)
