ISA
By
AJAL.A.J - AP/ECE
Instruction Set Architecture
• Instruction set architecture is the structure of a
computer that a machine language programmer must
understand to write a correct (timing independent)
program for that machine.
• The instruction set architecture is also the machine
description that a hardware designer must
understand to design a correct implementation of
the computer.
• a fixed number of operations are formatted as
one big instruction (called a bundle)
[ op | op | op | bundling info ]
Instruction Set Architecture
Computer Architecture =
Instruction Set Architecture
+ Machine Organization
• “... the attributes of a [computing] system as seen by
the programmer, i.e. the conceptual structure and
functional behavior …”
Evolution of Instruction Sets
Single Accumulator (EDSAC 1950)
→ Accumulator + Index Registers (Manchester Mark I, IBM 700 series 1953)
→ Separation of Programming Model from Implementation:
   High-level Language Based (B5000 1963) and Concept of a Family (IBM 360 1964)
→ General Purpose Register Machines:
   Complex Instruction Sets (Vax, Intel 432, 1977-80) and Load/Store Architecture (CDC 6600, Cray 1, 1963-76)
→ RISC (MIPS, SPARC, HP-PA, IBM RS6000, PowerPC . . . 1987)
→ LIW / ”EPIC”? / VLIW (IA-64 . . . 1999)
Instruction Set Architecture
– Interface between all the software that runs on the
machine and the hardware that executes it
• Computer Architecture = Hardware + ISA
instruction set, or instruction set
architecture (ISA)
• An instruction set, or instruction set
architecture (ISA), is the part of the computer
architecture related to programming, including the
native data types, instructions, registers, addressing
modes, memory architecture, interrupt and exception handling, and
external I/O. An ISA includes a specification of the
set of opcodes (machine language), and the native
commands implemented by a particular processor.
Microarchitecture
• Instruction set architecture is distinguished from
the microarchitecture, which is the set of processor
design techniques used to implement the
instruction set. Computers with different
microarchitectures can share a common instruction set.
• For example, the Intel Pentium and
the AMD Athlon implement nearly identical
versions of the x86 instruction set, but have
radically different internal designs.
NUAL vs. UAL
• Unit Assumed Latency (UAL)
– Semantics of the program are that each
instruction is completed before the next one is
issued
– This is the conventional sequential model
• Non-Unit Assumed Latency (NUAL):
– At least 1 operation has a non-unit assumed
latency, L, which is greater than 1
– The semantics of the program are correctly
understood if exactly the next L-1 instructions are
understood to have issued before this operation
completes
Instruction Set Architecture ICS 233 – Computer Architecture and Assembly Language – KFUPM
© Muhamed Mudawar slide 22
Summary of RISC Design
 All instructions are typically of one size
 Few instruction formats
 All operations on data are register to register
 Operands are read from registers
 Result is stored in a register
 General purpose integer and floating point registers
 Typically, 32 integer and 32 floating-point registers
 Memory access only via load and store instructions
 Load and store: bytes, half words, words, and double words
 Few simple addressing modes
Instruction Set Architectures
 Reduced Instruction Set Computers (RISCs)
 Simple instructions
 Flexibility
 Higher throughput
 Faster execution
 Complex Instruction Set Computers (CISCs)
 Hardware support for high-level language
 Compact program
MIPS: A RISC example
 Smaller and simpler instruction set
 111 instructions
 One cycle execution time
 Pipelining
 32 registers
 32 bits for each register
MIPS Instruction Set
 25 branch/jump instructions
 21 arithmetic instructions
 15 load instructions
 12 comparison instructions
 10 store instructions
 8 logic instructions
 8 bit manipulation instructions
 8 move instructions
 4 miscellaneous instructions
Overview of the MIPS Processor
(Block diagram.) Memory: up to 2^32 bytes = 2^30 words, 4 bytes per word. The Execution & Integer Unit (main processor) holds 32 general-purpose registers $0–$31, the Arithmetic & Logic Unit, and an integer multiplier/divider with the Hi and Lo registers. The Floating-Point Unit (Coprocessor 1) holds 32 floating-point registers F0–F31 and a floating-point arithmetic unit. The Trap & Memory Unit (Coprocessor 0) holds the EPC, Cause, BadVaddr, and Status registers.
MIPS R2000 Organization
(Block diagram.) CPU: registers $0–$31, an arithmetic unit, and a multiply/divide unit with the Lo and Hi registers. Coprocessor 1 (FPU): registers $0–$31 and an arithmetic unit. Coprocessor 0 (traps and memory): the BadVAddr, Status, Cause, and EPC registers. All three connect to memory.
Definitions
Performance is typically in units-per-second
• bigger is better
If we are primarily concerned with response time
• performance = 1 / execution_time
"X is n times faster than Y" means
n = Performance_X / Performance_Y = ExecutionTime_Y / ExecutionTime_X
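To make the ratio concrete, here is a minimal Python sketch with made-up timings (the 10 s and 15 s figures are hypothetical):

```python
# Hypothetical measurements: X finishes the task in 10 s, Y in 15 s.
time_x = 10.0
time_y = 15.0

perf_x = 1.0 / time_x      # performance = 1 / execution_time
perf_y = 1.0 / time_y

n = perf_x / perf_y        # equivalently time_y / time_x
print(f"X is {n:.1f} times faster than Y")   # -> X is 1.5 times faster than Y
```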
Organizational Trade-offs
(Layered view: Application / Programming Language / Compiler / ISA / Datapath & Control / Function Units / Transistors, Wires, Pins. The associated measures are Instruction Mix, CPI, and Cycle Time.)
CPI is a useful design measure relating the Instruction Set
Architecture with the Implementation of that architecture, and the
program measured
Principal Design Metrics: CPI and Cycle Time
Performance = 1 / ExecutionTime
            = 1 / (CPI × CycleTime)
            = 1 / ((Cycles / Instruction) × (Seconds / Cycle))
            = Instructions / Second
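A minimal numeric sketch of the relation above; the CPI and clock values are hypothetical:

```python
cpi = 1.4            # average cycles per instruction (assumed)
cycle_time = 2e-9    # 500 MHz clock -> 2 ns per cycle (assumed)

time_per_instruction = cpi * cycle_time      # seconds per instruction
performance = 1.0 / time_per_instruction     # instructions per second
print(f"{performance:.3e} instructions/second")   # -> 3.571e+08
```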
Amdahl's “Law”: Make the Common Case Fast
Speedup due to enhancement E:
Speedup(E) = ExTime w/o E / ExTime w/ E = Performance w/ E / Performance w/o E
Suppose that enhancement E accelerates a fraction F of the task
by a factor S and the remainder of the task is unaffected; then
ExTime(with E) = ((1 - F) + F/S) × ExTime(without E)
Speedup(with E) = ExTime(without E) ÷ (((1 - F) + F/S) × ExTime(without E))
                = 1 / ((1 - F) + F/S)
Performance improvement is limited by how much the
improved feature is used → invest resources where
time is spent.
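A minimal sketch of Amdahl's Law as a function; the fraction and speedup factor below are arbitrary examples:

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of the execution time is sped up by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Hypothetical enhancement: 40% of the task accelerated 10x.
print(round(amdahl_speedup(0.4, 10), 2))   # -> 1.56, far below 10
```

Even a large per-feature speedup buys little overall if the feature covers only a small fraction of execution time.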
Classification of Instruction Set
Architectures
Instruction Set Design
Multiple Implementations: 8086 → Pentium 4
ISAs evolve: MIPS-I, MIPS-II, MIPS-III, MIPS-IV, MIPS MDMX, MIPS-32, MIPS-64
(Figure: the instruction set is the interface between software and hardware.)
The steps for executing an instruction:
1. Fetch the instruction
2. Decode the instruction
3. Locate the operand
4. Fetch the operand (if necessary)
5. Execute the operation in processor registers
6. Store the results
7. Go back to step 1
Typical Processor Execution Cycle
Instruction Fetch: obtain instruction from program storage
Instruction Decode: determine required actions and instruction size
Operand Fetch: locate and obtain operand data
Execute: compute result value or status
Result Store: deposit results in register or storage for later use
Next Instruction: determine successor instruction
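A minimal sketch of this fetch-decode-execute loop for an invented toy machine (the instruction encoding, opcodes, and register contents here are purely illustrative, not MIPS):

```python
# Toy machine: each instruction is a tuple (opcode, dest, src1, src2).
program = [("add", 2, 0, 1), ("sub", 3, 2, 1), ("halt",)]
regs = [5, 3, 0, 0]      # four registers with arbitrary initial values
pc = 0                    # program counter

while True:
    instr = program[pc]              # instruction fetch
    op = instr[0]                    # instruction decode
    if op == "halt":
        break
    _, d, s1, s2 = instr             # operand fetch (from registers)
    if op == "add":                  # execute
        regs[d] = regs[s1] + regs[s2]
    elif op == "sub":
        regs[d] = regs[s1] - regs[s2]
    # result store happened in the assignment above
    pc += 1                          # determine successor instruction

print(regs)   # -> [5, 3, 8, 5]
```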
Instruction and Data Memory: Unified or Separate
(Figure: the programmer's view of a program is a list of instructions such as ADD, SUBTRACT, AND, OR, COMPARE; the computer's view is the corresponding binary encodings such as 01010, 01110, 10011, 10001, 11010, stored in memory and accessed by the CPU and I/O.)
Princeton (Von Neumann) Architecture
--- Data and instructions mixed in the same unified memory
--- Program as data
--- Storage utilization
--- Single memory interface
Harvard Architecture
--- Data & instructions in separate memories
--- Has advantages in certain high-performance implementations
--- Can optimize each memory
Basic Addressing Classes
Declining cost of registers
(Table: data-transfer instructions, arithmetic instructions, logical and bit-manipulation instructions, and shift instructions.)
Stack Architectures
 Stack architecture
 high frequency of memory accesses has made it
unattractive
 is useful for rapid interpretation of high-level
language programs
Infix expression
(A + B) × C + (D × E)
Postfix expression
A B + C × D E × +
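A minimal sketch of how a stack machine evaluates the postfix form above; the operand values in the example are arbitrary:

```python
def eval_postfix(tokens, values):
    """Evaluate a postfix expression the way a stack machine would."""
    stack = []
    for t in tokens:
        if t in ("+", "*"):
            b = stack.pop()           # right operand is on top
            a = stack.pop()
            stack.append(a + b if t == "+" else a * b)
        else:
            stack.append(values[t])   # push the operand's value
    return stack.pop()

# (A + B) * C + (D * E) with arbitrary values
values = {"A": 1, "B": 2, "C": 4, "D": 3, "E": 5}
print(eval_postfix(["A", "B", "+", "C", "*", "D", "E", "*", "+"], values))  # -> 27
```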
Accumulator Architectures
Register-Set Architectures
Register-to-Register: Load-Store Architectures
Register-to-Memory Architectures
Memory-to-Memory Architectures
Addressing Modes
Instruction Set Design Metrics
Static Metrics
• How many bytes does the program
occupy in memory?
Dynamic Metrics
• How many instructions are executed?
• How many bytes does the processor fetch to execute the
program?
• How many clocks are required per instruction?
• How "lean" a clock is practical?
ExecutionTime = 1 / Performance
              = Instruction Count × CPI × Cycle Time
              = Instructions × (Cycles / Instruction) × (Seconds / Cycle)
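A minimal numeric sketch of the execution-time equation; the instruction count, CPI, and clock rate are all hypothetical:

```python
instruction_count = 2_000_000   # dynamic instructions executed (assumed)
cpi = 1.5                       # average cycles per instruction (assumed)
cycle_time = 1e-9               # 1 GHz clock -> 1 ns per cycle (assumed)

execution_time = instruction_count * cpi * cycle_time   # seconds
print(f"{execution_time * 1e3:.1f} ms")   # -> 3.0 ms
```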
Types of ISA and examples:
1. RISC -> Playstation
2. CISC -> Intel x86
3. MISC -> INMOS Transputer
4. ZISC -> ZISC36
5. SIMD -> many GPUs
6. EPIC -> IA-64 Itanium
7. VLIW -> C6000 (Texas Instruments)
Problems of the Past
• In the past, it was believed that hardware
design was easier than compiler design
– Most programs were written in assembly
language
• Hardware concerns of the past:
– Limited and slower memory
– Few registers
The Solution
• Have instructions do more work, thereby
minimizing the number of instructions called
in a program
• Allow for variations of each instruction
– Usually variations in memory access
• Minimize the number of memory accesses
The Search for RISC
• Compilers became more prevalent
• The majority of CISC instructions were rarely
used
• Some complex instructions were slower than
a group of simple instructions performing an
equivalent task
– Too many instructions for designers to optimize
each one
RISC Architecture
• Small, highly optimized set of instructions
• Uses a load-store architecture
• Short execution time
• Pipelining
• Many registers
Pipelining
• Break instructions into steps
• Work on instructions like in an assembly line
• Allows for more instructions to be executed
in less time
• An n-stage pipeline is n times faster than a
non-pipelined processor (in theory)
RISC Pipeline Stages
• Fetch instruction
• Decode instruction
• Execute instruction
• Access operand
• Write result
– Note: Slight variations depending on processor
Without Pipelining
(Diagram: over clock cycles 1-10, Instr 1 occupies cycles 1-5 and Instr 2 occupies cycles 6-10.)
• Normally, you would perform the fetch, decode,
execute, operand-access, and write steps of an instruction
and then move on to the next instruction
With Pipelining
(Diagram: over clock cycles 1-9, Instr 1 starts in cycle 1, Instr 2 in cycle 2, and so on through Instr 5, so the instructions overlap in the pipeline.)
• The processor is able to perform each stage
simultaneously.
• If the processor is decoding an instruction, it may
also fetch another instruction at the same time.
Pipeline (cont.)
• Length of pipeline depends on the longest
step
• Thus in RISC, all instructions were made to
be the same length
• Each stage takes 1 clock cycle
• In theory, an instruction should be finished
each clock cycle
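A minimal sketch of the idealized cycle counts behind the "n times faster" claim, ignoring hazards; the instruction count and stage count are arbitrary:

```python
def unpipelined_cycles(n_instr, n_stages):
    # each instruction passes through every stage before the next one starts
    return n_instr * n_stages

def pipelined_cycles(n_instr, n_stages):
    # the first instruction fills the pipeline, then one finishes per cycle
    return n_stages + (n_instr - 1)

n_instr, n_stages = 1000, 5
print(unpipelined_cycles(n_instr, n_stages))   # -> 5000
print(pipelined_cycles(n_instr, n_stages))     # -> 1004, close to the ideal 5x
```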
Pipeline Problem
• Problem: An instruction may need to wait for
the result of another instruction
Pipeline Solution :
• Solution: Compiler may recognize which
instructions are dependent or independent of
the current instruction, and rearrange them to
run the independent one first
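A tiny illustration of that reordering idea: hoist an independent instruction into the slot after a load so the dependent instruction does not stall. The instruction format and the one-cycle load delay are invented for this sketch; a real compiler pass is far more involved.

```python
# Toy instruction stream: (opcode, destination, source registers)
prog = [
    ("load", "r1", ["r0"]),          # r1 <- MEM[r0]; result ready one cycle later
    ("add",  "r2", ["r1", "r3"]),    # depends on the load
    ("sub",  "r4", ["r5", "r6"]),    # independent of the load
]

def fill_load_delay(prog):
    """Hoist a later independent instruction into the slot after a load
    whose result is used by the very next instruction."""
    prog = list(prog)
    for i, (op, dest, _) in enumerate(list(prog)[:-2]):
        if op == "load" and dest in prog[i + 1][2]:
            for j in range(i + 2, len(prog)):
                if dest not in prog[j][2]:
                    prog.insert(i + 1, prog.pop(j))
                    break
    return prog

print([op for op, _, _ in fill_load_delay(prog)])   # -> ['load', 'sub', 'add']
```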
How to make pipelines faster
• Superpipelining
– Divide the stages of pipelining into more stages
• Ex: Split “fetch instruction” stage into two
stages
Super duper pipelining
Super scalar pipelining
 Run multiple pipelines in parallel
Dynamic pipeline
• Dynamic pipeline: Uses buffers to hold
instruction bits in case a dependent
instruction stalls
Why CISC Persists ?
• Most Intel and AMD chips are CISC x86
• Most PC applications are written for x86
• Intel spent more money improving the
performance of their chips
• Modern Intel and AMD chips incorporate
elements of pipelining
– During decoding, x86 instructions are split into
smaller pieces
Superscalar and VLIW
Architectures
VLSI ARCHITECTURES
Outline
• Types of architectures
• Superscalar
• Differences between CISC, RISC and VLIW
• VLIW ( very long instruction word )
VLIW Goals:
Flexible enough
Match well technology
Very Long Instruction Word
o Very long instruction word or VLIW refers to a processor architecture designed to
take advantage of instruction level parallelism
VLIW philosophy:
– “dumb” hardware
– “intelligent” compiler
VLIW - History
• Floating Point Systems Array Processor
– very successful in 70’s
– all latencies fixed; fast memory
• Multiflow
– Josh Fisher (now at HP)
– 1980’s Mini-Supercomputer
• Cydrome
– Bob Rau (now at HP)
– 1980’s Mini-Supercomputer
• Tera
– Burton Smith
– 1990’s Supercomputer
– Multithreading
• Intel IA-64 (Intel & HP)
VLIW Processors
Goal of the hardware design:
• reduce hardware complexity
• to shorten the cycle time for better performance
• to reduce power requirements
How VLIW designs reduce hardware complexity ?
1. less multiple-issue hardware
– no dependence checking for instructions within a bundle
– can be fewer paths between instruction issue slots & FUs
2. simpler instruction dispatch
– no out-of-order execution, no instruction grouping
3. ideally no structural hazard checking logic
• Reduction in hardware complexity affects cycle time & power
consumption
VLIW Processors
More compiler support to increase ILP: the compiler detects hazards & hides latencies
• structural hazards
– no 2 operations to the same functional unit
– no 2 operations to the same memory bank
• hiding latencies
– data prefetching
– hoisting loads above stores
• data hazards
– no data hazards among instructions in a bundle
• control hazards
– predicated execution
VLIW: Definition
• Multiple independent Functional Units
• Instruction consists of multiple independent instructions
• Each of them is aligned to a functional unit
• Latencies are fixed
– Architecturally visible
• Compiler packs instructions into a VLIW and also schedules all
hardware resources
• Entire VLIW issues as a single unit
• Result: ILP with simple hardware
– compact, fast hardware control
– fast clock
– At least, this is the goal
Introduction
o Instruction of a VLIW processor consists of multiple independent
operations grouped together.
o There are multiple independent Functional Units in VLIW processor
architecture.
o Each operation in the instruction is aligned to a functional unit.
o All functional units share the use of a common large register file.
o This type of processor architecture is intended to allow higher
performance without the inherent complexity of some other
approaches.
VLIW History
The term was coined by J. A. Fisher (Yale) in 1983
– ELI-512 (prototype), Trace (commercial)
Origin lies in horizontal microcode optimization
Another pioneering work by B. Ramakrishna Rau in 1982
– Polycyclic architecture (prototype), Cydra-5 (commercial)
Recent developments
– TriMedia (Philips), TMS320C6x (Texas Instruments)
"Bob" Rau
• Bantwal Ramakrishna "Bob" Rau (1951
– December 10, 2002) was a computer
engineer and HP Fellow. Rau was a founder
and chief architect of Cydrome, where he
helped develop the Very long instruction
word technology that is now standard in
modern computer processors. Rau was the
recipient of the 2002 Eckert–Mauchly
Award.
1984: Co-founded Cydrome Inc. and was the chief
architect of the Cydra 5 mini-supercomputer.
1989: Joined Hewlett Packard and started HP Lab's research
program in VLIW and instruction-level parallel processing.
Director of the Compiler and Architecture Research (CAR)
program, which during the 1990s, developed advanced
compiler technology for Hewlett Packard and Intel computers.
At HP, also worked on PICO (Program In, Chip Out) project
to take an embedded application and to automatically design
highly customized computing hardware that is specific to that
application, as well as any compiler that might be needed.
2002: passed away after losing a long battle with cancer
The VLIW Architecture
• A typical VLIW (very long instruction word) machine
has instruction words hundreds of bits in length.
• Multiple functional units are used concurrently in a
VLIW processor.
• All functional units share the use of a
common large register file.
Parallel Operating Environment (POE)
• Compiler creates complete plan of run-time execution
– At what time and using what resource
– POE communicated to hardware via the ISA
– Processor obediently follows POE
– No dynamic scheduling, out of order execution
• These would second-guess the compiler’s plan
• Compiler allowed to play the statistics
– Many types of info only available at run-time
• branch directions, pointer values
– Traditionally compilers behave conservatively → handle the worst-case
possibility
– Allow the compiler to gamble when it believes the odds are in its favor
• Profiling
• Expose micro-architecture to the compiler
– memory system, branch execution
VLIW Processors
Compiler support to increase ILP
• compiler creates each VLIW word
• need for good code scheduling is greater than with in-order issue superscalars
• an instruction doesn’t issue if even one of its operations can’t
(all operations in the word issue together or not at all)
• techniques for increasing ILP (a small sketch of the first technique follows this list)
1. loop unrolling
2. software pipelining (schedules instructions from
different iterations together)
3. aggressive inlining (function becomes part of the
caller code)
4. trace scheduling (schedule beyond basic block
boundaries)
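A minimal sketch of loop unrolling, the first technique above: the body is replicated so each iteration contains several independent operations that can be scheduled in parallel, with fewer branches. The unroll factor of 4 and the arrays are arbitrary, and n is assumed divisible by 4.

```python
n = 1024
a = list(range(n))
b = list(range(n))
c = [0] * n

# Rolled loop: one add (and one branch check) per element.
for i in range(n):
    c[i] = a[i] + b[i]

# Unrolled by 4: four independent adds per iteration, one branch check per
# four elements, and more work exposed for parallel scheduling.
for i in range(0, n, 4):
    c[i]     = a[i]     + b[i]
    c[i + 1] = a[i + 1] + b[i + 1]
    c[i + 2] = a[i + 2] + b[i + 2]
    c[i + 3] = a[i + 3] + b[i + 3]
```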
Different Approaches
Other approaches to improving performance in processor architectures :
o Pipelining
Breaking up instructions into sub-steps so that instructions can be
executed partially at the same time
o Superscalar architectures
Dispatching individual instructions to be executed completely
independently in different parts of the processor
o Out-of-order execution
Executing instructions in an order different from the program
Parallel processing
Processing instructions in parallel requires three
major tasks:
1. checking dependencies between instructions to
determine which instructions can be grouped
together for parallel execution;
2. assigning instructions to the functional units on
the hardware;
3. determining when instructions are initiated and placed
together into a single word.
ILP
Consider the following program:
op1 e = a + b
op2 f = c + d
op3 m = e * f
o Operation 3 depends on the results of operations 1 and 2, so it
cannot be calculated until both of them are completed
o However, operations 1 and 2 do not depend on any other
operation, so they can be calculated simultaneously
o If we assume that each operation can be completed in one unit of
time then these three instructions can be completed in a total of
two units of time giving an ILP of 3/2.
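A minimal sketch of how a VLIW-style scheduler could group these three operations into long-instruction words, assuming unit latencies and enough functional units; the tuple representation is invented for illustration:

```python
# (name, destination, source operands) for the three operations above
ops = [("op1", "e", ["a", "b"]),
       ("op2", "f", ["c", "d"]),
       ("op3", "m", ["e", "f"])]

words = []
ready = {"a", "b", "c", "d"}          # values available before the program runs
pending = list(ops)
while pending:
    # every pending operation whose sources are ready goes into one wide word
    word = [op for op in pending if set(op[2]) <= ready]
    words.append([name for name, _, _ in word])
    ready |= {dest for _, dest, _ in word}
    pending = [op for op in pending if op not in word]

print(words)   # -> [['op1', 'op2'], ['op3']]: 3 operations in 2 cycles, ILP = 3/2
```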
Two approaches to ILP
oHardware approach:
Works upon dynamic parallelism where
scheduling of instructions is at run time
oSoftware approach:
Works on static parallelism where
scheduling of instructions is by compiler
VLIW COMPILER
o Compiler is responsible for static scheduling of instructions in VLIW
processor.
o Compiler finds out which operations can be
executed in parallel in the program.
o It groups these operations together in a single instruction, which is the
very long instruction word.
o Compiler ensures that an operation is not issued before its operands
are ready.
VLIW Example
(Block diagram: an I-fetch & issue unit feeds multiple FUs and memory ports, all connected to a multi-ported register file.)
Working
o Long instruction words are fetched from the memory
o A common multi-ported register file for fetching the operands and
storing the results.
o Parallel random access to the register file is possible through the
read/write cross bar.
o Execution in the functional units is carried out concurrently with the
load/store operation of data between RAM and the register file.
o One or multiple register files for FX and FP data.
o Rely on compiler to find parallelism and schedule dependency free
program code.
Major categories
VLIW – Very Long Instruction Word
EPIC – Explicitly Parallel Instruction Computing
IA-64 EPIC
Explicitly Parallel Instruction Computing, VLIW
2001 800 MHz Itanium IA-64 implementation
Bundle of instructions
• 128 bit bundles
• 3 41-bit instructions/bundle
• 2 bundles can be issued at once
• if issue one, get another
• less delay in bundle issue
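A minimal sketch of the bundle arithmetic above. The 5-bit template field that brings three 41-bit instructions up to 128 bits is an IA-64 detail not shown on the slide; treat the layout as illustrative.

```python
INSTR_BITS = 41
INSTRS_PER_BUNDLE = 3
TEMPLATE_BITS = 5    # encodes instruction types and stop positions (IA-64)

bundle_bits = INSTRS_PER_BUNDLE * INSTR_BITS + TEMPLATE_BITS
print(bundle_bits)           # -> 128 bits, i.e. one 16-byte bundle
print(bundle_bits // 8)      # -> 16 bytes
```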
Data path: A simple VLIW Architecture
(Diagram: several FUs share a single central register file.)
Scalability? Access time, area, and power consumption increase sharply with
the number of register ports.
Data path: Clustered VLIW Architecture (distributed register file)
(Diagram: clusters of FUs, each with its own register file, connected through an interconnection network.)
Coarse-grain FUs with a VLIW core
(Diagram: coarse-grain functional units such as MULT, RAM, and ALU, each with registers Reg1 and Reg2, connected through a multiplexer network and controlled by microcode, an instruction register, a program counter, and control logic.)
Embedded (co)-processors as FUs in a VLIW architecture
Application-Specific FUs
(Diagram: a functional unit is characterized by its functionality, number of inputs, number of outputs, latency, initiation interval, and I/O time shape.)
Superscalar Processors
• Superscalar processors are designed to exploit more
instruction-level parallelism in user programs.
• Only independent instructions can be executed in parallel
without causing a wait state.
• The amount of instruction-level parallelism varies widely
depending on the type of code being executed.
• Superscalar
– Operations are sequential
– Hardware figures out resource assignment, time of execution
Pipelining in Superscalar Processors
• In order to fully utilise a superscalar processor of
degree m, m instructions must be executable in
parallel. This situation may not be true in all clock
cycles. In that case, some of the pipelines may be
stalling in a wait state.
• In a superscalar processor, the simple operation
latency should require only one cycle, as in the base
scalar processor.
Superscalar Execution
Superscalar Implementation
• Simultaneously fetch multiple instructions
• Logic to determine true dependencies involving
register values
• Mechanisms to communicate these values
• Mechanisms to initiate multiple instructions in
parallel
• Resources for parallel execution of multiple
instructions
• Mechanisms for committing process state in
correct order
Difference Between VLIW
&
Superscalar Architecture
Why Superscalar Processors are
commercially more popular as
compared to VLIW processors ?
Binary code compatibility among scalar &
superscalar processors of the same family
Same compiler works for all processors (scalars
and superscalars) of the same family
Assembly programming of VLIWs is tedious
Code density in VLIWs is very poor
- Instruction encoding schemes (area vs. performance trade-off)
Superscalars vs. VLIW
VLIW requires a more complex compiler
Superscalars can more efficiently execute
pipeline-independent code
• consequence: you don’t have to recompile if the
implementation changes
Comparison: CISC, RISC, VLIW
VLSI DESIGN GROUP – METS SCHOOL OF ENGINEERING, MALA
Advantages of VLIW
Compiler prepares fixed packets of multiple
operations that give the full "plan of execution"
– dependencies are determined by compiler and used to
schedule according to function unit latencies
– function units are assigned by compiler and correspond to
the position within the instruction packet ("slotting")
– compiler produces fully-scheduled, hazard-free code =>
hardware doesn't have to "rediscover" dependencies or
schedule
Disadvantages of VLIW
Compatibility across implementations is a major
problem
– VLIW code won't run properly with different number
of function units or different latencies
– unscheduled events (e.g., cache miss) stall entire
processor
Code density is another problem
– low slot utilization (mostly nops)
– reduce nops by compression ("flexible VLIW",
"variable-length VLIW")
References
1. Advanced Computer Architectures: Parallelism, Scalability, Programmability, K. Hwang, 1993.
2. M. Smotherman, "Understanding EPIC Architectures and Implementations" (pdf), http://www.cs.clemson.edu/~mark/464/acmse_epic.pdf
3. Lecture notes of Mark Smotherman, http://www.cs.clemson.edu/~mark/464/hp3e4.html
4. An Introduction To Very-Long Instruction Word (VLIW) Computer Architecture, Philips Semiconductors, http://www.semiconductors.philips.com/acrobat_download/other/vliw-wp.pdf
5. Texas Instruments, Tutorial on TMS320C6000 VelociTI Advanced VLIW Architecture, http://www.acm.org/sigs/sigmicro/existing/micro31/pdf/m31_seshan.pdf
Thanks
Editor's Notes
  • #33: “Speed” is often an important design criterion. However, other applications have other criteria – e.g., power, reliability, EMI, …
  • #75: Yale is near New York.