ALTERNATIVE ARCHITECTURE
RISC MACHINES
• As RISC machines were being developed, the term “reduced” became somewhat of
a misnomer, and is even more so now. The original idea was to provide a set of
minimal instructions that could carry out all essential operations: data
movement, ALU operations, and branching.
• RISC systems access memory only with explicit load and store instructions.
• In CISC systems, many different kinds of instructions access memory, making
instruction length variable and fetch-execute time unpredictable.
RISC MACHINES
• The difference between CISC and RISC becomes evident through the basic
computer performance equation:

    time/program = (instructions/program) × (cycles/instruction) × (time/cycle)

• RISC systems shorten execution time by reducing the number of clock cycles per
instruction.
• CISC systems improve performance by reducing the number of instructions per
program.
• Add to this the fact that RISC clock cycles are often shorter than CISC clock
cycles, and it should be clear that even though there are more instructions, the
actual execution time is less for RISC than for CISC.
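• To make the equation concrete, here is a small Python sketch. The instruction counts, CPI values, and cycle times are illustrative assumptions, not measurements:

# Basic performance equation, with made-up numbers for illustration:
# time/program = (instructions/program) x (cycles/instruction) x (time/cycle)

def exec_time_ns(instructions, cpi, cycle_time_ns):
    """Seconds-per-program equation, computed here in nanoseconds."""
    return instructions * cpi * cycle_time_ns

# CISC: fewer instructions, but more cycles each and a slower clock (assumed).
cisc = exec_time_ns(instructions=1_000_000, cpi=4.0, cycle_time_ns=2.0)
# RISC: ~50% more instructions, but CPI near 1 and a faster clock (assumed).
risc = exec_time_ns(instructions=1_500_000, cpi=1.2, cycle_time_ns=1.0)

print(f"CISC: {cisc / 1e6:.1f} ms, RISC: {risc / 1e6:.1f} ms")
# CISC: 8.0 ms, RISC: 1.8 ms (more instructions, yet less total time)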
FLYNN’S TAXONOMY
• Flynn’s taxonomy considers two factors:
1. The number of instruction streams
2. The number of data streams that flow into the processor
• In other words, it classifies a machine by the number of processors (instruction
streams) it has and the number of data streams that flow into them.
• In data-driven, or dataflow, architectures the sequence of processor events is
based on the characteristics of the data, not the characteristics of the instructions.
• Flynn’s taxonomy falls short in a number of ways:
• First, there appears to be no need for MISD machines.
• Second, parallelism is not homogeneous. This assumption ignores the contribution of
specialized processors.
• Third, it provides no straightforward way to distinguish architectures of the MIMD
category.
FLYNN’S TAXONOMY
• The four combinations of multiple processors and multiple data paths are
described by Flynn as:
• SISD: Single instruction stream, single data stream. These are classic
uniprocessor systems.
• SIMD: Single instruction stream, multiple data streams. Executes a single
instruction on multiple data elements at the same time (in parallel), exploiting
data-level parallelism. All processors execute the same instruction
simultaneously (see the sketch after this list).
• MIMD: Multiple instruction streams, multiple data streams. These are today’s
parallel architectures. Multiple processors function independently and
asynchronously, executing different instructions on different data.
• MISD: Multiple instruction streams operating on a single data stream. Many
processors perform different operations on the same data stream.
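• A software analogy for SIMD, sketched with NumPy: the hardware lanes are only simulated, but one operation is applied across many data elements at once:

import numpy as np

# SIMD in miniature: a single operation (add) applied to multiple data
# elements at once, rather than one element per instruction.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

c = a + b        # one "instruction", many data streams
print(c)         # [11. 22. 33. 44.]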
PARALLEL AND MULTIPROCESSOR ARCHITECTURES
• Parallel processing is capable of economically increasing system throughput
while providing better fault tolerance.
• The limiting factor is that no matter how well an algorithm is parallelized, there is
always some portion that must be done sequentially.
• Additional processors sit idle while the sequential work is performed.
• Thus, it is important to keep in mind that an n-fold increase in processing power
does not necessarily result in an n-fold increase in throughput.
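• This limit is usually formalized as Amdahl’s law; a quick sketch with assumed fractions:

def speedup(parallel_fraction, n_processors):
    """Amdahl's law: the serial fraction caps the overall speedup."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_processors)

# Even if 95% of the work parallelizes perfectly, 100 processors do not
# come close to a 100-fold gain:
print(f"{speedup(0.95, 100):.1f}x")   # ~16.8x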
PARALLEL AND MULTIPROCESSOR ARCHITECTURES
• Superscalar and VLIW
• Superscalar
• Superpipelining
• VLIW
• Vector Processors
• Vector Registers
• Distributed Computing
• Remote Procedure Calls (RPC)
• Superscalar and VLIW
• Superscalar architectures include multiple execution units such as specialized integer and
floating-point adders and multipliers.
• Very long instruction word (VLIW) architectures differ from superscalar architectures because
the VLIW compiler, instead of a hardware decoding unit, packs independent instructions into one
long instruction that is sent down the pipeline to the execution units.
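• A toy sketch of the VLIW idea, using an invented three-slot bundle format rather than any real machine’s encoding; the “compiler” packs instructions that share no data dependencies into one long instruction word:

def pack_bundles(instructions, slots=3):
    """Greedy VLIW-style packing: issue independent instructions together."""
    bundles, current, written = [], [], set()
    for dest, src1, src2 in instructions:
        # A new instruction conflicts with the current bundle if it reads or
        # writes a register that the bundle already writes.
        conflict = src1 in written or src2 in written or dest in written
        if conflict or len(current) == slots:
            bundles.append(current)          # close the long instruction word
            current, written = [], set()
        current.append((dest, src1, src2))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

prog = [("r1", "r2", "r3"),   # r1 = r2 + r3   (independent)
        ("f1", "f2", "f3"),   # f1 = f2 * f3   (independent)
        ("r4", "r1", "r5")]   # r4 = r1 + r5   (needs r1 -> new bundle)
for i, bundle in enumerate(pack_bundles(prog)):
    print(f"bundle {i}: {bundle}")
# bundle 0 holds the two independent instructions; the dependent one
# starts bundle 1.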
PARALLEL AND MULTIPROCESSOR ARCHITECTURES
• Vector Processors
• Vector computers are processors that operate on entire vectors or matrices at once
(see the first sketch after this list).
• Vector processors can be categorized according to how operands are accessed.
• Register-register vector processors require all operands to be in registers.
• Memory-memory vector processors allow operands to be sent from memory
directly to the arithmetic units.
• Distributed Computing
• Distributed computing is another form of multiprocessing. However, the term distributed
computing means different things to different people.
• Remote procedure calls (RPCs) extend the concept of distributed computing
and help provide the necessary transparency for resource sharing (see the second
sketch after this list).
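• First sketch (vector processors): contrasting scalar, one-element-at-a-time code with a whole-vector operation. NumPy stands in for the vector hardware here; the register-register versus memory-memory distinction is noted only in the comments:

import numpy as np

# Scalar style: one add per loop iteration, with loop overhead each time.
def scalar_add(a, b):
    c = [0.0] * len(a)
    for i in range(len(a)):
        c[i] = a[i] + b[i]
    return c

# Vector style: conceptually a single instruction adds entire vectors.
# On a register-register machine, a and b would first be loaded into
# vector registers; on a memory-memory machine they would stream from
# memory straight into the arithmetic units.
def vector_add(a, b):
    return np.asarray(a) + np.asarray(b)

print(scalar_add([1.0, 2.0], [3.0, 4.0]))   # [4.0, 6.0]
print(vector_add([1.0, 2.0], [3.0, 4.0]))   # [4. 6.]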
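• Second sketch (RPC): a minimal remote procedure call using Python’s standard xmlrpc module. The client invokes add() as if it were local and the library ships the call across the network; the host and port are illustrative:

# --- server.py ---
from xmlrpc.server import SimpleXMLRPCServer

def add(x, y):
    return x + y

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")   # expose add() to remote callers
server.serve_forever()

# --- client.py ---
# import xmlrpc.client
# proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# print(proxy.add(2, 3))   # looks like a local call; runs on the server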
ALTERNATIVE PARALLEL PROCESSING APPROACHES
• Some people argue that real breakthroughs in computational power, the kind that
will enable us to solve today’s intractable problems, will occur only by
abandoning the von Neumann model.
• Numerous efforts are now under way to devise systems that implement new ways
of thinking about computers and computation.
ALTERNATIVE PARALLEL PROCESSING APPROACHES
• They include:
• Dataflow Computing
• Neural Network
• Systolic Array
• Dataflow Computing
• Von Neumann machines exhibit sequential control flow: a linear stream of instructions is
fetched from memory, and the instructions act upon the data.
• In dataflow computing, program execution is driven directly by data dependencies.
• There is no program counter and no shared storage.
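• A toy dataflow sketch (the Node class is invented for illustration, not a real dataflow machine): each node fires as soon as all of its input tokens arrive, so execution order is set by data dependencies rather than by a program counter:

class Node:
    def __init__(self, op, n_inputs, consumers):
        self.op, self.n_inputs = op, n_inputs
        self.tokens, self.consumers = [], consumers  # consumers: downstream nodes

    def receive(self, value):
        self.tokens.append(value)
        if len(self.tokens) == self.n_inputs:   # all operands present: fire
            result = self.op(*self.tokens)
            self.tokens = []
            for c in self.consumers:
                c.receive(result)

# (a + b) * (a - b): the add and subtract nodes may fire in either order;
# multiply fires only once both of its operands have arrived.
out = Node(print, 1, [])
mul = Node(lambda x, y: x * y, 2, [out])
add = Node(lambda x, y: x + y, 2, [mul])
sub = Node(lambda x, y: x - y, 2, [mul])

a, b = 7, 3
add.receive(a); add.receive(b)   # fires: sends 10 to mul
sub.receive(a); sub.receive(b)   # fires: sends 4 to mul; mul fires; prints 40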
ALTERNATIVE PARALLEL PROCESSING APPROACHES
• Neural Network
• Neural network computers consist of a large number of simple processing elements that
individually solve a small piece of a much larger problem.
• They are particularly useful in dynamic situations that are an accumulation of previous behavior,
and where an exact algorithmic solution cannot be formulated.
• Neural network processing elements (PEs) multiply a set of input values by an adaptable set of
weights to yield a single output value (see the first sketch after this list).
• Systolic Array
• Systolic arrays, a variation of SIMD computers, have simple processors that process data by
circulating it through vector pipelines (see the second sketch after this list).
• Systolic arrays can sustain great throughput because they employ a high degree of parallelism.
• Connections are short, and the design is simple and scalable. They are robust, efficient, and cheap
to produce. They are, however, highly specialized and limited as to the types of problems they
can solve.
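• First sketch (a neural-network PE): inputs multiplied by adaptable weights and summed to a single output. The bias term and sigmoid activation are common additions assumed here:

import math

def processing_element(inputs, weights, bias=0.0):
    """Weighted sum of inputs, squashed to a single output value."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

print(processing_element([0.5, 0.9, -0.3], [0.8, 0.2, 0.4]))   # ~0.61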
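• Second sketch (a systolic array): a 1-D array computing a FIR filter, the classic systolic example, simulated in Python; the two-stage input delay is one standard design choice. Weights stay resident in the PEs while data and partial sums pulse through in lockstep:

def systolic_fir(w, x):
    """Toy 1-D systolic array: FIR filter with weights resident in the PEs."""
    n = len(w)
    x_pipe = [0.0] * (2 * n)   # inputs move at half speed: 2 stages per PE
    y_pipe = [0.0] * n         # partial sums move one PE per tick
    out = []
    for t in range(len(x) + 2 * n - 2):
        # One clock tick: data advances one stage, then every PE fires.
        x_pipe = [x[t] if t < len(x) else 0.0] + x_pipe[:-1]
        y_pipe = [0.0] + y_pipe[:-1]
        for i in range(n):
            y_pipe[i] += w[i] * x_pipe[2 * i]   # multiply-accumulate in PE i
        if t >= n - 1:
            out.append(y_pipe[n - 1])           # a finished sum exits the array
    return out

print(systolic_fir([1, 2, 3], [1, 1, 1]))   # [1.0, 3.0, 6.0, 5.0, 3.0]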
CONCLUSION
• The RISC-versus-CISC debate is increasingly a comparison of chip
architectures. What really matters is program execution time, and both
RISC and CISC designers will continue to improve performance.
• Flynn’s taxonomy categorizes architectures by the number of instruction
streams and data streams. MIMD machines should be further divided
into those that use shared memory and those that do not.
• Very long instruction word (VLIW) architectures differ from superscalar
architectures because the compiler, instead of a decoding unit, creates long
instructions.
• New architectures are being devised to solve intractable problems. These new
architectures include dataflow computers, neural networks, and systolic arrays.