Microprocessor IO module and its different functions
[Figure: a single bus connecting the processor, memory, and I/O devices 1 through n]
•Multiple I/O devices may be connected to the processor and the memory via a bus.
•Bus consists of three sets of lines to carry address, data and control signals.
•Each I/O device is assigned a unique address.
•To access an I/O device, the processor places the address on the address lines.
•The device recognizes the address, and responds to the control signals.
 I/O devices and the memory may share the same address
space:
 Memory-mapped I/O.
 Any machine instruction that can access memory can be used to transfer data to or from an I/O device (see the sketch after this list).
 Simpler software.
 I/O devices and the memory may have different address
spaces:
 Special instructions to transfer data to and from I/O devices.
 I/O devices may have to deal with fewer address lines.
 I/O address lines need not be physically separate from memory address lines.
 In fact, address lines may be shared between I/O devices and memory, with a
control signal to indicate whether it is a memory address or an I/O address.
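To make the memory-mapped case concrete, the following minimal C sketch treats device registers as ordinary memory locations; the addresses and register names are hypothetical, chosen only for illustration.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers: the addresses and layout
 * are assumptions for illustration, not those of any particular machine. */
#define DEV_STATUS  ((volatile uint8_t *)0x4000)   /* status register */
#define DEV_DATA    ((volatile uint8_t *)0x4001)   /* data register   */

/* With memory-mapped I/O, an ordinary load or store reaches the device. */
uint8_t read_device_byte(void)
{
    return *DEV_DATA;   /* same kind of instruction as any memory read */
}
```

With a separate I/O address space, the same transfer would instead use special I/O instructions (for example, IN and OUT on x86) rather than ordinary loads and stores.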
[Figure: I/O interface circuit connecting an input device to the bus — an address decoder, data and status registers, and control circuits attached to the address, data, and control lines]
•I/O device is connected to the bus using an I/O interface circuit which has:
- Address decoder, control circuit, and data and status registers.
•Address decoder decodes the address placed on the address lines, thus enabling the device to recognize its address.
•Data register holds the data being transferred to or from the processor.
•Status register holds information necessary for the operation of the I/O device.
•Data and status registers are connected to the data lines, and have unique
addresses.
•I/O interface circuit coordinates I/O transfers.
 The rate of transfer to and from I/O devices is slower than
the speed of the processor. This creates the need for
mechanisms to synchronize data transfers between them.
 Program-controlled I/O:
 Processor repeatedly monitors a status flag to achieve the necessary synchronization (a minimal polling sketch follows this list).
 Processor polls the I/O device.
 Two other mechanisms used for synchronizing data transfers between the processor and I/O devices:
 Interrupts.
 Direct Memory Access.
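The polling sketch mentioned above might look as follows in C; the register addresses, the READY bit position, and the assumption that reading the data register clears the flag are illustrative, not details of any particular device.

```c
#include <stdint.h>

/* Hypothetical keyboard interface registers and status bit. */
#define KBD_STATUS  ((volatile uint8_t *)0x4000)
#define KBD_DATA    ((volatile uint8_t *)0x4001)
#define KBD_READY   0x01u   /* assumed "data ready" flag in the status register */

/* Program-controlled I/O: the processor busy-waits on the status flag. */
uint8_t poll_read_keyboard(void)
{
    while ((*KBD_STATUS & KBD_READY) == 0) {
        /* busy-wait: the processor does no useful work while polling */
    }
    return *KBD_DATA;   /* assumed to clear the ready flag in the interface */
}
```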
 In program-controlled I/O, when the processor
continuously monitors the status of the device, it does
not perform any useful tasks.
 An alternate approach would be for the I/O device to
alert the processor when it becomes ready.
 It does so by sending a hardware signal called an interrupt to the processor.
 At least one of the bus control lines, called an interrupt-request line, is dedicated for this purpose.
 Processor can perform other useful tasks while it is
waiting for the device to be ready.
[Figure: program 1 is executing the instruction at address i when an interrupt occurs; control transfers to the interrupt-service routine at address M and later returns to address i+1]
•Processor is executing the instruction located at address i when an interrupt occurs.
•Routine executed in response to an interrupt request is called the interrupt-service routine.
•When an interrupt occurs, control must be transferred to the interrupt service routine.
•But before transferring control, the current contents of the PC (i+1) must be saved in a known location.
•This will enable the return-from-interrupt instruction to resume execution at i+1.
•The return address, that is, the contents of the PC, is usually stored on the processor stack.
 Treatment of an interrupt-service routine is very
similar to that of a subroutine.
 However there are significant differences:
 A subroutine performs a task that is required by the calling program.
 Interrupt-service routine may not have anything in common with the
program it interrupts.
 Interrupt-service routine and the program that it interrupts may belong
to different users.
 As a result, before branching to the interrupt-service routine, not only
the PC, but other information such as condition code flags, and
processor registers used by both the interrupted program and the
interrupt service routine must be stored.
 This will enable the interrupted program to resume execution upon
return from interrupt service routine.
 Saving and restoring information can be done automatically by the processor
or explicitly by program instructions.
 Saving and restoring registers involves memory transfers:
 Increases the total execution time.
 Increases the delay between the time an interrupt request is received, and
the start of execution of the interrupt-service routine. This delay is called
interrupt latency.
 In order to reduce the interrupt latency, most processors save only the minimal
amount of information:
 This minimal amount of information includes Program Counter and
processor status registers.
 Any additional information that must be saved, must be saved explicitly by the
program instructions at the beginning of the interrupt service routine.
 When a processor receives an interrupt-request,
it must branch to the interrupt service routine.
 It must also inform the device that it has
recognized the interrupt request.
 This can be accomplished in two ways:
 Some processors have an explicit interrupt-acknowledge
control signal for this purpose.
 In other cases, the data transfer that takes place between the
device and the processor can be used to inform the device.
 Interrupt-requests interrupt the execution of a
program, and may alter the intended sequence of
events:
 Sometimes such alterations may be undesirable, and must not be allowed.
 For example, the processor may not want to be interrupted by the same
device while executing its interrupt-service routine.
 Processors generally provide the ability to enable
and disable such interruptions as desired.
 One simple way is to provide machine instructions
such as Interrupt-enable and Interrupt-disable for
this purpose.
 To avoid interruption by the same device during
the execution of an interrupt service routine:
 First instruction of an interrupt service routine can be Interrupt-disable.
 Last instruction of an interrupt service routine can be Interrupt-enable.
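A minimal sketch of this discipline is shown below; disable_interrupts(), enable_interrupts(), and service_device() are placeholders standing in for the machine's Interrupt-disable and Interrupt-enable instructions and the device-specific transfer code, not functions of any real library.

```c
/* Placeholders for the machine-level operations named above. */
static void disable_interrupts(void) { /* would execute Interrupt-disable */ }
static void enable_interrupts(void)  { /* would execute Interrupt-enable  */ }
static void service_device(void)     { /* would move data and clear the request */ }

/* Interrupt-service routine following the enable/disable discipline above. */
void device_isr(void)
{
    disable_interrupts();   /* first instruction: block further interrupts */
    service_device();       /* transfer the data, acknowledge the device   */
    enable_interrupts();    /* last instruction: allow interrupts again    */
}
```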
Handling Multiple Devices
Multiple I/O devices may be connected to the processor and
the memory via a bus. Some or all of these devices may be
capable of generating interrupt requests.
 Each device operates independently, and hence no definite order can be imposed on how the devices generate interrupt requests.
How does the processor know which device has generated
an interrupt?
How does the processor know which interrupt service
routine needs to be executed?
When the processor is executing an interrupt service routine for one device, can another device interrupt the processor?
If two interrupt-requests are received simultaneously, then
how to break the tie?
Registers in keyboard and display interfaces
[Figure: DATAIN, DATAOUT, STATUS, and CONTROL registers; the STATUS register holds the KIRQ, DIRQ, SIN, and SOUT bits, and the CONTROL register holds the KEN and DEN bits]
Consider a simple arrangement where all devices send their
interrupt-requests over a single control line in the bus.
When the processor receives an interrupt request over this
control line, how does it know which device is requesting an
interrupt?
This information is available in the status register of the device
requesting an interrupt:
 The status register of each device has an IRQ bit which it
sets to 1 when it requests an interrupt.
Interrupt service routine can poll the I/O devices connected to
the bus. The first device with IRQ equal to 1 is the one that is
serviced.
The polling mechanism is easy to implement, but it is time consuming to query the status bits of all the I/O devices connected to the bus.
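A rough sketch of such a polling interrupt-service routine is given below, assuming a hypothetical device table; the register addresses, IRQ masks, and handler functions are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry per device on the bus; all values here are hypothetical. */
struct io_device {
    volatile uint8_t *status;     /* device status register             */
    uint8_t           irq_mask;   /* IRQ bit within the status register */
    void            (*service)(void);
};

static void service_keyboard(void) { /* device-specific transfer (stub) */ }
static void service_display(void)  { /* device-specific transfer (stub) */ }

static struct io_device devices[] = {
    { (volatile uint8_t *)0x4000, 0x02u, service_keyboard },
    { (volatile uint8_t *)0x4004, 0x04u, service_display  },
};

/* Common interrupt-service routine: poll each device's IRQ bit and
 * service the first device found with its IRQ bit set to 1. */
void common_isr(void)
{
    for (size_t i = 0; i < sizeof devices / sizeof devices[0]; i++) {
        if (*devices[i].status & devices[i].irq_mask) {
            devices[i].service();
            return;     /* priority is simply the polling order */
        }
    }
}
```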
An alternative approach is Vectored Interrupts
The device requesting an interrupt may identify itself directly to
the processor.
 Device can do so by sending a special code (4 to 8 bits) to the processor over the bus.
 Code supplied by the device may represent a part of the
starting address of the interrupt-service routine.
 The remainder of the starting address is obtained by the
processor based on other information such as the range of
memory addresses where interrupt service routines are
located.
Usually the location pointed to by the interrupting device is used
to store the starting address of the interrupt-service routine. The
processor reads this address, called interrupt vector, and loads it
into the PC.
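One plausible way the vector lookup could be expressed is sketched below; the table base address, the width of the device code, and the entry layout are assumptions for illustration, not the scheme of any specific processor.

```c
#include <stdint.h>

#define VECTOR_TABLE_BASE  0x0100u   /* assumed base of the interrupt-vector area */

typedef void (*isr_t)(void);

/* The device supplies a small code; the processor combines it with the
 * known table base to find the interrupt vector, i.e. the word holding
 * the starting address of the interrupt-service routine. */
isr_t lookup_isr(uint8_t device_code)
{
    uintptr_t vector_addr = VECTOR_TABLE_BASE
                          + (uintptr_t)device_code * sizeof(isr_t);
    isr_t *vector = (isr_t *)vector_addr;
    return *vector;   /* the processor would load this address into the PC */
}
```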
Interrupt Nesting
Previously, before the processor started executing the interrupt
service routine for a device, it disabled the interrupts from the
device.
In general, the same arrangement is used when multiple devices can send interrupt requests to the processor.
 During the execution of the interrupt service routine of a device, the processor does not accept interrupt requests from any other device.
 Since the interrupt service routines are usually short, the delay
that this causes is generally acceptable.
However, for certain devices this delay may not be acceptable.
 Which devices can be allowed to interrupt a processor when it
is executing an interrupt service routine of another device?
 I/O devices are organized in a priority structure:
 An interrupt request from a high-priority device is accepted
while the processor is executing the interrupt service routine
of a low priority device.
 A priority level is assigned to the processor; it can be changed under program control.
 Priority level of a processor is the priority of the program
that is currently being executed.
 When the processor starts executing the interrupt service
routine of a device, its priority is raised to that of the device.
 If the device sending an interrupt request has a higher
priority than the processor, the processor accepts the
interrupt request.
 Processor’s priority is encoded in a few bits of the processor
status register.
 Priority can be changed by instructions that write into the
processor status register.
 Usually, these are privileged instructions, or instructions
that can be executed only in the supervisor mode.
 Privileged instructions cannot be executed in the user mode.
 Prevents a user program from accidentally or intentionally
changing the priority of the processor.
 If there is an attempt to execute a privileged instruction in the user mode, it causes a special type of interrupt called a privilege exception.
Priority arbitration
[Figure: devices 1 through p, each with its own interrupt-request (INTR1-INTRp) and interrupt-acknowledge (INTA1-INTAp) line to the processor]
Which interrupt request does the processor accept if it receives interrupt
requests from two or more devices simultaneously?
If the I/O devices are organized in a priority structure, the processor
accepts the interrupt request from a device with higher priority.
Each device has its own interrupt request and interrupt acknowledge line.
A different priority level is assigned to the interrupt request line of each
device.
However, if the devices share an interrupt request line, then how does the
processor decide which interrupt request to accept?
[Figure: devices 1 through n sharing a single INTR line, with the INTA line daisy-chained from the processor through device 1, device 2, ..., device n]
Polling scheme:
•The processor uses a polling mechanism to poll the status registers of the I/O devices to determine which device is requesting an interrupt.
•In this case the priority is determined by the order in which the devices are polled.
•The first device with status bit set to 1 is the device whose interrupt request is
accepted.
Daisy chain scheme:
•Devices are connected to form a daisy chain.
•Devices share the interrupt-request line, and interrupt-acknowledge line is connected
to form a daisy chain.
•When a device raises an interrupt request, the interrupt-request line is activated.
•The processor, in response, activates the interrupt-acknowledge line.
•The acknowledge is received first by device 1; if device 1 does not need service, it passes the signal on to device 2, and so on.
•Device that is electrically closest to the processor has the highest priority.
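The grant propagation can be modelled in software as a loop over the devices in chain order, as in the sketch below; the array model and the request flags are illustrative assumptions, with index 0 standing for the device electrically closest to the processor.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_DEVICES 4
static bool requesting[NUM_DEVICES];   /* set when a device needs service */

/* Model of the daisy-chained interrupt-acknowledge: the grant enters
 * device 0 and is passed along until a requesting device absorbs it.
 * Returns the index of the device that wins, or -1 if none requested. */
int propagate_acknowledge(void)
{
    for (size_t i = 0; i < NUM_DEVICES; i++) {
        if (requesting[i]) {
            requesting[i] = false;   /* device absorbs INTA and identifies itself */
            return (int)i;
        }
        /* otherwise the device passes INTA on to its neighbour */
    }
    return -1;
}
```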
•When I/O devices were organized into a priority structure, each device had its own
interrupt-request and interrupt-acknowledge line.
•When I/O devices were organized in a daisy chain fashion, the devices shared an interrupt-request line, and the interrupt-acknowledge signal propagated through the devices.
•A combination of the priority structure and the daisy chain scheme can also be used.
[Figure: devices organized into priority groups — each group has its own INTR/INTA pair (INTR1/INTA1 through INTRp/INTAp) connected to a priority arbitration circuit, and the devices within a group are daisy-chained]
•Devices are organized into groups.
•Each group is assigned a different priority level.
•All the devices within a single group share an interrupt-request line, and are
connected to form a daisy chain.
 Only those devices that are being used in a program should be
allowed to generate interrupt requests.
 To control which devices are allowed to generate interrupt
requests, the interface circuit of each I/O device has an
interrupt-enable bit.
 If the interrupt-enable bit in the device interface is set to 1,
then the device is allowed to generate an interrupt-request.
 Interrupt-enable bit in the device’s interface circuit determines
whether the device is allowed to generate an interrupt request.
 Interrupt-enable bit in the processor status register or the
priority structure of the interrupts determines whether a given
interrupt will be accepted.
A special control unit may be provided to transfer a block of data
directly between an I/O device and the main memory, without
continuous intervention by the processor.
Control unit which performs these transfers is a part of the I/O device's interface circuit. This control unit is called a DMA controller.
DMA controller performs functions that would normally be carried out by the processor:
For each word, it provides the memory address and all the
control signals.
To transfer a block of data, it increments the memory addresses
and keeps track of the number of transfers.
DMA controller can transfer a block of data between an external device and the main memory without any intervention from the processor.
 However, the operation of the DMA controller must be
under the control of a program executed by the processor.
That is, the processor must initiate the DMA transfer.
To initiate the DMA transfer, the processor provides the DMA controller with:
 Starting address,
 Number of words in the block.
 Direction of transfer (I/O device to the memory, or memory
to the I/O device).
Once the DMA controller completes the DMA transfer, it
informs the processor by raising an interrupt signal.
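A hypothetical programming sequence for such a controller is sketched below; the register-block layout, the addresses, and the control-bit meanings are assumptions made for illustration.

```c
#include <stdint.h>

/* Hypothetical DMA-controller register block. */
struct dma_regs {
    volatile uint32_t start_addr;   /* starting memory address         */
    volatile uint32_t word_count;   /* number of words in the block    */
    volatile uint32_t control;      /* direction, interrupt-enable, go */
    volatile uint32_t status;       /* done / error flags              */
};

#define DMA           ((struct dma_regs *)0x8000)
#define DMA_DIR_READ  0x1u   /* assumed: I/O device -> memory         */
#define DMA_IE        0x2u   /* assumed: raise an interrupt when done */
#define DMA_GO        0x4u   /* assumed: start the transfer           */

/* The processor initiates the transfer and then continues with other work;
 * the controller raises an interrupt when the whole block has been moved. */
void start_dma_read(uint32_t mem_addr, uint32_t nwords)
{
    DMA->start_addr = mem_addr;
    DMA->word_count = nwords;
    DMA->control    = DMA_DIR_READ | DMA_IE | DMA_GO;
}
```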
[Figure: system bus connecting the processor, main memory, a keyboard, a printer, a disk/DMA controller with two disks, and a network interface with its own DMA controller]
•A DMA controller in the network interface connects a high-speed network to the computer bus.
•The disk controller, which controls two disks, also has DMA capability. It provides two DMA channels.
•It can perform two independent DMA operations, as if each disk had its own DMA controller. The registers that store the memory address, word count, and status and control information are duplicated.
 Processor and DMA controllers have to use the bus in an interwoven
fashion to access the memory.
 DMA devices are given higher priority than the processor to
access the bus.
 Among different DMA devices, high priority is given to high-
speed peripherals such as a disk or a graphics display device.
 Processor originates most memory access cycles on the bus.
 DMA controller can be said to “steal” memory access cycles from the processor. This interweaving technique is called “cycle stealing”.
 An alternate approach is to give the DMA controller exclusive capability to initiate transfers on the bus, and hence exclusive access to the main memory. This is known as the block or burst mode.
Bus arbitration
 Processor and DMA controllers both need to initiate data transfers on
the bus and access main memory.
 The device that is allowed to initiate transfers on the bus at any given
time is called the bus master.
 When the current bus master relinquishes its status as the bus master,
another device can acquire this status.
 The process by which the next device to become the bus master is
selected and bus mastership is transferred to it is called bus
arbitration.
 Centralized arbitration:
 A single bus arbiter performs the arbitration.
 Distributed arbitration:
 All devices participate in the selection of the next bus master.
Centralized Bus Arbitration
[Figure: processor and DMA controllers 1 and 2 on the bus, sharing a Bus Request (BR) line and a Bus Busy (BBSY) line, with the bus grant daisy-chained from the processor as BG1 and BG2]
• Bus arbiter may be the processor or a separate unit connected to
the bus.
• Normally, the processor is the bus master, unless it grants bus mastership to one of the DMA controllers.
• DMA controller requests the control of the bus by asserting the
Bus Request (BR) line.
• In response, the processor activates the Bus-Grant1 (BG1) line,
indicating that the controller may use the bus when it is free.
• BG1 signal is connected to all DMA controllers in a daisy chain
fashion.
• When the BBSY signal is 0, it indicates that the bus is busy. When BBSY becomes 1, the DMA controller that asserted BR can acquire control of the bus.
[Figure: bus-arbitration timing — DMA controller 2 asserts the BR signal; the processor asserts the BG1 signal; BG1 propagates through the daisy chain to DMA controller 2; the processor relinquishes control of the bus by setting BBSY to 1; bus mastership passes from the processor to DMA controller 2 and then back to the processor]
Distributed Arbitration
 All devices waiting to use the bus share the responsibility of carrying out the arbitration process.
 Arbitration process does not depend on a central arbiter and
hence distributed arbitration has higher reliability.
 Each device is assigned a 4-bit ID number.
 All the devices are connected using 5 lines, 4 arbitration lines
to transmit the ID, and one line for the Start-Arbitration signal.
 To request the bus, a device:
 Asserts the Start-Arbitration signal.
 Places its 4-bit ID number on the arbitration lines.
 The pattern that appears on the arbitration lines is the logical-
OR of all the 4-bit device IDs placed on the arbitration lines.
 Arbitration process:
 Each device compares the pattern that appears on the
arbitration lines to its own ID, starting with MSB.
 If it detects a difference, it transmits 0s on the arbitration
lines for that and all lower bit positions.
 The pattern that appears on the arbitration lines is the
logical-OR of all the 4-bit device IDs placed on the
arbitration lines.
•Device A has the ID 5 and wants to request the bus:
- Transmits the pattern 0101 on the arbitration lines.
•Device B has the ID 6 and wants to request the bus:
- Transmits the pattern 0110 on the arbitration lines.
•Pattern that appears on the arbitration lines is the logical OR of the patterns:
- Pattern 0111 appears on the arbitration lines.
Arbitration process:
•Each device compares the pattern that appears on the arbitration lines to its own
ID, starting with MSB.
•If it detects a difference, it transmits 0s on the arbitration lines for that and all lower bit positions.
•Device A compares its ID pattern 0101 to the pattern 0111 on the arbitration lines.
•It detects a difference at bit position 1; as a result, it transmits the pattern 0100 on the arbitration lines.
•The pattern that now appears on the arbitration lines is the logical-OR of 0100 and 0110, which is 0110.
•This pattern is the same as the device ID of B, and hence B has won the arbitration.
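The arbitration rule can be simulated as below; the iteration is an illustrative model of the open-collector lines settling, not a description of real hardware timing. Running it with IDs 5 and 6 reproduces the example: the lines settle to 0110, so device B wins.

```c
#include <stdint.h>
#include <stddef.h>

/* Software model of the distributed-arbitration rule described above.
 * Each device drives its 4-bit ID onto open-collector arbitration lines,
 * which carry the logical OR of everything driven.  A device that sees a 1
 * on a line where its own ID has a 0 stops driving that bit and all lower
 * bits. */
uint8_t arbitrate(const uint8_t ids[], size_t n)
{
    uint8_t lines = 0;
    for (size_t i = 0; i < n; i++)
        lines |= ids[i] & 0x0Fu;              /* initially everyone drives its ID */

    for (int pass = 0; pass < 8; pass++) {    /* a few passes suffice to settle */
        uint8_t next = 0;
        for (size_t i = 0; i < n; i++) {
            uint8_t drive = ids[i] & 0x0Fu;
            for (int bit = 3; bit >= 0; bit--) {        /* compare from the MSB */
                if (((lines >> bit) & 1u) && !((drive >> bit) & 1u)) {
                    /* higher-priority competitor: withdraw from this bit
                     * position and all lower ones */
                    drive &= (uint8_t)~((1u << (bit + 1)) - 1u);
                    break;
                }
            }
            next |= drive;                    /* wired-OR of what is still driven */
        }
        if (next == lines)
            break;                            /* lines have settled */
        lines = next;
    }
    return lines;   /* settles to the highest competing ID, e.g. 5 and 6 -> 6 */
}
```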
Internal organization of memory chips
 Each memory cell can hold one bit of information.
 Memory cells are organized in the form of an array.
 One row is one memory word.
 All cells of a row are connected to a common line,
known as the “word line”.
 Word line is connected to the address decoder.
 Sense/Write circuits are connected to the data
input/output lines of the memory chip.
[Figure: internal organization of a 16 x 8 memory chip — address lines A0-A3 feed the address decoder, which selects one of the word lines W0-W15; each word line drives a row of memory cells (flip-flops); Sense/Write circuits connect the cell columns to data input/output lines b7-b0 under control of the R/W and CS (chip select) signals]
SRAM Cell
 Two transistor inverters are cross-connected to implement a basic flip-flop.
 The cell is connected to one word line and two bit lines by transistors T1 and T2.
 When the word line is at ground level, the transistors are turned off and the latch retains its state.
 Read operation: in order to read the state of the SRAM cell, the word line is activated to close switches T1 and T2. The Sense/Write circuits at the ends of the bit lines monitor the state of b and b'.
[Figure: SRAM cell — two inverters cross-coupled at nodes X and Y, connected to bit lines b and b' through transistors T1 and T2, which are controlled by the word line]
Speed, Size, and Cost
 A big challenge in the design of a computer system is to provide a sufficiently
large memory, with a reasonable speed at an affordable cost.
 Static RAM:
 Very fast, but expensive, because a basic SRAM cell has a complex circuit
making it impossible to pack a large number of cells onto a single chip.
 Dynamic RAM:
 Simpler basic cell circuit, hence are much less expensive, but significantly
slower than SRAMs.
 Magnetic disks:
 Storage provided by DRAMs is higher than that of SRAMs, but is still less than what is necessary.
 Secondary storage such as magnetic disks provides a large amount of storage, but is much slower than DRAMs.
Memory Hierarchy
[Figure: memory hierarchy — processor registers, primary (L1) cache, secondary (L2) cache, main memory, and magnetic-disk secondary memory; moving down the hierarchy, size increases while speed and cost per bit decrease]
•Fastest access is to the data held in
processor registers. Registers are at
the top of the memory hierarchy.
•Relatively small amount of memory that
can be implemented on the processor
chip. This is processor cache.
•Two levels of cache. Level 1 (L1) cache
is on the processor chip. Level 2 (L2)
cache is in between main memory and
processor.
•Next level is main memory, implemented
as SIMMs. Much larger, but much slower
than cache memory.
•Next level is magnetic disks. Huge amount of inexpensive storage.
•Since the speed of memory access is critical, the idea is to bring instructions and data that will be used in the near future as close to the processor as possible.
Cache memories
• When the processor issues a Read request, a block of words is transferred from the main memory to the cache, one word at a time.
• Subsequent references to the data in this block of words are found in the
cache.
• At any given time, only some blocks in the main memory are held in the
cache. Which blocks in the main memory are in the cache is determined
by a “mapping function”.
• When the cache is full, and a block of words needs to be transferred
from the main memory, some block of words in the cache must be
replaced. This is determined by a “replacement algorithm”.
[Figure: processor – cache – main memory]
Mapping functions
 Mapping functions determine how memory blocks are
placed in the cache.
 A simple processor example:
 Cache consisting of 128 blocks of 16 words each.
 Total size of cache is 2048 (2K) words.
 Main memory is addressable by a 16-bit address.
 Main memory has 64K words.
 Main memory has 4K blocks of 16 words each.
 Three mapping functions:
 Direct mapping
 Associative mapping
 Set-associative mapping.
[Figure: direct mapping — main memory blocks 0-4095 map onto cache blocks 0-127; the 16-bit main memory address is divided into a 5-bit tag, a 7-bit block field, and a 4-bit word field]
•Block j of the main memory maps to j modulo 128
of the cache. 0 maps to 0, 129 maps to 1.
•More than one memory block is mapped onto the
same position in the cache.
•May lead to contention for cache blocks even if the
cache is not full.
•The contention is resolved by allowing the new block to replace the old block, leading to a trivial replacement algorithm.
•Memory address is divided into three fields:
- Low order 4 bits determine one of the 16
words in a block.
- When a new block is brought into the cache,
the next 7 bits determine which cache
block this new block is placed in.
- High order 5 bits determine which of the
possible 32 blocks is currently present in the cache.
These are tag bits.
•Simple to implement but not very flexible.
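A small C sketch of the 5/7/4-bit address split used in this direct-mapped example follows; the struct and function names are arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* 16-bit address = 5-bit tag | 7-bit block | 4-bit word. */
struct direct_mapped_addr {
    unsigned tag;     /* which of the 32 memory blocks sharing this cache block */
    unsigned block;   /* cache block index, 0..127                              */
    unsigned word;    /* word within the 16-word block, 0..15                   */
};

static struct direct_mapped_addr split_direct(uint16_t addr)
{
    struct direct_mapped_addr a;
    a.word  =  addr        & 0x0Fu;   /* low-order 4 bits  */
    a.block = (addr >> 4)  & 0x7Fu;   /* next 7 bits       */
    a.tag   = (addr >> 11) & 0x1Fu;   /* high-order 5 bits */
    return a;
}

int main(void)
{
    /* Memory block 129 starts at word address 129 * 16 = 2064; it maps to
     * cache block 129 mod 128 = 1, with tag 1. */
    struct direct_mapped_addr a = split_direct(2064);
    printf("tag=%u block=%u word=%u\n", a.tag, a.block, a.word);   /* 1 1 0 */
    return 0;
}
```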
•Main memory block can be placed into
any cache
position.
•Memory address is divided into two
fields:
- Low order 4 bits identify the word
within a block.
- High order 12 bits or tag bits identify
a memory block when it is resident in the
cache.
•Flexible, and uses cache space
efficiently.
•Replacement algorithms can be used to
replace an existing block in the cache
when the cache is full.
•Cost is higher than for a direct-mapped cache because of the need to search all 128 tag patterns to determine whether a given block is in the cache.
[Figure: associative mapping — any of main memory blocks 0-4095 can be placed in any of cache blocks 0-127; the 16-bit main memory address is divided into a 12-bit tag and a 4-bit word field]
Blocks of cache are grouped into sets.
Mapping function allows a block of the main
memory to reside in any block of a specific set.
Divide the cache into 64 sets, with two blocks per
set.
Memory blocks 0, 64, 128, etc. map to set 0, and they can occupy either of the two positions within that set.
Memory address is divided into three fields:
- 6-bit field determines the set number.
- High-order 6-bit tag field is compared to the tag fields of the two blocks in the set.
Set-associative mapping is a combination of direct and associative mapping.
Number of blocks per set is a design parameter.
- One extreme is to have all the blocks in one
set, requiring no set bits (fully associative
mapping).
- The other extreme is to have one block per set, which is the same as direct mapping.
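Assuming the 6-bit tag, 6-bit set, 4-bit word split described above for the two-way case, a lookup could be sketched as follows; the structure and function names are illustrative, and valid bits are omitted for brevity.

```c
#include <stdint.h>

/* 16-bit address = 6-bit tag | 6-bit set | 4-bit word. */
unsigned sa_word(uint16_t addr) { return  addr        & 0x0Fu; }
unsigned sa_set (uint16_t addr) { return (addr >> 4)  & 0x3Fu; }
unsigned sa_tag (uint16_t addr) { return (addr >> 10) & 0x3Fu; }

/* A lookup compares the address tag with the tags of the two blocks in
 * the selected set. */
struct cache_set { unsigned tag[2]; };

int sa_hit(const struct cache_set sets[64], uint16_t addr)
{
    const struct cache_set *s = &sets[sa_set(addr)];
    unsigned t = sa_tag(addr);
    return (s->tag[0] == t) || (s->tag[1] == t);
}
```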
[Figure: set-associative mapping — main memory blocks 0-4095 map onto a cache of 64 two-block sets (cache blocks 0-127); the main memory address is divided into tag, set, and word fields]
Editor's Notes
  • #14: We have seen how one device can alert the processor using an interrupt to indicate that it is ready for data transfer. Repeat the operation of the interrupt. Usually, there are multiple devices that may be connected to the processor and the memory via a bus. When any one of these devices become ready for data transfer they can generate interrupt requests. Now, each device is capable of generating its own interrupt request, and each one of these devices operate independently. That is, they do not consider what the other devices are doing before generating an interrupt request. As a result, there can be no definite order imposed on how the devices generate interrupt requests. When multiple devices are connected to the processor and the memory as is usually the case, and some or all of these devices are capable of generating interrupt requests, there are several questions that need to be answered. How does the processor know which device has generated the interrupt? The next question is how does the processor know which ISR needs to be executed? The other question is that when the processor is servicing the ISR for one device, can it be interrupted by another device? If two or more interrupt requests are received simultaneously, then how does the processor break the tie between which ISR it is going to execute?
  • #16: Let us consider a simple arrangement where all devices send their interrupt requests using a single control line. Recall that one control line of the bus may be dedicated to send interrupt requests. When the processor receives an interrupt request over the this control, it needs to determine which device actually sent the interrupt. When a device sends an interrupt, it sets a bit in its status register. This bit is called as IRQ bit. So, when the processor receives an interrupt and branches to the ISR, it can poll the IRQ bit of the devices which are connected to the bus. Whenever it comes across an I/O device with an IRQ bit set to 1, then it concludes that that is the device which requested an interrupt. Polling mechanism is easy, but one again, any type of polling is expensive. Here polling is being carried out to query the status bits of all the I/O devices connected to the bus.
  • #17: Recall the cost of polling. An alternative approach is for the device that is requesting an interrupt to identify itself to the processor, avoiding the overhead of polling. The device identifies itself by sending a special code to the processor over the bus; this code is typically 4 to 8 bits wide. The code may also represent part of the starting address of the ISR. The remainder of the starting address can be obtained in a variety of ways; for example, the ISRs may reside in a particular section of memory and hence use addresses in a known range. The code supplied by the device, combined with this other information, yields the actual starting address of the ISR.
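A hedged sketch of how the device-supplied code might be combined with other information to form the ISR starting address; the base address, entry spacing, and function name are assumptions, not a real processor's vectoring scheme.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative vectoring: the 4- to 8-bit code placed on the bus by the
 * device selects an entry in a known region that holds the ISRs. */
#define ISR_TABLE_BASE  0x0100u     /* assumed region holding ISR entry points */
#define ISR_ENTRY_SIZE  4u          /* assumed spacing between entry points    */

static uint16_t isr_start_address(uint8_t device_code) {
    /* code supplied by the device + known base = starting address of its ISR */
    return (uint16_t)(ISR_TABLE_BASE + (uint16_t)device_code * ISR_ENTRY_SIZE);
}

int main(void) {
    printf("ISR for device code 5 starts at 0x%04X\n", isr_start_address(5));
    return 0;
}
```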
  • #18: Recall that when the processor did not want to be interrupted by the same device while executing its ISR, it disabled interrupts at the beginning of the ISR and re-enabled them before returning. With multiple devices, the same arrangement is used: while the processor is executing the ISR of one device, it disables interrupts not only from that device but from all other devices as well, accepting no further interrupt requests. Since ISRs are usually short and take very little time to execute, the delay caused by not accepting interrupts from other devices while one ISR is being serviced is usually acceptable. For some time-critical devices, however, this delay may be unacceptable, which leads to the question of which devices should be allowed to interrupt the processor while it is executing the ISR of another device.
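A small sketch of this discipline, with placeholder enable/disable primitives standing in for the actual processor instructions that clear and set the interrupt-enable bit.

```c
/* Placeholder primitives: on a real processor these would clear and set the
 * interrupt-enable bit in the processor status register. */
static void disable_interrupts(void) { /* clear interrupt-enable bit (assumed) */ }
static void enable_interrupts(void)  { /* set interrupt-enable bit (assumed)   */ }

void device_isr(void) {
    disable_interrupts();   /* no further requests accepted from any device */
    /* ... transfer the data word and clear the device's IRQ bit ... */
    enable_interrupts();    /* done just before return-from-interrupt */
}
```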
  • #19: Recall that multiple I/O devices may be connected to the processor, and these devices may be organized according to a priority scheme. While the processor is servicing an interrupt from one device, only devices of higher priority are allowed to interrupt it; that is, only a higher-priority device can interrupt the processing of the ISR of a lower-priority device. To implement this scheme, a priority level is assigned to the processor itself. This level can be changed under program control and reflects which program the processor is currently executing: the priority of the processor is the priority of the program it is running. When the processor accepts an interrupt request from a device and starts executing that device's ISR, its priority is raised to that of the device. Another device is then allowed to interrupt the processor only if its own priority is higher than the processor's current priority, which is now the priority of the device whose ISR is being executed.
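A sketch of the acceptance rule described here, assuming a simple numeric priority for the processor and for each device; names and types are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative acceptance rule: a device may interrupt only if its priority
 * exceeds the processor's current priority; while a device's ISR runs, the
 * processor's priority is raised to that device's level. */
static uint8_t processor_priority = 0;        /* priority of the running program */

static bool accept_interrupt(uint8_t device_priority) {
    return device_priority > processor_priority;
}

static void enter_isr(uint8_t device_priority) {
    processor_priority = device_priority;     /* raise while servicing the ISR */
}

static void exit_isr(uint8_t saved_priority) {
    processor_priority = saved_priority;      /* restore on return-from-interrupt */
}
```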
  • #26: This alternative approach is called direct memory access (DMA). A special control unit is provided to transfer a block of data directly between an I/O device and the main memory without continuous intervention by the processor. This control unit is part of the I/O device's interface circuit and is called the DMA controller. The DMA controller performs functions that would normally be performed by the processor: the processor would ordinarily supply the memory address and all the control signals, so the DMA controller must likewise supply the memory address where the data is to be stored, along with the necessary control signals. When a block of data is transferred, the DMA controller also has to increment the memory address and keep track of the number of words that have been transferred.
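A rough sketch of the per-word bookkeeping attributed to the DMA controller: it supplies the current address, then increments the address register and decrements the word count. The register names and the device-read helper are hypothetical.

```c
#include <stdint.h>

/* Illustrative per-word bookkeeping by the DMA controller: place the word at
 * the current memory address, then advance the address and count the word. */
typedef struct {
    uint32_t memory_address;     /* next main-memory location to write */
    uint32_t word_count;         /* words remaining in the block       */
} dma_registers_t;

extern uint16_t read_word_from_device(void);      /* hypothetical device read */

void dma_transfer_one_word(dma_registers_t *dma, uint16_t memory[]) {
    if (dma->word_count == 0)
        return;                                   /* block transfer complete  */
    memory[dma->memory_address] = read_word_from_device();
    dma->memory_address++;                        /* increment the address    */
    dma->word_count--;                            /* track words transferred  */
}
```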
  • #27: Recall the role of the DMA controller. It can be used to transfer a block of data between an external device and the main memory without requiring any help from the processor, so the processor is free to execute other programs. However, the DMA controller performs the task of transferring data to or from an I/O device on behalf of a program being executed by the processor; it does not and should not have the capability to decide on its own when a data transfer should take place. The processor must initiate the DMA transfer when the program it is executing requires one. When the processor determines that the running program requires a DMA transfer, it informs the DMA controller, which sits in the interface circuit of the device, of three things: the starting address of the memory locations, the number of words to be transferred, and the direction of transfer, that is, whether the data is to move from the I/O device to the memory or from the memory to the I/O device. After initiating the DMA transfer, the processor suspends the program that requested it and continues with the execution of some other program. The program whose execution is suspended is said to be in the blocked state.
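A sketch of the processor-side initiation step: the three items named above are written into assumed DMA-controller registers. The register layout and the direction encoding are illustrative.

```c
#include <stdint.h>

/* Illustrative processor-side initiation: the starting address, word count,
 * and direction are written into assumed DMA-controller registers. */
typedef struct {
    volatile uint32_t starting_address;   /* where the block begins in memory      */
    volatile uint32_t word_count;         /* number of words to transfer           */
    volatile uint32_t control;            /* bit 0 (assumed): 1 = device-to-memory */
} dma_channel_t;

void start_dma_transfer(dma_channel_t *ch, uint32_t start_addr,
                        uint32_t count, int device_to_memory) {
    ch->starting_address = start_addr;
    ch->word_count       = count;
    ch->control          = device_to_memory ? 0x1u : 0x0u;   /* set direction, start */
    /* The initiating program is now blocked; the processor executes other
     * programs until the controller signals completion (e.g. by interrupt). */
}
```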
  • #28: Let us consider a memory organization with two DMA controllers. In this organization, one DMA controller is used to connect a high-speed network to the computer bus. In addition, a disk controller that controls two disks also has DMA capability; it provides two DMA channels and can perform two independent DMA operations, as if each disk had its own DMA controller. Each DMA channel has three registers: one to store the memory address, one to store the word count, and one to store the status and control information. These three registers are duplicated, one copy per channel, so that the two DMA operations can proceed independently.
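A sketch of the duplicated register sets: the assumed layout below simply holds two independent copies of the address, word-count, and status/control registers, one per disk channel.

```c
#include <stdint.h>

/* Illustrative layout: the disk controller holds two independent copies of
 * the address / word-count / status-and-control register set, one per disk. */
typedef struct {
    uint32_t memory_address;      /* starting (and current) memory address */
    uint32_t word_count;          /* words remaining                       */
    uint32_t status_control;      /* status and control information        */
} dma_register_set_t;

typedef struct {
    dma_register_set_t channel[2];   /* duplicated register sets, one per disk */
} disk_controller_t;
```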
  • #29: The processor has to transfer data to and from the main memory, and the DMA controller is responsible for transferring data between the I/O device and the main memory, so both have to use the external bus to reach the main memory. Usually, DMA controllers are given higher priority than the processor for access to the bus. We also need to decide the relative priority among the different DMA devices that may need the bus; among these, higher priority is given to high-speed peripherals such as a disk or a graphics display. The processor originates most cycles on the bus, and the DMA controller can be said to steal memory access cycles from it; the processor and the DMA controller thus use the bus in an interwoven fashion, a technique called cycle stealing. An alternative is to give the DMA controller exclusive capability to initiate transfers on the bus, and hence exclusive access to the main memory, for the duration of a block transfer. This is known as the block mode or burst mode of operation.
  • #30: Both the processor and the DMA controllers need to initiate data transfers on the bus and access the main memory, but at any point in time only one device is allowed to initiate transfers on the bus. The device allowed to initiate transfers at a given time is called the bus master. When the current bus master releases control of the bus, another device can acquire bus mastership. How do we determine which device becomes the next bus master, given that there may be several DMA controllers plus the processor requiring access to the bus? The process by which the next bus master is selected and bus mastership is transferred to it is called bus arbitration. There are two types of bus arbitration: centralized and distributed. In centralized arbitration, a single bus arbiter performs the arbitration, whereas in distributed arbitration all devices that need to initiate data transfers on the bus participate in selecting the next bus master.
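A toy sketch of centralized arbitration: a single arbiter examines the pending bus-request lines and grants mastership to the highest-priority requester. The fixed-priority policy and the bit assignment are assumptions for illustration.

```c
#include <stdint.h>

/* Toy centralized arbiter: examine the pending request lines and grant the
 * bus to the highest-priority requester (here, the lowest-numbered device). */
static int select_next_bus_master(uint8_t request_lines) {
    for (int dev = 0; dev < 8; dev++) {
        if (request_lines & (1u << dev))
            return dev;               /* grant bus mastership to this device */
    }
    return -1;                        /* no device is requesting the bus */
}
```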