1
IT105
Operating Systems
Process Management: Processes
Harris Chikunya
2
Learning Outcomes
By the end of this Lecture, you should be able to:
• Explain the concepts of process, various states in the process and their scheduling;
• Define a process control block;
• Classify three types of schedulers;
• Explain five types of scheduling algorithms; and
• Compare the performance of the scheduling algorithms.
3
Concept of Process
• A process is a sequential program in execution
• It defines the fundamental unit of computation for the computer
• The components of a process are:
1. Object program – the code to be executed
2. Data – the data used by the program during execution
3. Resources – any resources the program requires while executing
4. Status – information used to track the progress of the process's execution.
4
Process vs Program
Process
• Is a dynamic entity, i.e. a program in execution
• Is a sequence of instruction executions
• Exists for a limited span of time
• Two or more processes could be executing the same program, each using its own data and
resources
Program
• Is a static entity made up of program statements
• Contains the instructions
• A program exists in a single place in space and continues to exist over time
• Does not perform any action by itself.
5
Process State
• As a process executes, it changes state
• The state of a process is defined by the current activity of that process
• The process may be in one of the following states
1. New – a process that has just been created
2. Ready – processes waiting to have the processor allocated to them by the OS so that they
can run
3. Running – process that is currently being executed
4. Waiting – a process that cannot execute until some event occurs such as the completion of
an I/O operation
5. Terminated – The process has finished execution
6
Process State Diagram
7
Process Control Block (PCB)
• Each process is represented in the OS by a PCB
• It is the data structure used by the OS to keep track of the process; it typically contains:
1. Process state – new, running, waiting etc
2. Program counter – address of next instruction to
be executed for this process
3. CPU registers – general-purpose registers, index registers, accumulators, etc.
4. Memory management information – includes such information as the
base and limit registers, page tables or segment tables in memory
5. I/O status information – I/O requests, I/O devices allocated, a list of
open files
6. Accounting information – CPU used, clock time elapsed since start,
time limits
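To make the PCB concrete, here is a minimal sketch of it as a Python dataclass; the field names are illustrative and not taken from any particular operating system:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal, illustrative Process Control Block."""
    pid: int
    state: str = "new"                 # new, ready, running, waiting, terminated
    program_counter: int = 0           # address of the next instruction to execute
    registers: dict = field(default_factory=dict)    # saved CPU registers
    base_register: int = 0             # memory-management information
    limit_register: int = 0
    open_files: list = field(default_factory=list)   # I/O status information
    cpu_time_used: float = 0.0         # accounting information

# Example: the OS creates a PCB when a process is admitted, then marks it ready
pcb = PCB(pid=42)
pcb.state = "ready"
```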
8
Threads
• Is a lightweight process
• Basic unit of CPU utilisation
• Consists of:
 Program counter – keeps track of which instruction to execute next
 Registers – hold its current working variables
 Stack – contains the execution history
• Thread States
1. Born – the thread has just been created
2. Ready – the thread is waiting for the CPU
3. Running – the system has assigned the processor to the thread
4. Sleeping – a sleeping thread becomes ready after the designated sleep time expires
5. Dead – the execution of the thread has finished
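This lifecycle can be illustrated with Python's threading module. A minimal sketch; the mapping to the states above is conceptual, since Python only exposes whether a thread has started or finished:

```python
import threading
import time

def worker():
    time.sleep(0.1)        # the thread sleeps, then becomes ready/running again
    print("worker finished")

t = threading.Thread(target=worker)  # born: the thread object has been created
t.start()                            # ready -> running: the system schedules it
t.join()                             # wait until its execution is finished
print(t.is_alive())                  # False: the thread is dead
```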
9
Process vs Thread
• Process
 Is heavyweight or resource-intensive
 Takes more time to create
 Execution is very slow
 Processes cannot share the same memory area
 Communication between two processes is difficult
• Thread
 Is lightweight, taking fewer resources
 Takes less time to create
 Execution is very fast
 Threads can share same memory area
 Communication between two threads is easy
10
Multithreading
• Is when a number of threads execute concurrently within a single process
• In a multithreaded program, even when some portion of it is blocked, the whole program is
not blocked
• The rest of the program continues running; with multiple CPUs the threads can even run in parallel
• Multithreading can therefore improve both responsiveness and performance
• Multithreading can simplify code and increase efficiency
• Kernels are generally multithreaded
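A small sketch of this idea in Python: while one thread is blocked (a sleep stands in for a blocking I/O call), another thread of the same process keeps running:

```python
import threading
import time

def blocking_io():
    time.sleep(1)                  # simulates a blocking I/O operation
    print("I/O finished")

def keep_working():
    for i in range(3):
        print("still computing:", i)
        time.sleep(0.2)

io_thread = threading.Thread(target=blocking_io)
cpu_thread = threading.Thread(target=keep_working)
io_thread.start()
cpu_thread.start()                 # runs even while the I/O thread is blocked
io_thread.join()
cpu_thread.join()
```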
11
Single vs Multithreaded process
• Code – contains the instructions
• Data – holds the global variables
• Files – the process's open files
• Registers – contain information about the CPU state
• Stack – parameters, local variables, function calls
12
Types of Threads
1. User Threads
 Thread creation, scheduling and management are done by a thread library at the
user level
 If a user thread performs a system call which blocks it, all the other threads in
that process are also blocked, so the whole process is blocked
• Advantages
 Thread switching does not require kernel mode privileges
 User level threads can run on any operating system
 User level threads are fast to create and manage
 Is generic and can run on any OS
• Disadvantages
 In a typical operating system, most system calls are blocking
 A multithreaded application cannot take advantage of multiprocessing
13
Types of Threads cont...
2. Kernel threads
 Are created, scheduled, managed by the kernel.
 If one thread in a process is blocked, the process need not be blocked.
• Advantages
 Kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another
thread of the same process.
 Kernel routines can be multithreaded
• Disadvantages
 Kernel threads are generally slower to create and manage than the user
threads.
 Is specific to the operating system
14
15
Multithreading models
• Categories of threading implementation
1. One to One
 One of the earliest implementations of true multithreading
 One user thread maps to one kernel thread
 Each user-level thread created by the application is known to the kernel and all
threads can access the kernel at the same time.
 The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread.
 OS/2, Windows NT and Windows 2000 use the one-to-one model
16
Multithreading models
2. Many to One Model
 Maps many user threads to one kernel thread
 Allows the application to create any number of threads that can execute concurrently.
 All thread activity is restricted to the user space
 However only one thread can access the kernel at a time, so multiple threads are unable to run
in parallel on multiprocessors
17
Multithreading models
3. Many to Many Model
 Maps many user-level threads to many kernel-level threads
 Also known as the two-level model; it minimizes programming effort while reducing the cost and
weight of each thread
 A program can have as many threads as are appropriate without making the process
too heavy.
 This implementation provides a standard interface, a simpler programming model, and
optimal performance for each process
18
Process Scheduling
• Refers to a set of policies and mechanisms supported by the operating
system that control the order in which jobs are completed.
• A scheduler is an operating system program (module) that selects the
next job to be admitted for execution.
• In this section we will describe:
 Scheduling objectives
 Types of schedulers
 Various scheduling algorithms
19
Scheduling Objectives
1. Maximise throughput – service the largest possible number of processes per unit
time
2. Minimise overhead – scheduling should minimise wasted resources
3. Balance resource use – should keep the resources of the system busy
4. Maximise interactive users – maximise the number of interactive users receiving
acceptable response times
5. Enforce priorities – the scheduling mechanism should favour the higher-priority
processes
6. Avoid indefinite postponement – all processes should be treated the same and no
process should suffer indefinite postponement
20
Types of Schedulers
• Long-term Scheduler
 Also called the job scheduler
 Selects processes from the job queue and loads them into memory for execution
 It performs the change of state from new to ready
 Its main objective is to provide a balanced mix of jobs, such as I/O-bound and
processor-bound jobs
• Short-term Scheduler
 Also called the CPU scheduler/dispatcher
 Selects from among the processes that are ready to execute and allocates the CPU to
one of them
 It is the change of state from ready to running for the process
 Faster than long term scheduler
21
Types of Schedulers cont…
• Medium-term Scheduler
 Is part of the swapping function, i.e. it is in charge of handling swapped-out
processes
 It removes processes from memory, e.g. suspended processes may be moved to
secondary storage (swapping) to make space for other processes
 It reduces the degree of multiprogramming
22
23
Scheduling Algorithms
Scheduling Criteria
1. CPU Utilization – the CPU is a costly device; it must be kept as busy as possible
2. Throughput – refers to the number of processes completed in unit time
3. Turnaround time – the time interval between the submission of the process and its time of
completion
4. Waiting time – amount of time a process has been waiting in the ready queue
5. Response time – time duration between the submission and first response
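These criteria can be expressed as simple formulas. A minimal sketch, assuming the arrival, burst, first-response and completion times of a process are known:

```python
def metrics(arrival, burst, first_response, completion):
    """Per-process scheduling metrics (all times in ms)."""
    turnaround = completion - arrival       # submission to completion
    waiting = turnaround - burst            # time spent in the ready queue
    response = first_response - arrival     # submission to first response
    return turnaround, waiting, response

# Example: a process arriving at 2 ms with a 3 ms burst,
# first dispatched at 3 ms and completing at 6 ms
print(metrics(arrival=2, burst=3, first_response=3, completion=6))  # (4, 1, 1)
```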
24
Scheduling Algorithms
• A major division among scheduling algorithms is whether they support
pre-emptive or non-pre-emptive scheduling discipline:
a) Pre-emptive scheduling
 A scheduler may pre-empt a low-priority running process at any time when a
high-priority process enters the ready state
 Pre-emptive scheduling is most useful for high-priority processes which
require an immediate response, e.g. in real-time systems
 Examples: Round Robin scheduling, priority-based or event-driven
scheduling, and SRT
b) Non pre-emptive scheduling
 Once a process enters the running state, it cannot be pre-empted until it
completes its allocated time
 A scheduled job always runs to completion.
 Examples: First-Come First-Served (FCFS) and Shortest Job First (SJF)
25
Scheduling Algorithms
1. First-Come First-Serve (FCFS)
2. Shortest-Job First (SJF)
3. Round Robin (RR)
4. Shortest Remaining Time Next (SRTN)
5. Priority Based Scheduling or Event Driven (ED) Scheduling
6. Multilevel Queue Scheduling (Research)
7. Multilevel Feedback Queue Scheduling (Research)
26
First-Come First-Serve (FCFS)
• Jobs are scheduled in the order they are received
• Is non-pre-emptive
• Easy to understand and implement
• Its implementation is based on FIFO queue
• Poor in performance as average wait time is high
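A minimal FCFS simulation sketch; the process names and times below are illustrative, not taken from the exercise that follows:

```python
def fcfs(processes):
    """FCFS: processes is a list of (name, arrival, burst) tuples."""
    schedule, time = [], 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)   # CPU may sit idle until the process arrives
        time = start + burst         # runs to completion (non-pre-emptive)
        schedule.append((name, start, time))
    return schedule

# Jobs are served strictly in arrival order
print(fcfs([("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 1)]))
# [('P1', 0, 3), ('P2', 3, 6), ('P3', 6, 7)]
```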
27
Exercise
• Calculate the turnaround time, waiting time, average turnaround time, average
waiting time, throughput and processor utilisation for the given set of processes
that arrive at the arrival times shown in the table, with the length of
processing time given in milliseconds:
• If the processes arrive as per the arrival times, the Gantt chart will be:
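Using the arrival and burst times given for this exercise in the editor's notes at the end of this deck (P1: arrives at 0, runs 3 ms; P2: 2, 3 ms; P3: 3, 1 ms; P4: 5, 4 ms; P5: 8, 2 ms), the FCFS Gantt chart is | P1 0–3 | P2 3–6 | P3 6–7 | P4 7–11 | P5 11–13 |, giving an average turnaround time of 4.4 ms, an average waiting time of 1.8 ms, a throughput of 5/13 ≈ 0.38 processes per ms and 100% processor utilisation.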
28
Shortest Job First (SJF)
• Also known as shortest job next
• This is a non-pre-emptive type of scheduling
• Best approach to minimize waiting time
• The scheduler must know in advance how much time the process will take
• Easy to implement in Batch systems where required CPU time is known in
advance
• Impossible to implement in interactive systems where required CPU time is
not known
29
Exercise
• Consider the following set of processes with the given processing times, all of which arrived
at the same time.
• Using SJF scheduling, the process with the shortest processing time executes first; the Gantt chart
will be:
Process    Processing Time
P1         6
P2         8
P3         7
P4         3
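A minimal non-pre-emptive SJF sketch applied to this table (all four processes are assumed to arrive at time 0, as stated above):

```python
def sjf(processes):
    """Non-pre-emptive SJF; processes is a list of (name, burst) tuples,
    all assumed to arrive at time 0."""
    schedule, time = [], 0
    for name, burst in sorted(processes, key=lambda p: p[1]):  # shortest first
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

print(sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]))
# [('P4', 0, 3), ('P1', 3, 9), ('P3', 9, 16), ('P2', 16, 24)]
# average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms
```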
30
Round Robin (RR)
• Is a pre-emptive process scheduling algorithm
• Each process is provided a fixed time to execute known as a Quantum
• Once a process is executed for a given time period, it is pre-empted
and other process executes for a given time period
• No process can run for more than one quantum while others are waiting
in the ready queue
• Primarily used in time-sharing and multiuser environments
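A minimal round-robin sketch using a FIFO ready queue; the quantum and process times below are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round robin; processes is a list of (name, burst) tuples arriving at time 0."""
    queue = deque(processes)
    schedule, time = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one quantum
        schedule.append((name, time, time + run))
        time += run
        if remaining > run:                  # pre-empted: back of the ready queue
            queue.append((name, remaining - run))
    return schedule

print(round_robin([("P1", 6), ("P2", 3), ("P3", 5)], quantum=4))
# [('P1', 0, 4), ('P2', 4, 7), ('P3', 7, 11), ('P1', 11, 13), ('P3', 13, 14)]
```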
31
Exercise
• Consider the following Scenario
• Quantum (time slice) = 4
32
Shortest Remaining Time Next (SRTN)
• This is the pre-emptive version of shortest job first
• This permits a process that enters the ready list to pre-empt the running process, if
its processing time is less than the remaining time of the running process.
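A minimal SRTN sketch that re-evaluates the shortest remaining time at every time unit; the process data is illustrative:

```python
def srtn(processes):
    """Pre-emptive SJF: each time unit, run the arrived process with the
    shortest remaining time. processes is a list of (name, arrival, burst)."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    time, timeline = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining if arrival[n] <= time and remaining[n] > 0]
        if not ready:                        # CPU idle until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        timeline.append(current)
        time += 1
    return timeline

# P2 arrives at time 1 needing 2 units and pre-empts P1, which still needs 4
print(srtn([("P1", 0, 5), ("P2", 1, 2)]))
# ['P1', 'P2', 'P2', 'P1', 'P1', 'P1', 'P1']
```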
33
Priority Scheduling
• A priority is associated with each process, and the CPU is allocated to the
process with the highest priority.
• Equal priority processes are scheduled in FCFS order.
• Is a non-pre-emptive algorithm
• Priorities can be assigned either internally or externally.
• Internal priorities are assigned by the OS using criteria such as average burst
time, ratio of CPU to I/O activity, system resource use, and other factors
available to the kernel.
• External priorities are assigned by users, based on the importance of the job,
fees paid, politics, etc.
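A minimal non-pre-emptive priority scheduling sketch; the process data is illustrative, and the convention that a smaller number means a higher priority is only an assumption (as the editor's note for this slide points out, there is no universal convention):

```python
def priority_schedule(processes):
    """Non-pre-emptive priority scheduling over (name, burst, priority) tuples,
    all assumed to arrive at time 0; a lower number means a higher priority here."""
    schedule, time = [], 0
    # sorted() is stable, so equal-priority processes keep FCFS (input) order
    for name, burst, prio in sorted(processes, key=lambda p: p[2]):
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

print(priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 5, 1)]))
# [('P2', 0, 1), ('P4', 1, 6), ('P1', 6, 16), ('P3', 16, 18)]
```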
34
Exercise
• P2 has the highest priority.
35
End

Editor's Notes

  • #8: Threads are a popular way to improve application performance through parallelism. E.g. in a word processor, typing, formatting and spell checking can run as separate threads.
  • #15: Most multithreading models fall into one of the following categories.
  • #18: One of the goals of a multiprogrammed OS is to maximise CPU usage. This is also a goal of time sharing, where processes are quickly switched onto the CPU.
  • #19: The primary objective of scheduling is to improve performance
  • #21: Compare and contrast the different types of Schedulers
  • #23: Many criteria have been suggested for comparing CPU Scheduling algorithms
  • #24: CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated to the CPU
  • #27: The processes arrive in the order P1, P2, P3, P4, P5: P1 at 0 ms, P2 at 2 ms, P3 at 3 ms, P4 at 5 ms, P5 at 8 ms.
    Turnaround Time = Waiting Time + Burst Time (processing time): P1 = 0+3 = 3; P2 = 1+3 = 4; P3 = 3+1 = 4; P4 = 2+4 = 6; P5 = 3+2 = 5. Average turnaround time = (3+4+4+6+5)/5 = 22/5 = 4.4 ms.
    Waiting Time = Turnaround Time – Processing Time: P1 = 3–3 = 0; P2 = 4–3 = 1; P3 = 4–1 = 3; P4 = 6–4 = 2; P5 = 5–2 = 3. Average waiting time = (0+1+3+2+3)/5 = 9/5 = 1.8 ms.
    Response Time = First Response – Arrival Time: P1 = 0; P2 = 3–2 = 1; P3 = 6–3 = 3; P4 = 7–5 = 2; P5 = 11–8 = 3. Average response time = (0+1+3+2+3)/5 = 9/5 = 1.8 ms.
    Throughput = (number of processes completed)/(unit time) = 5/13 ≈ 0.38 processes/ms.
    Processor utilisation = (processor busy time)/(processor busy time + processor idle time) = 13/(13+0) = 1, i.e. 100%.
    Advantages of FCFS: simple and easy to implement.
  • #29: Waiting Time = Starting Time – Arrival Time: P1 = 3–0 = 3; P2 = 16–0 = 16; P3 = 9–0 = 9; P4 = 0–0 = 0. Average waiting time = (3+16+9+0)/4 = 28/4 = 7 ms.
    Turnaround Time = Waiting Time + Burst Time (processing time): P1 = 3+6 = 9; P2 = 16+8 = 24; P3 = 9+7 = 16; P4 = 0+3 = 3. Average turnaround time = (9+24+16+3)/4 = 52/4 = 13 ms.
    Response Time = First Response – Arrival Time: P1 = 3–0 = 3; P2 = 16–0 = 16; P3 = 9–0 = 9; P4 = 0–0 = 0. Average response time = (3+16+9+0)/4 = 28/4 = 7 ms.
    Throughput = 4/24 ≈ 0.17 processes/ms; processor utilisation = 24/(24+0) = 100%.
  • #30: Quantum = Time Slice = 4 Average Waiting Time : Formula = Starting Time - Arrival Time waiting Time for P1 => 0-0 = 0 waiting Time for P2 => 16-0 = 16 waiting Time for P3 => 9-0 = 9 waiting Time for P4 => 0-0=0 Average waiting time => ( 3+16+9+0 )/4 => 28/4 =7 ms AVERAGE TURN AROUND TIME : FORMULA : Turn around time = waiting time + burst Time Turn around time for P1 => 14+30 =44 Turn around time for P2 => 15+6 = 21 Turn around time for P3 => 16+8 = 24 Average turn around time => ( 44+21+24 )/3 = 29.66 ms Average Response Time : Formula : First Response - Arrival Time First Response time for P1 =>3-0 = 3 First Response time for P2 => 16-0 = 16 First Response time for P3 => 9-0 = 9 First Response time for P4 => 0-0 = 0 Average Response Time => ( 3+16+9+0 )/4 => 28/7 = 7 ms Throughput Processor Utilisation
  • #31: Quantum = Time Slice = 4
  • #32: At time 0, only process P1 has entered the system, so it is the process that executes.
    At time 1, process P2 arrives. At that time, process P1 has 4 time units left to execute. P2's processing time is less than P1's remaining time (4 units), so P2 starts executing at time 1.
    At time 2, process P3 enters the system with a processing time of 5 units. Process P2 continues executing, as it has the minimum remaining time compared with P1 and P3.
    At time 3, process P2 terminates and process P4 enters the system. Of the processes P1, P3 and P4, P4 has the smallest remaining execution time, so it starts executing.
    When process P1 terminates at time 10, process P3 executes. The Gantt chart is shown.
  • #34: Note that in practice, priorities are implemented using integers within a fixed range, but there is no agreed-upon convention as to whether "high" priorities use large numbers or small numbers.