Process
A program under execution is called a process. When a program is loaded into main memory and starts executing, it becomes a process.
States of the process:
1. New: The process is being created.
2. Running: Instructions are being executed.
3. Waiting: The process is waiting for some event to occur (such as an I/O completion or the reception of a signal).
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
CPU Scheduling
CPU scheduling is the process of determining which process will own the CPU for execution while other processes are on hold.
CPU Scheduler
• The CPU scheduler selects a process from the ready queue in memory and allocates the CPU to it. CPU scheduling decisions occur when a process:
• Switches from the "Running" to the "Waiting" state.
• Switches from the "Running" to the "Ready" state.
• Switches from the "Waiting" to the "Ready" state.
• Terminates.
Types of CPU Scheduling Algorithms
1.First Come First Serve (FCFS)
2.Shortest-Job-First (SJF)
3.Shortest Remaining Time
4.Priority Scheduling
5.Round Robin Scheduling
6.Multilevel Queue Scheduling
1. Burst time (BT): The time required for the execution of the process.
2. Arrival time (AT): The time at which the process enters the ready queue.
3. Completion time/Finish time/Exit time (CT): The time at which a process finishes execution and is no longer being processed by the CPU. It is the sum of the arrival, waiting, and burst times.
4. Turnaround time (TAT): The time elapsed between the arrival of a process and its completion, i.e., how long the process takes to finish its execution and leave the system. Turnaround Time = Completion Time – Arrival Time = Waiting Time + Burst Time.
5. Waiting time (WT): The time spent by the process in the ready state, i.e., the difference between turnaround time and burst time.
6. Response time (RT): The time taken for a process to be allocated the CPU for the first time after entering the ready queue, i.e., the difference between the time the process first gets the CPU and its arrival time.
7. Throughput: A measure of processor efficiency; the number of processes completed by the CPU in a given amount of time.
TAT = (CT – AT) or (WT + BT)
WT = TAT – BT
RT = Time the process first gets the CPU – AT
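As a quick worked check of these formulas with hypothetical numbers: a process with AT = 2 and BT = 5 that first gets the CPU at time 6 and then runs without interruption completes at CT = 11, so TAT = 11 – 2 = 9, WT = 9 – 5 = 4, and RT = 6 – 2 = 4.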
CPU Scheduling Criteria
First Come First Serve (FCFS)
Problems with FCFS Scheduling
It is a non-preemptive algorithm, so process priority does not matter.
The average waiting time is not optimal.
Parallel utilization of resources is not possible, which leads to the convoy effect and hence poor resource (CPU, I/O, etc.) utilization.
What is the Convoy Effect?
The convoy effect is a situation where many processes that need a resource for only a short time are blocked by one process holding that resource for a long time.
This essentially leads to poor utilization of resources and hence poor performance, as the sketch below illustrates.
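The sketch below shows, under stated assumptions, how FCFS completion, turnaround, and waiting times could be computed: processes are served strictly in arrival order and each runs to completion. The process set is hypothetical and deliberately puts one long burst ahead of two short ones, so the output also illustrates the convoy effect.

```c
#include <stdio.h>

/* Hypothetical process set, already sorted by arrival time (FCFS order). */
struct proc { int pid, at, bt; };

int main(void) {
    struct proc p[] = { {1, 0, 24}, {2, 1, 3}, {3, 2, 3} };
    int n = sizeof p / sizeof p[0];
    int time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].at)          /* CPU idles until the process arrives */
            time = p[i].at;
        int ct  = time + p[i].bt;    /* completion time                     */
        int tat = ct - p[i].at;      /* turnaround time = CT - AT           */
        int wt  = tat - p[i].bt;     /* waiting time    = TAT - BT          */
        printf("P%d: CT=%d TAT=%d WT=%d\n", p[i].pid, ct, tat, wt);
        total_tat += tat;
        total_wt  += wt;
        time = ct;                   /* non-preemptive: run to completion   */
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```

With these numbers the short processes P2 and P3 wait 23 and 25 time units behind the long P1, which is exactly the convoy effect described above.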
Types of Priority Scheduling Algorithm
Priority scheduling can be of two types:
1. Preemptive Priority Scheduling: If a new process arriving in the ready queue has a higher priority than the currently running process, the CPU is preempted: the current process is stopped and the incoming higher-priority process gets the CPU for its execution.
2. Non-Preemptive Priority Scheduling: If a new process arrives with a higher priority than the currently running process, the incoming process is put at the head of the ready queue and is processed only after the current process finishes its execution.
Characteristics of Priority Scheduling
•Schedules processes on the basis of priority.
•Commonly used for batch processes.
•If two processes have the same priority, FCFS or Round Robin is used to choose between them.
•A number is given to each process to indicate its priority level.
•The lower the number assigned, the higher the priority of the process.
•If a higher-priority task arrives while a lower-priority task is executing, the higher-priority task replaces the lower-priority one, and the latter is put on hold until the higher-priority task completes execution.
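A minimal sketch of non-preemptive priority scheduling under the convention above (a lower number means a higher priority): at each decision point the scheduler picks the highest-priority process that has already arrived and runs it to completion. The process data and field names are hypothetical.

```c
#include <stdio.h>

struct proc { int pid, at, bt, prio, done; };

/* Pick the highest-priority (lowest number) process that has arrived
 * and is not yet finished; returns -1 if nothing is ready. */
static int pick_next(const struct proc *p, int n, int now) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (p[i].done || p[i].at > now) continue;
        if (best == -1 || p[i].prio < p[best].prio)
            best = i;
    }
    return best;
}

int main(void) {
    struct proc p[] = { {1, 0, 5, 3, 0}, {2, 1, 3, 1, 0}, {3, 2, 8, 2, 0} };
    int n = sizeof p / sizeof p[0], time = 0, finished = 0;

    while (finished < n) {
        int i = pick_next(p, n, time);
        if (i < 0) { time++; continue; }   /* CPU idle: nothing has arrived yet   */
        time += p[i].bt;                   /* non-preemptive: run the whole burst */
        p[i].done = 1;
        finished++;
        printf("P%d finishes at %d (TAT=%d)\n", p[i].pid, time, time - p[i].at);
    }
    return 0;
}
```

A preemptive variant would instead re-run pick_next after every time unit (or on every arrival) and switch to a newly arrived higher-priority process.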
Problem with the Priority Scheduling Algorithm
-> In the priority scheduling algorithm there is a chance of indefinite blocking, also called starvation.
-> A process is considered blocked when it is ready to run but has to wait for the CPU because some other process is currently running.
-> In priority scheduling, if new higher-priority processes keep arriving in the ready queue, then the lower-priority processes waiting in the ready queue may have to wait a very long time before getting the CPU for execution.
-> When the IBM 7094 at MIT was shut down in 1973, a low-priority process was found that had been submitted in 1967 and had not yet been run.
Using Aging Technique with Priority
Scheduling
To prevent starvation of any process, we can use the concept of aging, where we keep increasing the priority of a low-priority process based on its waiting time.
For example, suppose the aging factor is 0.5 per day of waiting and a process with priority number 20 (a comparatively low priority) enters the ready queue. After one day of waiting its priority number becomes 19.5, after two days 19, and so on (as sketched below).
In this way we can ensure that no process has to wait indefinitely to get CPU time.
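A tiny sketch of the aging rule just described, assuming an aging factor of 0.5 per day of waiting and the lower-number-is-higher-priority convention; the function and field names are purely illustrative.

```c
/* Effective priority number after aging: the longer a process waits in the
 * ready queue, the smaller (i.e. better) its priority number becomes.
 * Assumes an aging factor of 0.5 per day of waiting. */
double effective_priority(double base_priority, double days_waiting) {
    double p = base_priority - 0.5 * days_waiting;
    return (p < 0.0) ? 0.0 : p;   /* never go past the best possible level */
}
```

With a base priority number of 20, one day of waiting gives 19.5, two days give 19, and so on, until the process is eventually scheduled.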
Solve problem in class
Non-Preemptive Priority Scheduling
Preemptive Priority Scheduling
Multilevel Queue Scheduling Algorithm
A scheduling algorithm used in operating systems to manage processes efficiently by categorizing them into multiple queues based on their properties and priorities.
Advantages of Multilevel Queue Scheduling
1. You can apply different scheduling methods to distinct classes of processes.
2. It has low scheduling overhead.
Disadvantages
1.There is a risk of starvation for lower priority processes.
2.It is rigid in nature.
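A minimal sketch of the multilevel-queue idea, assuming three fixed queues served in strict priority order (the queue names, sizes, and process IDs are illustrative). Each queue could internally use its own policy, e.g. Round Robin for interactive processes and FCFS for batch processes; only the dispatch between queues is shown.

```c
#include <stdio.h>

/* Three fixed queues, highest priority first. */
enum { Q_SYSTEM, Q_INTERACTIVE, Q_BATCH, NQUEUES };

struct queue { int pids[8]; int head, tail; };

static int dequeue(struct queue *q) {
    return (q->head == q->tail) ? -1 : q->pids[q->head++];
}

/* Always serve the highest-priority non-empty queue; lower queues run only
 * when every queue above them is empty (hence the risk of starvation). */
static int dispatch(struct queue qs[NQUEUES]) {
    for (int level = 0; level < NQUEUES; level++) {
        int pid = dequeue(&qs[level]);
        if (pid != -1) return pid;
    }
    return -1;   /* nothing ready in any queue */
}

int main(void) {
    struct queue qs[NQUEUES] = {
        { {7},    0, 1 },   /* one system process        */
        { {3, 4}, 0, 2 },   /* two interactive processes */
        { {9},    0, 1 },   /* one batch process         */
    };
    int pid;
    while ((pid = dispatch(qs)) != -1)
        printf("running P%d\n", pid);
    return 0;
}
```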
Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue (MLFQ) scheduling is a
dynamic scheduling algorithm used in operating
systems to manage processes by placing them in
multiple queues and using preemption and priority
adjustments based on the behavior of the processes.
Advantages of MLFQ Scheduling
•It is a flexible scheduling algorithm.
•It allows processes to move between different queues.
•A process that waits too long in a lower-priority queue may be moved to a higher-priority queue, which helps prevent starvation.
Disadvantages of MLFQ Scheduling
•The algorithm is complex.
•Moving processes between different queues produces extra CPU overhead.
•Defining the best scheduler requires some other means of selecting values for its parameters.
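A much-simplified sketch of the feedback behaviour described above: a process that exhausts its time quantum is demoted to a lower-priority queue, and a process that has waited too long is promoted to prevent starvation. The level count, threshold, and struct layout are assumptions for illustration, not a real scheduler.

```c
#define LEVELS 3   /* queue 0 is the highest priority */

struct task { int level; int waited; };

/* CPU-bound behaviour: the task used its whole quantum, so move it one queue down. */
void on_quantum_expired(struct task *t) {
    if (t->level < LEVELS - 1)
        t->level++;
}

/* Aging/feedback: after waiting long enough, move the task one queue up. */
void on_wait_tick(struct task *t, int aging_threshold) {
    if (++t->waited >= aging_threshold && t->level > 0) {
        t->level--;
        t->waited = 0;
    }
}
```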
Threads
•A thread is a basic unit of execution within a
process, enabling concurrent execution of code.
•Threads within a process share the process's
memory space, file handles, and other resources.
•Threads are also known as Lightweight processes.
•Threads are a popular way to improve the
performance of an application through parallelism.
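A minimal POSIX threads (Pthreads) sketch of these points: two threads created inside one process run concurrently and share the process's global data, so a mutex is used when they update it. The variable and function names are illustrative; compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;   /* lives in the process's data segment, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;                       /* shared state: protect with the mutex */
        printf("%s sees counter = %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                     /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);
    return 0;
}
```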
Process vs. Thread
• A process is any program in execution; a thread is a segment of a process.
• A process consumes more resources; a thread consumes fewer resources.
• A process requires more time for creation; a thread requires comparatively less time to create.
• A process is heavyweight; a thread is known as a lightweight process.
• A process takes more time to terminate; a thread takes less time to terminate.
• Processes have independent data and code segments; a thread shares the data segment, code segment, files, etc. with its peer threads.
• A process takes more time for context switching; a thread takes less time for context switching.
• Communication between processes needs more time than communication between threads.
• If a process gets blocked, the remaining processes can continue their execution; if a user-level thread gets blocked, all of its peer threads are also blocked.
Advantages of Thread
1.Responsiveness
2.Resource sharing, hence allowing better utilization of resources.
3.Economy. Creating and managing threads becomes easier.
4.Scalability. One thread runs on one CPU. In Multithreaded
processes, threads can be distributed over a series of processors
to scale.
5.Context Switching is smooth. Context switching refers to the
procedure followed by the CPU to change from one task to another.
6.Enhanced Throughput of the system.
Types of Thread
There are two types of threads:
1.User Threads
2.Kernel Threads
User threads run above the kernel and are managed without kernel support. These are the threads that application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern operating systems support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.
User-Level Threads vs. Kernel-Level Threads
• User-level threads are implemented by users; kernel-level threads are implemented by the operating system.
• User-level threads are not recognized by the operating system; kernel-level threads are.
• A context switch between user-level threads requires no hardware support; for kernel-level threads, hardware support is needed.
• User-level threads are mainly designed as dependent threads; kernel-level threads are mainly designed as independent threads.
• If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel thread performs a blocking operation, another thread can continue execution.
• Examples of user-level threads: Java threads, POSIX threads. Examples of kernel-level threads: Windows, Solaris.
• User-level threads are implemented by a thread library and are easy to implement; kernel-level threads are implemented by the operating system and are complex.
• User-level threads are generic and can run on any operating system; kernel-level threads are specific to the operating system.
Multithreading Models
The user threads must be mapped to kernel
threads, by one of the following strategies:
•Many to One Model
•One to One Model
•Many to Many Model
Many to One Model
•In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
•Thread management is handled by the thread library in user space, which is efficient.
•If the operating system does not support kernel threads, user-level thread libraries use this many-to-one relationship model.
One to One Model
•The one to one model creates a separate kernel thread to
handle each and every user thread.
•Most implementations of this model place a limit on how many
threads can be created.
•Linux and Windows from 95 to XP implement the one-to-one
model for threads.
•This model provides more concurrency than the many-to-one model.
Many to Many Model
•The many to many model multiplexes any number of user
threads onto an equal or smaller number of kernel threads,
combining the best features of the one-to-one and many-to-one
models.
•Users can create any number of threads.
•Blocking kernel system calls do not block the entire process.
•Processes can be split across multiple processors.
Thread Libraries
- A thread library provides programmers with an API for the creation and management of threads.
- It may be implemented either in user space or in kernel space.
- A user-space library has its API functions implemented solely within user space, with no kernel support.
- A kernel-space library involves system calls and requires kernel support.
Three main thread libraries
1. POSIX Pthreads: may be provided as either a user-level or kernel-level library, as an extension to the POSIX standard.
2. Win32 threads: provided as a kernel-level library on Windows systems.
3. Java threads: since Java generally runs on a Java Virtual Machine, the implementation of threads is based upon whatever OS and hardware the JVM is running on, i.e., either Pthreads or Win32 threads depending on the system.
Multithreading Issues
- Thread Cancellation
Thread cancellation means terminating a thread before it has finished its work. There are two approaches: asynchronous cancellation, which terminates the target thread immediately, and deferred cancellation, which allows the target thread to periodically check whether it should be cancelled.
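A minimal Pthreads sketch of the two cancellation modes, with the worker loop as a hypothetical stand-in for real work. Deferred cancellation is the POSIX default, and pthread_testcancel() marks an explicit point where a pending cancellation request is honoured; compile with -pthread.

```c
#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    /* Deferred cancellation (the POSIX default): the thread is cancelled
     * only at cancellation points such as pthread_testcancel(). */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* safe point to honour a pending cancel */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);          /* request cancellation of the target thread */
    pthread_join(t, NULL);      /* the joined status will be PTHREAD_CANCELED */
    return 0;
}
```

Asynchronous cancellation would instead set PTHREAD_CANCEL_ASYNCHRONOUS, allowing the thread to be terminated immediately at any point.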
- Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred. When a multithreaded process receives a signal, to which thread should it be delivered? It can be delivered to all threads or to a single thread.
fork() System Call
fork() is a system call, executed in the kernel, through which a process creates a copy of itself. The problem in a multithreaded process is: if one thread calls fork(), should the entire process (all of its threads) be copied or not?
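A minimal sketch of this fork() question on a POSIX system, where the child of a fork() from a multithreaded process contains only a copy of the thread that called fork(); the second thread exists purely to make the parent multithreaded, and the printed messages are illustrative. Compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void *other_thread(void *arg) {
    (void)arg;
    pause();        /* exists only so the parent process is multithreaded */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, other_thread, NULL);

    pid_t pid = fork();   /* only the calling thread is duplicated:
                             the child starts with a single thread */
    if (pid == 0) {
        printf("child %d: has just one thread\n", (int)getpid());
        _exit(0);
    }
    printf("parent %d: still has two threads\n", (int)getpid());
    waitpid(pid, NULL, 0);
    return 0;
}
```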
Security Issues
Yes, there can be security issues because of the extensive sharing of
resources between multiple threads.