SlideShare a Scribd company logo
2
Most read
3
Most read
8
Most read
OPERATING SYSTEM NOTES
UNIT-2
[Concept of processes, process scheduling, operations on processes, inter-process communication,
communication in Client-Server-Systems, overview & benefits of threads.]
Concept of Processes
Process is not the same as program. A process is more than a program code. A process is an 'active'
entity as opposed to program which is considered to be a 'passive' entity. A program becomes a
process when an executable file is loaded into memory. Two common techniques for loading
executable files are double-clicking an icon representing the executable file and entering the name
of the executable file on the command line (as in prog.exe or a.out.)
A program in the execution is called a Process. Attributes held by process include hardware state,
memory, CPU etc. Process memory is divided into four sections for efficient working :
• The text section is made up of the compiled program
code, read in from non-volatile storage when the program
is launched.
• The data section is made up the global and static
variables, allocated and initialized prior to executing the
main.
• The heap is used for the dynamic memory allocation, and
is managed via calls to new, delete, malloc, free, etc.
• The stack is used for local variables. Space on the stack is
reserved for local variables when they are declared (at
function entrance or elsewhere, depending on the
language), and the space is freed up when the variables go
out of scope. The stack is also used for function return
values, and the exact mechanisms of stack management
may be language specific.
The stack and the heap start at opposite ends of the process's free space and grow towards each
other. If they should ever meet, then either a stack overflow error will occur, or else a call to new or
malloc will fail due to insufficient memory available.
State of Process
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Each process may be in one of the following states:
• New. The process is being created.
• Ready. The process has all the resources available that it needs to run, but the CPU is not
currently working on this process's instructions.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
• Running. Instructions are being executed.
• Waiting. The process cannot run at the momentis because it is waiting for some event to
occur (such as an I/O completion or reception of a signal).
• Terminated. The process has finished execution.
Some systems may have other states besides the ones listed here.
Process Control Block (PCB)
For each process there is a Process Control Block, PCB, enclosing all process-specific
information. It is a data structure, which is shown below :
• Process state. The state may be new, ready running, waiting,
halted, and so on.
• Process Number. Process ID, and parent process ID.
• Program counter. The counter indicates the address of the
next instruction to be executed for this process.
• CPU registers. The registers vary in number and type,
depending on the computer architecture. They include
accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along
with the program counter, this state information must be
saved and restored when swapping processes in and out of the
CPU. Following Diagram represents the idea of CPU switch
from process to process.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
• CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, dependmg on the
memory system used by the operating system.
• Accounting information. This mformation includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
Process Scheduling
For a single-processor system, there will never be more than one running process. If there are more
processes, the rest will have to wait until the CPU is free. The objective of multiprogramming is to
have some process nnming at all times, to maximize CPU utilization. The objective of time sharing
is to switch the CPU among processes so frequently that users can interact with each program while
it is running.
The act of determining which process in the ready state should be moved to the running state is
known as Process Scheduling. The prime aim of the process scheduling system is to keep the CPU
busy all the time and to deliver minimum response time for all programs. For achieving this, the
scheduler must apply appropriate rules for swapping processes IN and OUT of CPU.
Scheduling Queues
• All processes when enters into the system are stored in the job queue.
• Processes that are residing in main memory and are ready and waiting to execute (Ready
state) are kept on a list called the ready queue.
• Processes waiting for a device to become available are placed in device queues. There are
unique device queues for each I/O device available.
• Other queues may also be created and used as needed.
A new process is initially put in the ready queue. It waits there until it is selected for execution, or is
dispatched. Once the process is allocated the CPU and is executing, one of several events could
occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new subprocess and wait for the subprocess's termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.
In the first two cases, the process execution halts and eventually switches from the waiting state to
the ready state and is then put back in the ready queue. A process continues this cycle until it
terminates. Then it is removed from all queues and has its PCB and resources deallocated.
Schedulers
A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select processes from these queues in some fashion for scheduling purposes. The
selection process is carried out by the scheduler.
There are three types of schedulers available :
1. Long Term Scheduler: Long term scheduler runs less frequently. Long Term Schedulers
decide which program must get into the job queue. From the job queue, the Job Processor,
selects processes and loads them into the memory for execution. Primary aim of the Job
Scheduler is to maintain a good degree of Multiprogramming (the number of processes in
memory). An optimal degree of Multiprogramming means the average rate of process
creation is equal to the average departure rate of processes from the execution memory.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
2. Short Term Scheduler: This is also known as CPU Scheduler and runs very frequently. It
selects from among the processes that are ready to execute and allocates the CPU to one of
them. The primary aim of this scheduler is to enhance CPU performance and increase
process execution rate.
3. Medium Term Scheduler: During extra load, this scheduler picks out big processes from
the ready queue for some time, to allow smaller processes to execute, thereby reducing the
number of processes in the ready queue.
An efficient scheduling system will select a good process mix of CPU-bound processes and I/O
bound processes. An I/O-bound process is one that spends more of its time doing I/O than it spends
doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using
more of its time doing computations.
Context Switch
Whenever an interrupt arrives, the CPU needs to save the current context of the currently running
process, so that it can restore that context when interrupt is processed. The context is represented in
the PCB of the process; it includes the value of the CPU registers, the process state, and memory-
management information.
Switching the CPU to another process requires performing a state save of the current process and a
state restore of a different process. This task is known as context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run. Context-switch times are highly dependent
on hardware support, so it is pure overhead, because the system does no useful work while
switching. Its speed varies from machine to machine, depending on the memory speed, the number
of registers that must be copied, and the existence of special instructions. Typical speeds are a few
milliseconds.
Operations On Processes
The processes in most systems can execute concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a mechanism for process creation and termination.
1. Process Creation: A process may create several new processes via a create-process system
call, during the course of execution. The creating process is called a parent process, and the
new processes are called the children of that process. Each of these new processes may in
turn create other processes, forming a tree of processes.
In general, a process will need certain resources (such as CPU time, memory, files, I/O
devices) to accomplish its task. When a process creates a subprocess, that subprocess may
be able to obtain its resources directly from the operating system, or it may be constrained to
a subset of the resources of the parent process. The parent may have to partition its resources
among its children, or it may be able to share some resources (such as memory or files)
among several of its children. Restricting a child process to a subset of the parent's resources
prevents any process from overloading the system by creating too many subprocesses.
When a process creates a new process, two possibilities exist in terms of execution:
 The parent continues to execute concurrently with its children.
 The parent waits until some or all of its children have terminated.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
There are also two possibilities in terms of the address space of the new process:
 The child process is a duplicate of the parent process (it has the same program and
data as the parent).
 The child process has a new program loaded into it.
In UNIX/Linux, each process is identified by its process identifier, which is a
unique integer. A new process is created by the fork system call. init process serves as the
root parent process for all user processes and has its PID = 1. In given figure, there are three
children of init, whose PID are mentioned with the process thread. we can obtain a listing of
processes by using the ps command. For example, the command ps -el will list complete
information for all processes currently active in the system.
2. Process Termination: A Process can terminate for following reasons:
 By making the exit(system call), typically returning an int, processes may request
their own termination. This int is typically zero on successful completion and some
non-zero code in the event of any problem. This return value is passed to the parent
if it is doing a wait().
 The inability of the system to deliver the necessary system resources.
 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
 If the parent exits, the system may or may not allow the child to continue without a
parent.
When a process ends, all the resources of the process-including physical and virtual
memory, open files, and I/0 buffers-are deallocated by the operating system. The process
termination status and execution times are returned to the parent if the parent is waiting for
the child to terminate, or eventually returned to init if the process already became an orphan
(parents process no longer exists).
Inter-Process Communication
Processes executing concurrently in the operating system may be either:
• independent processes, or
• cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.
A process is cooperating if it can affect or be affected by the other processes executing in the
system. Clearly, any process that shares data with other processes is a cooperating process. A
common paradigm for cooperating processes is Produce-Consumer Problem. A producer process
produces information that is consumed by a consumer process. For example, a compiler may
produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce
object modules, which are consumed by the loader.
There are several reasons for providing an environment that allows process cooperation:
• Information sharing. Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a
speedup can be achieved only if the computer has multiple processing elements (such as
CPUs or I/O channels).
• Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
• Convenience. Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will allow
them to exchange data and information. There are two fundamental models of interprocess
communication:
(1) shared memory and
(2) message passing.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the shared
region. In the message-passing model, communication takes place by means of messages
exchanged between the cooperating processes. Following figure illustrates the difference between
the two systems [(a) message-passing and (b) shared memory]:
(1) Shared Memory Systems: Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory. Typically, a shared-memory
region resides in the address space of the process creating the shared-memory segment. Other
processes that wish to communicate using this shared-memory segment must attach it to their
address space. Normally, the operating system tries to prevent one process from accessing another
process's memory. Shared memory requires that two or more processes agree to remove this
restriction. They can then excbange information by reading and writing data in the shared areas.
The form of the data and the location are determined by these processes and are not under the
operating system's control. The processes are also responsible for ensuring that they are not writing
to the same location simultaneously.
(2) Message Passing Systems: Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same address space and is
particularly useful in a distributed environment, where the communicating processes may reside on
different computers connected by a network. A message-passing facility provides at least two
operations: send(message) and receive(message). Messages sent by a process can be of either fixed
or variable size.
If different processes want to communicate, they must send messages to and receive messages from
each other. A communication link must be established between the cooperating processes before
messages can be sent. There are three key issues to be resolved in message passing systems:
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
1. Direct or indirect communication ( naming ):
A) With direct communication the sender must know the name of the receiver to
which it wishes to send a message. In this scheme, the send() and receive()
primitives are defined as:
i. send(P, message) -Send a message to process P.
ii. receive (Q, message)- Receive a message from process Q.
A communication link in this scheme has the following properties:
i. A link is established automatically between every pair of processes that want
to communicate. The processes need to know only each other's identity to
communicate.
ii. A link is associated with exactly two processes.
iii. Between each pair of processes, there exists exactly one link.
B) Indirect communication uses shared mailboxes, or ports. A mailbox can be viewed
abstractly as an object into which messages can be placed by processes and from
which messages can be removed. Each mailbox has a unique identification. The
send() and receive() primitives are defined as follows:
i. send (A, message) -Send a message to mailbox A.
ii. receive (A, message)- Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
i. A link is established between a pair of processes only if both members of the
pair have a shared mailbox.
ii. A link may be associated with more than two processes.
iii. Between each pair of communicating processes, there may be a number of
different links, with each link corresponding to one mailbox.
2. Synchronous or asynchronous communication: Communication between processes takes
place through calls to send() and receive () primitives. There are different design options for
implementing each primitive. Message passing may be either blocking or nonblocking - also
known as synchronous and asynchronous.
1. Blocking send. The sending process is blocked until the message is received by
the receiving process or by the mailbox.
2. Nonblocking send. The sending process sends the message and resumes
operation.
3. Blocking receive. The receiver blocks until a message is available.
4. Nonblocking receive. The receiver retrieves either a valid message or a null.
3. Automatic or explicit buffering: Whether communication is direct or indirect, messages
exchanged by communicating processes reside in a temporary queue. Basically, such queues
can be implemented in three ways:
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
1. Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
2. Bounded capacity. The queue has finite length n; thus, at most n messages can reside in
it. If the queue is not full when a new message is sent, the message is placed in the queue
and the sender can continue execution without waiting. If the link is full, the sender must
block until space is available in the queue.
3. Unbounded capacity. The queue's length is potentially infinite; thus, any number of
messages can wait in it. The sender never blocks.
Comparison between two methods of IPC:
• Shared Memory is faster once it is set up, because no system calls are required and access
occurs at normal memory speeds. However it is more complicated to set up, and doesn't
work as well across multiple computers. Shared memory is generally preferable when large
amounts of information must be shared quickly on the same computer.
• Message Passing requires system calls for every message transfer, and is therefore slower,
but it is simpler to set up and works well across multiple computers. Message passing is
generally preferable when the amount and/or frequency of data transfers is small, or when
multiple computers are involved.
Communication In Client-Server-Systems
Client-Server Systems are specialized form of distributed system, where a system designated as
Server System, satisfy requests generated by Client Systems. This form of computing system has
general structure as shown below:
Communication in Client-Server Systems takes place using sockets, Remote Procedure Calls (RPC)
and Pipes other than two techniques discussed before (Message Passing and Shared Memory).
1. Sockets: A socket is defined as an endpoint for communication. It is identified by an IP
address concatenated with a port number. A pair of processes communicating over a network
employ a pair of sockets-one for each process. Servers implementing specific services (such
as telnet, FTP, and HTTP) listen to well-known ports (a telnet server listens to port 23; an
FTP server listens to port 21; and a Web, or HTTP, server listens to port 80). All ports below
1024 are considered ·well known; we can use them to implement standard services.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
If a client on host X with IP address 146.86.5.20 wishes to establish a connection with a Web
server (which is listening on port 80) at address 161.25.19.8 (IP), the client will be assigned
some arbitrary number greater than 1024 as its port. Let's say the port number is 1625. Now,
the connection will consist of a pair of sockets: (146.86.5.20:1625) on host X and
(161.25.19.8:80) on the Web server.
As all connections must be unique. Therefore, if another process also on host X wished to
establish another connection with the same Web server, it would be assigned a port number
greater than 1024 and not equal to 1625. This ensures that all com1.ections consist of a
unique pair of sockets. Communication channels via sockets may be of one of two major
forms:
• Connection-oriented ( TCP, Transmission Control Protocol ): These connections
emulate a telephone connection. All packets sent down the connection are guaranteed
to be delivered at the receiving end in the same order in which they were sent. The
TCP layer of the network protocol takes steps to verify all packets sent, re-send
packets if necessary, and arrange the received packets in the proper order before
delivering them to the receiving process. There is a certain amount of overhead
involved in this procedure, and if one packet is missing or delayed, then any packets
which follow will have to wait until the errant (culprit) packet is delivered.
• Connectionless ( UDP, User Datagram Protocol ): They emulate individual
telegrams. There is no guarantee that any particular packet will get through
undamaged or will be delivered at all, and no guarantee that the packets will get
delivered in any particular order. There may even be duplicate packets delivered,
depending on how the intermediary connections are configured. UDP transmissions
are much faster than TCP, but applications must implement their own error checking
and recovery procedures.
2. Remote Procedure Call: In Client-Server Systems, RPCs allow a client to invoke a
procedure on a remote host as it would invoke a procedure locally by providing a stub on
the client side. Stubs are Client-side proxy for the actual procedure on the server. Typically,
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
a separate stub exists for each separate remote procedure. When the client invokes a remote
procedure, the RPC system calls the appropriate stub, which locates the port on the server
and marshals the parameters (Parameter marshalling involves packaging the parameters into
a form that can be transmitted over a network). The stub then transmits a message to the
server using message passing. A similar stub on the server side receives this message,
unpacks the marshalled parameters and invokes the procedure on the server. If necessary,
return values are passed back to the client using the same technique.
3. Pipes: A pipe acts as a conduit (channel) allowing two processes to communicate. In
implementing a pipe, four issues must be considered:
1. Does the pipe allow unidirectional communication or bidirectional communication?
2. If two-way communication is allowed, is it half duplex (data can travel only one way
at a time) or full duplex (data can travel in both directions at the same time)?
3. Should there be a relationship (such as parent-child) between the communicating
processes?
4. Can the pipes communicate over a network, or the communicating processes must
reside on the same machine?
Types of Pipes:
1. Ordinary pipes: It allows two processes to communicate in standard producer-
consumer fashion; the producer writes to one end of the pipe (the write-end) and the
consumer reads from the other end (the read-end). result, Ordinary pipes are
unidirectional, allowing only one-way communication. If two-way communication is
required, two pipes must be used, with each pipe sending data in a different
direction.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
2. Named Pipes: Named pipes provide a much more powerful communication tool
than ordinary pipes. Here, communication can be bidirectional, and no parent-child
relationship is required. Once a named pipe is established, several processes can use
it for communication. Named pipes continue to exist after communicating processes
have finished.
Overview & Benefits Of Threads
A thread is a path of execution within a process. Despite of the fact that a thread must execute in
process, the process and its associated threads are different concept. Processes are used to group
resources together and threads are the entities scheduled for execution on the CPU. Because threads
have some of the properties of processes, they are sometimes called lightweight processes. Like a
traditional process (process with one thread), a thread can be in any of several states (Running,
Blocked, Ready or Terminated).
Threads are not independent of one other like processes as threads shares with other threads their
code section, data section and OS resources like open files and signals. But, like process, a thread
has its own thread ID, program counter (PC), a register set, and a stack space. A traditional process
has a single thread of control (Single Threaded Process). If a process has multiple threads of
control, it can perform more than one task at a time (Multi-Threaded System).
Similarities in Process and Thread:
• Like processes threads share CPU and only one thread active (running) at a time.
• Like processes, threads within a processes, threads within a processes execute sequentially.
• Like processes, thread can create children.
• And like process, if one thread is blocked, another thread can run.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
Difference between Process and Thread:
S.N. Process Thread
1 Process is heavy weight or resource intensive.
Thread is light weight, taking lesser
resources than a process.
2
Process switching needs interaction with
operating system.
Thread switching does not need to
interact with operating system.
3
In multiple processing environments, each process
executes the same code but has its own memory
and file resources.
All threads can share same set of
open files, code section, data
segments, etc.
4
Multiple processes without using threads use more
resources.
Multiple threaded processes use
fewer resources.
5
In multiple processes each process operates
independently of the others.
One thread can read, write or change
another thread's data.
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness. If the process is divided into multiple threads (Multithreading), it will
continue running even if part of it is blocked or is performing a lengthy operation, thereby
increasing responsiveness to the user. For instance, a multithreaded Web browser could
allow user interaction in one thread while an image was being loaded in another thread.
2. Resource sharing. Processes may only share resources through techniques such as shared
memory or message passing. However, resources like code, data and file can be shared
among all threads within a process, which allows multiple tasks to be performed
simultaneously in a single address space.
3. Economy. Allocating memory and resources for process creation is costly. Because threads
share the resources of the process to which they belong, it is more economical to create and
context-switch threads.
4. Scalability. The benefits of multithreading can be greatly increased in a multiprocessor
architecture, where threads may be running in parallel on different processors. A single-
threaded process can only run on one processor, regardless how many are available.
Multithreading on a multi-CPU machine increases parallelism and make process execution
faster.
5. Enhanced Throughput of the system. If process is divided into multiple threads and each
thread function is considered as one job, then the number of jobs completed per unit time is
increased. Thus, the throughput of the system can be increased.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
Types of Threads:
1. User Level Thread: User managed threads.
2. Kernel Level Thread: Supported and managed directly by the operating system.
Difference between User-Level & Kernel-Level Thread
S.N. User-Level Threads Kernel-Level Thread
1
User-level threads are faster to create and
manage.
Kernel-level threads are slower to create
and manage.
2
Implementation is by a thread library at the
user level.
Operating system supports creation of
Kernel threads.
3
User-level thread is generic and can run on
any operating system.
Kernel-level thread is specific to the
operating system.
The user threads must be mapped to kernel threads, by one of the following strategies.
1. Many-To-One Model:
 many user-level threads are all mapped onto a single kernel thread.
 Thread management is handled by the thread library in user space, which is very
efficient.
 However, if a blocking system call is made, then the entire process blocks, even if
the other user threads would otherwise be able to continue.
 Because a single kernel thread can operate only on a single CPU, the many-to-one
model does not allow individual processes to be split across multiple CPUs.
 Green Threads - a thread library available for Solaris-uses this modet. Now few
systems follow this strategy.
Provided By Shipra Swati, PSCET
OPERATING SYSTEM NOTES
2. One-To-One Model:
 maps each user thread to a kernel thread.
 It provides more concurrency than the many-to-one model by allowing another
thread to run when a thread makes a blocking system call;
 it also allows multiple threads to run in parallel on multiprocessors.
 Drawback: creating a user thread requires creating the corresponding kernel thread.
So, overhead of creating kernel threads can burden the performance of an
application.
 most implementations of this model restrict the number of threads supported by the
system.
 Linux, along with the family of Windows operating systems, implement the one-to-
one model.
3. Many-To-Many Model:
 any number of user threads onto an equal or smaller number of kernel threads.
 Users have no restrictions on the number of threads created, developers can create as
many user threads as necessary .
 when a thread performs a blocking system call, the kernel can schedule another
thread for execution.
 the kernel threads ( corresponding to different user threads) can run in parallel on a
multiprocessor, so true concurrency is achieved.
Provided By Shipra Swati, PSCET

More Related Content

PPT
OS Process and Thread Concepts
PPT
OPERATING SYSTEM SERVICES, OPERATING SYSTEM STRUCTURES
PPTX
Threads in Operating System | Multithreading | Interprocess Communication
PDF
Unit 5 Advanced Computer Architecture
PDF
Process scheduling (CPU Scheduling)
PPTX
Instruction Formats
PDF
OS - Process Concepts
OS Process and Thread Concepts
OPERATING SYSTEM SERVICES, OPERATING SYSTEM STRUCTURES
Threads in Operating System | Multithreading | Interprocess Communication
Unit 5 Advanced Computer Architecture
Process scheduling (CPU Scheduling)
Instruction Formats
OS - Process Concepts

What's hot (20)

PPT
Operating system presentation
PDF
Multicore Computers
PDF
computer system structure
PPTX
Memory management ppt
PPTX
process and thread.pptx
ODP
Distributed operating system(os)
PDF
Operating System.pdf
PPT
Secondary storage management in os
PPTX
5_Interprocess Communication.pptx
PPTX
Operating system - Process and its concepts
PPTX
Concurrency
PPTX
Swapping | Computer Science
PPT
Os Swapping, Paging, Segmentation and Virtual Memory
PPT
System call
PPTX
INTER PROCESS COMMUNICATION (IPC).pptx
PPTX
Page replacement algorithms
PPTX
Types Of Operating Systems
PPTX
Multiprocessor
Operating system presentation
Multicore Computers
computer system structure
Memory management ppt
process and thread.pptx
Distributed operating system(os)
Operating System.pdf
Secondary storage management in os
5_Interprocess Communication.pptx
Operating system - Process and its concepts
Concurrency
Swapping | Computer Science
Os Swapping, Paging, Segmentation and Virtual Memory
System call
INTER PROCESS COMMUNICATION (IPC).pptx
Page replacement algorithms
Types Of Operating Systems
Multiprocessor
Ad

Similar to Operating System-Concepts of Process (20)

PPTX
Os unit 3 , process management
DOC
Operating Systems Unit Two - Fourth Semester - Engineering
PDF
OS-Process.pdf
PDF
UNIT - 3 PPT(Part- 1)_.pdf
PPTX
UNIT I-Processes.pptx
PPT
Ch03- PROCESSES.ppt
PPTX
UNIT 2 OS.pptx Introduction of Operating System
PDF
Process management- This ppt contains all required information regarding oper...
PPTX
Unit 2...............................................
PPTX
introduction to operating system unit 2
PPTX
Processing management
PPTX
Operating System.pptx
PPTX
Process management system in operating system
PPTX
Process management in operating system, process creation, process sheduling
PDF
Chapter 3.pdf
PPTX
Operating System Periodic Test 1 Answers.pptx
PPTX
Operating System Periodic test answers t
PPTX
Operating System Periodic Test 1 Answers
PPTX
Operating system || Chapter 3: Process
PDF
Process And Scheduling Algorithms in os
Os unit 3 , process management
Operating Systems Unit Two - Fourth Semester - Engineering
OS-Process.pdf
UNIT - 3 PPT(Part- 1)_.pdf
UNIT I-Processes.pptx
Ch03- PROCESSES.ppt
UNIT 2 OS.pptx Introduction of Operating System
Process management- This ppt contains all required information regarding oper...
Unit 2...............................................
introduction to operating system unit 2
Processing management
Operating System.pptx
Process management system in operating system
Process management in operating system, process creation, process sheduling
Chapter 3.pdf
Operating System Periodic Test 1 Answers.pptx
Operating System Periodic test answers t
Operating System Periodic Test 1 Answers
Operating system || Chapter 3: Process
Process And Scheduling Algorithms in os
Ad

More from Shipra Swati (20)

PDF
Operating System-Process Scheduling
PDF
Operating System-Introduction
PDF
Java unit 11
PDF
Java unit 14
PDF
Java unit 12
PDF
Java unit 7
PDF
Java unit 3
PDF
Java unit 2
PDF
Java unit 1
PDF
OOPS_Unit_1
PDF
Ai lab manual
PDF
Fundamental of Information Technology - UNIT 8
PDF
Fundamental of Information Technology - UNIT 7
PDF
Fundamental of Information Technology - UNIT 6
PDF
Fundamental of Information Technology
PDF
Disk Management
PDF
File Systems
PDF
Memory Management
PDF
Deadlocks
PDF
Process Synchronization
Operating System-Process Scheduling
Operating System-Introduction
Java unit 11
Java unit 14
Java unit 12
Java unit 7
Java unit 3
Java unit 2
Java unit 1
OOPS_Unit_1
Ai lab manual
Fundamental of Information Technology - UNIT 8
Fundamental of Information Technology - UNIT 7
Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology
Disk Management
File Systems
Memory Management
Deadlocks
Process Synchronization

Recently uploaded (20)

PDF
Origin of periodic table-Mendeleev’s Periodic-Modern Periodic table
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PDF
01-Introduction-to-Information-Management.pdf
PDF
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PDF
Business Ethics Teaching Materials for college
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PPTX
BOWEL ELIMINATION FACTORS AFFECTING AND TYPES
PDF
Microbial disease of the cardiovascular and lymphatic systems
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
PDF
TR - Agricultural Crops Production NC III.pdf
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PPTX
PPH.pptx obstetrics and gynecology in nursing
PPTX
Pharma ospi slides which help in ospi learning
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PPTX
Cell Types and Its function , kingdom of life
PDF
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
Origin of periodic table-Mendeleev’s Periodic-Modern Periodic table
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
01-Introduction-to-Information-Management.pdf
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
102 student loan defaulters named and shamed – Is someone you know on the list?
Business Ethics Teaching Materials for college
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
BOWEL ELIMINATION FACTORS AFFECTING AND TYPES
Microbial disease of the cardiovascular and lymphatic systems
Supply Chain Operations Speaking Notes -ICLT Program
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
TR - Agricultural Crops Production NC III.pdf
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Renaissance Architecture: A Journey from Faith to Humanism
PPH.pptx obstetrics and gynecology in nursing
Pharma ospi slides which help in ospi learning
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
Cell Types and Its function , kingdom of life
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf

Operating System-Concepts of Process

  • 1. OPERATING SYSTEM NOTES UNIT-2 [Concept of processes, process scheduling, operations on processes, inter-process communication, communication in Client-Server-Systems, overview & benefits of threads.] Concept of Processes Process is not the same as program. A process is more than a program code. A process is an 'active' entity as opposed to program which is considered to be a 'passive' entity. A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line (as in prog.exe or a.out.) A program in the execution is called a Process. Attributes held by process include hardware state, memory, CPU etc. Process memory is divided into four sections for efficient working : • The text section is made up of the compiled program code, read in from non-volatile storage when the program is launched. • The data section is made up the global and static variables, allocated and initialized prior to executing the main. • The heap is used for the dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc. • The stack is used for local variables. Space on the stack is reserved for local variables when they are declared (at function entrance or elsewhere, depending on the language), and the space is freed up when the variables go out of scope. The stack is also used for function return values, and the exact mechanisms of stack management may be language specific. The stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a stack overflow error will occur, or else a call to new or malloc will fail due to insufficient memory available. State of Process As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states: • New. The process is being created. • Ready. The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions. Provided By Shipra Swati, PSCET
  • 2. OPERATING SYSTEM NOTES • Running. Instructions are being executed. • Waiting. The process cannot run at the momentis because it is waiting for some event to occur (such as an I/O completion or reception of a signal). • Terminated. The process has finished execution. Some systems may have other states besides the ones listed here. Process Control Block (PCB) For each process there is a Process Control Block, PCB, enclosing all process-specific information. It is a data structure, which is shown below : • Process state. The state may be new, ready running, waiting, halted, and so on. • Process Number. Process ID, and parent process ID. • Program counter. The counter indicates the address of the next instruction to be executed for this process. • CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general- purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved and restored when swapping processes in and out of the CPU. Following Diagram represents the idea of CPU switch from process to process. Provided By Shipra Swati, PSCET
  • 3. OPERATING SYSTEM NOTES • CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters. • Memory-management information. This information may include such information as the value of the base and limit registers, the page tables, or the segment tables, dependmg on the memory system used by the operating system. • Accounting information. This mformation includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on. • I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on. Provided By Shipra Swati, PSCET
  • 4. OPERATING SYSTEM NOTES Process Scheduling For a single-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free. The objective of multiprogramming is to have some process nnming at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. The act of determining which process in the ready state should be moved to the running state is known as Process Scheduling. The prime aim of the process scheduling system is to keep the CPU busy all the time and to deliver minimum response time for all programs. For achieving this, the scheduler must apply appropriate rules for swapping processes IN and OUT of CPU. Scheduling Queues • All processes when enters into the system are stored in the job queue. • Processes that are residing in main memory and are ready and waiting to execute (Ready state) are kept on a list called the ready queue. • Processes waiting for a device to become available are placed in device queues. There are unique device queues for each I/O device available. • Other queues may also be created and used as needed. A new process is initially put in the ready queue. It waits there until it is selected for execution, or is dispatched. Once the process is allocated the CPU and is executing, one of several events could occur: • The process could issue an I/O request and then be placed in an I/O queue. • The process could create a new subprocess and wait for the subprocess's termination. • The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue. In the first two cases, the process execution halts and eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates. Then it is removed from all queues and has its PCB and resources deallocated. Schedulers A process migrates among the various scheduling queues throughout its lifetime. The operating system must select processes from these queues in some fashion for scheduling purposes. The selection process is carried out by the scheduler. There are three types of schedulers available : 1. Long Term Scheduler: Long term scheduler runs less frequently. Long Term Schedulers decide which program must get into the job queue. From the job queue, the Job Processor, selects processes and loads them into the memory for execution. Primary aim of the Job Scheduler is to maintain a good degree of Multiprogramming (the number of processes in memory). An optimal degree of Multiprogramming means the average rate of process creation is equal to the average departure rate of processes from the execution memory. Provided By Shipra Swati, PSCET
  • 5. OPERATING SYSTEM NOTES 2. Short Term Scheduler: This is also known as CPU Scheduler and runs very frequently. It selects from among the processes that are ready to execute and allocates the CPU to one of them. The primary aim of this scheduler is to enhance CPU performance and increase process execution rate. 3. Medium Term Scheduler: During extra load, this scheduler picks out big processes from the ready queue for some time, to allow smaller processes to execute, thereby reducing the number of processes in the ready queue. An efficient scheduling system will select a good process mix of CPU-bound processes and I/O bound processes. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. Context Switch Whenever an interrupt arrives, the CPU needs to save the current context of the currently running process, so that it can restore that context when interrupt is processed. The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory- management information. Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch times are highly dependent on hardware support, so it is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions. Typical speeds are a few milliseconds. Operations On Processes The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, these systems must provide a mechanism for process creation and termination. 1. Process Creation: A process may create several new processes via a create-process system call, during the course of execution. The creating process is called a parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes. In general, a process will need certain resources (such as CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a subprocess, that subprocess may be able to obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process. The parent may have to partition its resources among its children, or it may be able to share some resources (such as memory or files) among several of its children. Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many subprocesses. When a process creates a new process, two possibilities exist in terms of execution:  The parent continues to execute concurrently with its children.  The parent waits until some or all of its children have terminated. Provided By Shipra Swati, PSCET
  • 6. OPERATING SYSTEM NOTES There are also two possibilities in terms of the address space of the new process:  The child process is a duplicate of the parent process (it has the same program and data as the parent).  The child process has a new program loaded into it. In UNIX/Linux, each process is identified by its process identifier, which is a unique integer. A new process is created by the fork system call. init process serves as the root parent process for all user processes and has its PID = 1. In given figure, there are three children of init, whose PID are mentioned with the process thread. we can obtain a listing of processes by using the ps command. For example, the command ps -el will list complete information for all processes currently active in the system. 2. Process Termination: A Process can terminate for following reasons:  By making the exit(system call), typically returning an int, processes may request their own termination. This int is typically zero on successful completion and some non-zero code in the event of any problem. This return value is passed to the parent if it is doing a wait().  The inability of the system to deliver the necessary system resources.  In response to a KILL command or other unhandled process interrupts.  A parent may kill its children if the task assigned to them is no longer needed Provided By Shipra Swati, PSCET
  • 7. OPERATING SYSTEM NOTES  If the parent exits, the system may or may not allow the child to continue without a parent. When a process ends, all the resources of the process-including physical and virtual memory, open files, and I/0 buffers-are deallocated by the operating system. The process termination status and execution times are returned to the parent if the parent is waiting for the child to terminate, or eventually returned to init if the process already became an orphan (parents process no longer exists). Inter-Process Communication Processes executing concurrently in the operating system may be either: • independent processes, or • cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process. A common paradigm for cooperating processes is Produce-Consumer Problem. A producer process produces information that is consumed by a consumer process. For example, a compiler may produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce object modules, which are consumed by the loader. There are several reasons for providing an environment that allows process cooperation: • Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information. • Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels). • Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads. • Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel. Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication: (1) shared memory and (2) message passing. Provided By Shipra Swati, PSCET
  • 8. OPERATING SYSTEM NOTES In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. Following figure illustrates the difference between the two systems [(a) message-passing and (b) shared memory]: (1) Shared Memory Systems: Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment. Other processes that wish to communicate using this shared-memory segment must attach it to their address space. Normally, the operating system tries to prevent one process from accessing another process's memory. Shared memory requires that two or more processes agree to remove this restriction. They can then excbange information by reading and writing data in the shared areas. The form of the data and the location are determined by these processes and are not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously. (2) Message Passing Systems: Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space and is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size. If different processes want to communicate, they must send messages to and receive messages from each other. A communication link must be established between the cooperating processes before messages can be sent. There are three key issues to be resolved in message passing systems: Provided By Shipra Swati, PSCET
  • 9. OPERATING SYSTEM NOTES 1. Direct or indirect communication ( naming ): A) With direct communication the sender must know the name of the receiver to which it wishes to send a message. In this scheme, the send() and receive() primitives are defined as: i. send(P, message) -Send a message to process P. ii. receive (Q, message)- Receive a message from process Q. A communication link in this scheme has the following properties: i. A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate. ii. A link is associated with exactly two processes. iii. Between each pair of processes, there exists exactly one link. B) Indirect communication uses shared mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. The send() and receive() primitives are defined as follows: i. send (A, message) -Send a message to mailbox A. ii. receive (A, message)- Receive a message from mailbox A. In this scheme, a communication link has the following properties: i. A link is established between a pair of processes only if both members of the pair have a shared mailbox. ii. A link may be associated with more than two processes. iii. Between each pair of communicating processes, there may be a number of different links, with each link corresponding to one mailbox. 2. Synchronous or asynchronous communication: Communication between processes takes place through calls to send() and receive () primitives. There are different design options for implementing each primitive. Message passing may be either blocking or nonblocking - also known as synchronous and asynchronous. 1. Blocking send. The sending process is blocked until the message is received by the receiving process or by the mailbox. 2. Nonblocking send. The sending process sends the message and resumes operation. 3. Blocking receive. The receiver blocks until a message is available. 4. Nonblocking receive. The receiver retrieves either a valid message or a null. 3. Automatic or explicit buffering: Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Basically, such queues can be implemented in three ways: Provided By Shipra Swati, PSCET
  • 10. OPERATING SYSTEM NOTES 1. Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message. 2. Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. If the link is full, the sender must block until space is available in the queue. 3. Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks. Comparison between two methods of IPC: • Shared Memory is faster once it is set up, because no system calls are required and access occurs at normal memory speeds. However it is more complicated to set up, and doesn't work as well across multiple computers. Shared memory is generally preferable when large amounts of information must be shared quickly on the same computer. • Message Passing requires system calls for every message transfer, and is therefore slower, but it is simpler to set up and works well across multiple computers. Message passing is generally preferable when the amount and/or frequency of data transfers is small, or when multiple computers are involved. Communication In Client-Server-Systems Client-Server Systems are specialized form of distributed system, where a system designated as Server System, satisfy requests generated by Client Systems. This form of computing system has general structure as shown below: Communication in Client-Server Systems takes place using sockets, Remote Procedure Calls (RPC) and Pipes other than two techniques discussed before (Message Passing and Shared Memory). 1. Sockets: A socket is defined as an endpoint for communication. It is identified by an IP address concatenated with a port number. A pair of processes communicating over a network employ a pair of sockets-one for each process. Servers implementing specific services (such as telnet, FTP, and HTTP) listen to well-known ports (a telnet server listens to port 23; an FTP server listens to port 21; and a Web, or HTTP, server listens to port 80). All ports below 1024 are considered ·well known; we can use them to implement standard services. Provided By Shipra Swati, PSCET
  • 11. OPERATING SYSTEM NOTES If a client on host X with IP address 146.86.5.20 wishes to establish a connection with a Web server (which is listening on port 80) at address 161.25.19.8 (IP), the client will be assigned some arbitrary number greater than 1024 as its port. Let's say the port number is 1625. Now, the connection will consist of a pair of sockets: (146.86.5.20:1625) on host X and (161.25.19.8:80) on the Web server. As all connections must be unique. Therefore, if another process also on host X wished to establish another connection with the same Web server, it would be assigned a port number greater than 1024 and not equal to 1625. This ensures that all com1.ections consist of a unique pair of sockets. Communication channels via sockets may be of one of two major forms: • Connection-oriented ( TCP, Transmission Control Protocol ): These connections emulate a telephone connection. All packets sent down the connection are guaranteed to be delivered at the receiving end in the same order in which they were sent. The TCP layer of the network protocol takes steps to verify all packets sent, re-send packets if necessary, and arrange the received packets in the proper order before delivering them to the receiving process. There is a certain amount of overhead involved in this procedure, and if one packet is missing or delayed, then any packets which follow will have to wait until the errant (culprit) packet is delivered. • Connectionless ( UDP, User Datagram Protocol ): They emulate individual telegrams. There is no guarantee that any particular packet will get through undamaged or will be delivered at all, and no guarantee that the packets will get delivered in any particular order. There may even be duplicate packets delivered, depending on how the intermediary connections are configured. UDP transmissions are much faster than TCP, but applications must implement their own error checking and recovery procedures. 2. Remote Procedure Call: In Client-Server Systems, RPCs allow a client to invoke a procedure on a remote host as it would invoke a procedure locally by providing a stub on the client side. Stubs are Client-side proxy for the actual procedure on the server. Typically, Provided By Shipra Swati, PSCET
  • 12. OPERATING SYSTEM NOTES a separate stub exists for each separate remote procedure. When the client invokes a remote procedure, the RPC system calls the appropriate stub, which locates the port on the server and marshals the parameters (Parameter marshalling involves packaging the parameters into a form that can be transmitted over a network). The stub then transmits a message to the server using message passing. A similar stub on the server side receives this message, unpacks the marshalled parameters and invokes the procedure on the server. If necessary, return values are passed back to the client using the same technique. 3. Pipes: A pipe acts as a conduit (channel) allowing two processes to communicate. In implementing a pipe, four issues must be considered: 1. Does the pipe allow unidirectional communication or bidirectional communication? 2. If two-way communication is allowed, is it half duplex (data can travel only one way at a time) or full duplex (data can travel in both directions at the same time)? 3. Should there be a relationship (such as parent-child) between the communicating processes? 4. Can the pipes communicate over a network, or the communicating processes must reside on the same machine? Types of Pipes: 1. Ordinary pipes: It allows two processes to communicate in standard producer- consumer fashion; the producer writes to one end of the pipe (the write-end) and the consumer reads from the other end (the read-end). result, Ordinary pipes are unidirectional, allowing only one-way communication. If two-way communication is required, two pipes must be used, with each pipe sending data in a different direction. Provided By Shipra Swati, PSCET
  • 13. OPERATING SYSTEM NOTES 2. Named Pipes: Named pipes provide a much more powerful communication tool than ordinary pipes. Here, communication can be bidirectional, and no parent-child relationship is required. Once a named pipe is established, several processes can use it for communication. Named pipes continue to exist after communicating processes have finished. Overview & Benefits Of Threads A thread is a path of execution within a process. Despite of the fact that a thread must execute in process, the process and its associated threads are different concept. Processes are used to group resources together and threads are the entities scheduled for execution on the CPU. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Like a traditional process (process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Threads are not independent of one other like processes as threads shares with other threads their code section, data section and OS resources like open files and signals. But, like process, a thread has its own thread ID, program counter (PC), a register set, and a stack space. A traditional process has a single thread of control (Single Threaded Process). If a process has multiple threads of control, it can perform more than one task at a time (Multi-Threaded System). Similarities in Process and Thread: • Like processes threads share CPU and only one thread active (running) at a time. • Like processes, threads within a processes, threads within a processes execute sequentially. • Like processes, thread can create children. • And like process, if one thread is blocked, another thread can run. Provided By Shipra Swati, PSCET
  • 14. OPERATING SYSTEM NOTES Difference between Process and Thread: S.N. Process Thread 1 Process is heavy weight or resource intensive. Thread is light weight, taking lesser resources than a process. 2 Process switching needs interaction with operating system. Thread switching does not need to interact with operating system. 3 In multiple processing environments, each process executes the same code but has its own memory and file resources. All threads can share same set of open files, code section, data segments, etc. 4 Multiple processes without using threads use more resources. Multiple threaded processes use fewer resources. 5 In multiple processes each process operates independently of the others. One thread can read, write or change another thread's data. Benefits The benefits of multithreaded programming can be broken down into four major categories: 1. Responsiveness. If the process is divided into multiple threads (Multithreading), it will continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded Web browser could allow user interaction in one thread while an image was being loaded in another thread. 2. Resource sharing. Processes may only share resources through techniques such as shared memory or message passing. However, resources like code, data and file can be shared among all threads within a process, which allows multiple tasks to be performed simultaneously in a single address space. 3. Economy. Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. 4. Scalability. The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors. A single- threaded process can only run on one processor, regardless how many are available. Multithreading on a multi-CPU machine increases parallelism and make process execution faster. 5. Enhanced Throughput of the system. If process is divided into multiple threads and each thread function is considered as one job, then the number of jobs completed per unit time is increased. Thus, the throughput of the system can be increased. Provided By Shipra Swati, PSCET
  • 15. OPERATING SYSTEM NOTES Types of Threads: 1. User Level Thread: User managed threads. 2. Kernel Level Thread: Supported and managed directly by the operating system. Difference between User-Level & Kernel-Level Thread S.N. User-Level Threads Kernel-Level Thread 1 User-level threads are faster to create and manage. Kernel-level threads are slower to create and manage. 2 Implementation is by a thread library at the user level. Operating system supports creation of Kernel threads. 3 User-level thread is generic and can run on any operating system. Kernel-level thread is specific to the operating system. The user threads must be mapped to kernel threads, by one of the following strategies. 1. Many-To-One Model:  many user-level threads are all mapped onto a single kernel thread.  Thread management is handled by the thread library in user space, which is very efficient.  However, if a blocking system call is made, then the entire process blocks, even if the other user threads would otherwise be able to continue.  Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow individual processes to be split across multiple CPUs.  Green Threads - a thread library available for Solaris-uses this modet. Now few systems follow this strategy. Provided By Shipra Swati, PSCET
  • 16. OPERATING SYSTEM NOTES 2. One-To-One Model:  maps each user thread to a kernel thread.  It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call;  it also allows multiple threads to run in parallel on multiprocessors.  Drawback: creating a user thread requires creating the corresponding kernel thread. So, overhead of creating kernel threads can burden the performance of an application.  most implementations of this model restrict the number of threads supported by the system.  Linux, along with the family of Windows operating systems, implement the one-to- one model. 3. Many-To-Many Model:  any number of user threads onto an equal or smaller number of kernel threads.  Users have no restrictions on the number of threads created, developers can create as many user threads as necessary .  when a thread performs a blocking system call, the kernel can schedule another thread for execution.  the kernel threads ( corresponding to different user threads) can run in parallel on a multiprocessor, so true concurrency is achieved. Provided By Shipra Swati, PSCET