Process Concept
• An operating system executes a variety of
programs:
– Batch system – jobs
– Time-shared systems – user programs or tasks
• Many modern process concepts are still expressed
in terms of jobs, ( e.g. job scheduling ), and the
two terms are often used interchangeably.
 In computing, a process is an instance of a computer program that
is being executed. It contains the program code and its current
activity.
 Process memory is divided into four sections:
 The text section comprises the compiled program code, read in
from non-volatile storage when the program is launched.
 The data section stores global and static variables, allocated and
initialized prior to executing main.
 The heap is used for dynamic memory allocation, and is managed via
calls to new, delete, malloc, free, etc.
 The stack is used for local variables. Space on the stack is reserved
for local variables when they are declared ( at function entrance or
elsewhere, depending on the language ), and the space is freed up
when the variables go out of scope.
 A program is a passive entity, such as a file containing a list of
instructions stored on disk (often called an executable file).
 In contrast, a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated
resources.
 Although two processes may be associated with the same program,
they are nevertheless considered two separate execution
sequences.
 When a process executes, it passes through different states. The
state of a process is defined in part by the current activity of that
process.
 These stages may differ in different operating systems, and the
names of these states are also not standardized.
 A process may be in one of the following states:
- New: The process is being created.
- Ready: The process is waiting to be assigned to a processor.
- Running: Instructions are being executed.
- Waiting: The process is waiting for some event to occur (such
as an I/O completion or reception of a signal).
- Terminated: The process has finished execution.
 New - The process is in the stage of being created.
 Ready - The process has all the resources available that it needs to run, but
the CPU is not currently working on this process's instructions.
 Running - The CPU is working on this process's instructions.
 Waiting - The process cannot run at the moment, because it is waiting for
some resource to become available or for some event to occur.
 Terminated - The process has completed.
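The five states and their legal transitions can be sketched as a small state machine. This is an illustrative model only (the `Process` class and the transition table are assumptions drawn from the descriptions above, not any real kernel's representation):

```python
# Minimal sketch of the five-state process model described above.
# The allowed-transition table follows the textbook diagram: new -> ready,
# ready -> running, running -> ready/waiting/terminated, waiting -> ready.
ALLOWED = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

class Process:
    def __init__(self):
        self.state = "new"

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.move_to(s)
```

Note that a process never moves directly from waiting to running; it must pass through the ready queue first, which the table above enforces.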
 Each process is represented in the operating system by a process control
block (PCB)—also called a task control block. It contains many pieces of
information associated with a specific process, including these:
►Process state: The state may be new, ready, running, and so on.
►Program counter: It indicates the address of the next instruction to be executed
for this program.
►CPU registers: These vary in number and type based on architecture. They
include accumulators, stack pointers, general-purpose registers, etc. Along with
the program counter, this state information must be saved when an interrupt
occurs, so that the process can be continued correctly afterward.
►CPU scheduling information: This includes process priority, pointers to
scheduling queues and any scheduling parameters.
►Memory-management information: This includes the value of base and limit
registers (protection) and page tables, segment tables depending on memory.
►Accounting information: It includes the amount of CPU and real time used,
account numbers, process numbers, etc.
►I/O status information: It includes the list of I/O devices allocated to this
process, a list of open files, etc.
The architecture of a PCB is completely dependent on
Operating System and may contain different information
in different operating systems.
The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.
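The fields listed above can be summarized in a toy data structure. This is a deliberately simplified, hypothetical PCB for illustration; a real kernel's equivalent (for example, Linux's task_struct) holds far more information:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified PCB mirroring the fields listed above.
@dataclass
class PCB:
    pid: int
    state: str = "new"                                # process state
    program_counter: int = 0                          # next instruction address
    registers: dict = field(default_factory=dict)     # CPU registers
    priority: int = 0                                 # CPU-scheduling information
    base: int = 0                                     # memory-management info
    limit: int = 0
    cpu_time_used: float = 0.0                        # accounting information
    open_files: list = field(default_factory=list)    # I/O status information

pcb = PCB(pid=42, state="ready", priority=5)
```

Each process gets its own PCB instance, so per-process mutable fields such as `open_files` must not be shared between instances (hence `default_factory` rather than a shared default list).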
OS - Process Concepts
A process is a program that performs a single thread of
execution. This single thread of control allows the process to
perform only one task at a time.
Most modern operating systems have extended the process
concept to allow a process to have multiple threads of
execution and thus to perform more than one task at a time.
This feature is especially beneficial on multicore systems,
where multiple threads can run in parallel.
On a system that supports threads, the PCB is expanded to
include information for each thread. Other changes throughout
the system are also needed to support threads.
Process Scheduling
• The two main objectives of the process scheduling
system are to keep the CPU busy at all times and to
deliver "acceptable" response times for all
programs, particularly for interactive ones.
• The process scheduler must meet these objectives
by implementing suitable policies for swapping
processes in and out of the CPU.
 As processes enter the system, they are put into a job queue, which
consists of all processes in the system.
 The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue. This
queue is generally stored as a linked list.
 A ready-queue header contains pointers to the first and final PCBs
in the list. Each PCB includes a pointer field that points to the next
PCB in the ready queue.
 The system also includes other queues. When a process is allocated
the CPU, it executes for a while and eventually quits, is interrupted,
or waits for the occurrence of a particular event, such as the
completion of an I/O request.
 The list of processes waiting for a particular I/O device is called a
device queue. Each device has its own device queue.
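The queue structure above can be sketched in a few lines. This is an illustrative model only: the PCBs are stand-in dicts, and the linked list of the text is replaced by Python's `deque` for brevity:

```python
from collections import deque

# One ready queue plus one device queue per device, as described above.
# Device names and PIDs are made up for illustration.
ready_queue = deque()
device_queues = {"disk0": deque(), "tty0": deque()}

def admit(pcb):
    ready_queue.append(pcb)            # new process joins the ready queue

def dispatch():
    return ready_queue.popleft()       # scheduler picks the first PCB

def request_io(pcb, device):
    device_queues[device].append(pcb)  # process waits in the device queue

def io_complete(device):
    ready_queue.append(device_queues[device].popleft())  # back to ready

admit({"pid": 1}); admit({"pid": 2})
running = dispatch()            # pid 1 gets the CPU
request_io(running, "disk0")    # pid 1 now waits for disk0
io_complete("disk0")            # pid 1 rejoins the ready queue behind pid 2
```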
Ready Queue And Various I/O Device Queues
 A common representation of process scheduling is a queueing
diagram. Each rectangular box represents a queue.
 Two types of queues are present: the ready queue and a set of device
queues. The circles represent the resources that serve the queues,
and the arrows indicate the flow of processes in the system.
 A new process is initially put in the ready queue. It waits there until it is
selected for execution, or dispatched.
 Once the process is allocated the CPU and is executing, one of several
events could occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new child process and wait for the child’s
termination.
 The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from the
waiting state to the ready state and is then put back in the ready
queue.
A process continues this cycle until it terminates, at which time it is
removed from all queues and has its PCB and resources deallocated.
 A process migrates among the various scheduling queues
throughout its lifetime. The operating system must select, for
scheduling purposes, processes from these queues in some
fashion. The selection process is carried out by the appropriate
scheduler.
 A long-term scheduler is typical of a batch system or a very heavily
loaded system. When more processes are submitted than can be
executed immediately, they are spooled to a mass-storage device and
are kept there for later execution. The long-term scheduler, or job
scheduler, selects processes from this pool and loads them into
memory for execution.
 The short-term scheduler, or CPU scheduler, selects from among
the processes that are ready to execute and allocates the CPU to
one of them.
 The primary distinction between these two schedulers lies in
frequency of execution. The short-term scheduler must select a
new process for the CPU frequently (in the order of milliseconds).
The long-term scheduler executes much less frequently; minutes
may separate the creation of one new process and the next.
The long-term scheduler controls the degree of
multiprogramming (the number of processes in memory).
An efficient scheduling system will select a good process mix
of CPU-bound processes and I/O bound processes.
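The long-term scheduler's control over the degree of multiprogramming can be sketched as simple admission control. This is a toy model; the memory limit and job names are made up for illustration:

```python
from collections import deque

# Jobs wait in a spooled job pool; the long-term scheduler admits them
# into memory only while the degree of multiprogramming stays under a
# fixed limit (an assumed limit of 3 for this sketch).
MAX_IN_MEMORY = 3                                    # degree of multiprogramming

job_pool = deque("job%d" % i for i in range(6))      # spooled on mass storage
in_memory = []                                       # processes loaded in memory

def long_term_schedule():
    while job_pool and len(in_memory) < MAX_IN_MEMORY:
        in_memory.append(job_pool.popleft())

long_term_schedule()   # admits job0..job2; job3..job5 remain in the pool
```

When a resident process terminates, the long-term scheduler would run again and admit the next spooled job, keeping the degree of multiprogramming roughly constant.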
 Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling called medium-term
scheduling.
 The key idea behind a medium-term scheduler is that sometimes it can
be advantageous to remove a process from memory (and from active
contention for the CPU) and thus reduce the degree of
multiprogramming.
 Swapping is a mechanism in which a process can be swapped/moved
temporarily out of main memory to a backing store, and then brought
back into memory for continued execution.
 The process is swapped out, and is later swapped in, by the medium-
term scheduler.
 When an interrupt occurs, the system needs to save the current
context of the process running on the CPU so that it can restore that
context when its processing is done, essentially suspending the
process and then resuming it. The context is represented in the PCB
of the process.
 This task of saving the state of one process and restoring the state
of another is known as a context switch.
 Context-switch time is pure overhead, because the system does no
useful work while switching. Switching speed varies from machine
to machine, depending on the memory speed, the number of
registers that must be copied, and the existence of special
instructions.
 Context-switch times are highly dependent on hardware support.
Operations on Processes
• The processes in the system can execute
concurrently, and they must be created and
deleted dynamically. Thus, the operating system
must provide a mechanism (or facility) for process
creation and termination.
• Other operations on processes include deletion,
suspension, resumption, cloning, inter-process
communication, and synchronization.
Process Creation
 During the course of execution, a process may create
several new processes.
 The creating process is called a parent process, and the
new processes are called the children of that process. Each
of these new processes may in turn create other processes,
forming a tree of processes.
 Most operating systems (including UNIX, Linux, and
Windows) identify processes according to a unique process
identifier (or pid), which is typically an integer number.
 The pid provides a unique value for each process in the
system, and it can be used as an index to access various
attributes of a process within the kernel.
 On typical UNIX systems the process scheduler is termed sched, and
is given PID 0. The first thing it does at system startup time is to
launch init, which gives that process PID 1.
 Init then launches all system daemons and user logins, and becomes
the ultimate parent of all other processes.
A typical process tree for a Linux system
 In general, when a process creates a child process, that child process
will need certain resources (CPU time, memory, files, I/O devices) to
accomplish its task.
 A child process may be able to obtain its resources directly from the
operating system, or it may be constrained to a subset of the
resources of the parent process.
 The parent may have to partition its resources among its children, or
it may be able to share some resources (such as memory or files)
among several of its children.
 Restricting a child process to a subset of the parent’s resources
prevents any process from overloading the system by creating too
many child processes.
 In addition to supplying various physical and logical resources, the
parent process may pass along initialization data (input) to the child
process.
 There are two options for the parent process after creating the child:
• Wait for the child process to terminate before proceeding. The parent makes a
wait( ) system call, for either a specific child or for any child, which causes the
parent process to block until the wait( ) returns. UNIX shells normally wait for
their children to complete before issuing a new prompt.
• Run concurrently with the child, continuing to process without waiting. This is
the operation seen when a UNIX shell runs a process as a background task. It
is also possible for the parent to run for a while, and then wait for the child
later, which might occur in a sort of a parallel processing operation. ( E.g. the
parent may fork off a number of children without waiting for any of them,
then do a little work of its own, and then wait for the children. )
 Two possibilities for the address space of the child relative to the parent:
• The child may be an exact duplicate of the parent, sharing the same program
and data segments in memory. Each will have their own PCB, including
program counter, registers, and PID. This is the behavior of the fork in UNIX.
• The child process may have a new program loaded into its address space, with
all new code and data segments. This is the behavior of the spawn system calls
in Windows. UNIX systems implement this as a second step, using the exec
system call.
 The figure shows the fork and exec
process on a UNIX system.
 The fork system call returns a value
in both processes.
 It returns a zero to the child process
and the child’s non-zero PID to the
parent, so the return value indicates
which process is which.
 Process IDs can be looked up any
time for the current process or its
direct parent using the getpid( ) and
getppid( ) system calls respectively.
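The fork/wait pattern above can be demonstrated with Python's `os` module, which wraps the same POSIX system calls (this sketch is UNIX/Linux-only; the exit status of 7 is an arbitrary illustration):

```python
import os

# POSIX-only sketch of fork()/wait(): fork() returns 0 in the child and
# the child's PID in the parent, exactly as described above.
pid = os.fork()
if pid == 0:
    # Child process: fork() returned 0 here; getppid() names the parent.
    os._exit(7)                        # terminate with an arbitrary status
else:
    # Parent process: fork() returned the child's non-zero PID.
    _, status = os.waitpid(pid, 0)     # block until the child terminates
    child_status = os.WEXITSTATUS(status)   # recover the child's exit code
```

The parent's `waitpid()` call is what collects the child's exit status; without it, the terminated child would linger as a zombie until the parent exits.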
Creating a Separate Process via Windows API
The figure shows the more complicated
process for Windows, which must
provide all of the parameter information
for the new process as part of the
CreateProcess() call.
Process Termination
 A process terminates when it finishes executing its final statement
and asks the operating system to delete it by using the exit()
system call.
 All the resources of the process—including physical and virtual
memory, open files, and I/O buffers—are deallocated by the
operating system.
 A process can cause the termination of another process via an
appropriate system call. Such system calls are usually invoked
only by the parent of the process that is to be terminated. A parent
needs to know the identities of its children if it is to terminate
them.
 A Parent may terminate the execution of one of its children for a
variety of reasons, such as these:
1. The child has exceeded its usage of some of the resources that it
has been allocated. (To determine whether this has occurred,
the parent must have a mechanism to inspect the state of its
children.)
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a
child to continue if its parent terminates.
Some systems do not allow a child to exist if its parent has
terminated. In such systems, if a process terminates (either
normally or abnormally), then all its children must also be
terminated. This phenomenon, referred to as cascading
termination, is normally initiated by the operating system.
 When a process terminates, all of its system resources are freed up,
open files flushed and closed, etc.
 The process termination status and execution times are returned to
the parent if the parent is waiting for the child to terminate, or
eventually returned to init if the process becomes an orphan.
 A process that has terminated, but whose parent has not yet called
wait(), is known as a zombie process.
 If a parent terminates without invoking wait(), its child processes
are left as orphans. Linux and UNIX address this scenario by
assigning the init process as the new parent of orphan processes.
 The init process periodically invokes wait(), thereby allowing the exit
status of any orphaned process to be collected and releasing the
orphan’s process identifier and process-table entry.
Interprocess Communication
• Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.
A process is independent if it cannot affect or be
affected by the other processes executing in the system.
Any process that does not share data with any other
process is independent.
A process is cooperating if it can affect or be affected
by the other processes executing in the system. Clearly,
any process that shares data with other processes is a
cooperating process.
There are several reasons for providing an environment that allows
process cooperation:
 Information Sharing - There may be several processes which
need access to the same file. ( e.g. pipelines. )
 Computation speedup - Often a solution to a problem can be
solved faster if the problem can be broken down into sub-tasks to
be solved simultaneously ( particularly when multiple processors
are involved. )
 Modularity - The most efficient architecture may be to break a
system down into cooperating modules. ( E.g. databases with a
client-server architecture. )
 Convenience - Even a single user may be multi-tasking, such as
editing, compiling, printing, and running the same code in
different windows.
Cooperating processes require some type of inter-process communication,
which is most commonly one of two types: Message Passing systems (a) or
Shared Memory systems(b)
 Message Passing requires
system calls for every message
transfer, and is therefore slower,
but it is simpler to set up and
works well across multiple
computers.
 Message passing is generally
preferable when the amount
and/or frequency of data
transfers is small, or when
multiple computers are
involved.
 Shared Memory is faster once it is set up, because no system calls are
required and access occurs at normal memory speeds. However it is more
complicated to set up, and doesn't work as well across multiple computers.
Shared memory is generally preferable when large amounts of information
must be shared quickly on the same computer.
 Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory.
 Typically, a shared-memory region resides in the address space of the
process creating the shared-memory segment.
 Other processes that wish to communicate using this shared-memory
segment must attach it to their address space.
 Shared memory requires that two or more processes agree to remove
the restriction that normally prevents one process from accessing
another process’s memory.
 They can then exchange information by reading and writing data in
the shared areas. The form of the data and the location are
determined by these processes and are not under the operating
system’s control. The processes are also responsible for ensuring that
they are not writing to the same location simultaneously.
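The create-then-attach steps above can be shown with Python's `multiprocessing.shared_memory` module (Python 3.8+). For brevity both the creating side and the attaching side run in this one process; normally the second `SharedMemory(name=...)` call would be made by a different process that learned the segment's name:

```python
from multiprocessing import shared_memory

# The creating process establishes a region of shared memory...
creator = shared_memory.SharedMemory(create=True, size=64)
creator.buf[:5] = b"hello"                  # writer places data in the region

# ...and another process attaches the same segment by name.
attached = shared_memory.SharedMemory(name=creator.name)
received = bytes(attached.buf[:5])          # reader sees the same bytes

attached.close()
creator.close()
creator.unlink()                            # the creator removes the segment
```

Note that, as the text says, the OS imposes no structure on the region: both sides must agree on the data's format and location, and must synchronize their own accesses.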
Producer-Consumer problem is a common paradigm for cooperating
processes in which one process is producing data and another process
is consuming the data.
The producer–consumer problem provides a useful metaphor for the
client–server paradigm: a server can be thought of as a producer and
a client as a consumer.
One solution to the producer–consumer problem uses shared
memory. To allow producer and consumer processes to run
concurrently, we must have available a buffer of items that can be
filled by the producer and emptied by the consumer.
This buffer will reside in a region of memory that is shared by the
producer and consumer processes. A producer can produce one item
while the consumer is consuming another item. The producer and
consumer must be synchronized, so that the consumer does not try to
consume an item that has not yet been produced.
Two types of buffers can be used. The unbounded buffer places
no practical limit on the size of the buffer. The consumer may
have to wait for new items, but the producer can always
produce new items.
The bounded buffer assumes a fixed buffer size. In this case, the
consumer must wait if the buffer is empty, and the producer
must wait if the buffer is full.
A producer tries to insert data into
an empty slot of the buffer. A
consumer tries to remove data
from a filled slot in the buffer. As
you might have guessed by now,
those two processes won't produce
the expected output if they are
being executed concurrently.
The producer process has a local variable next_produced in which
the newly produced item is stored. The consumer process has a
local variable next_consumed in which the item to be consumed is
stored.
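The classic shared-memory bounded buffer can be sketched as a circular array with `in_` and `out` indices. In this scheme the buffer counts as full when `(in_ + 1) % BUFFER_SIZE == out`, so it holds at most BUFFER_SIZE - 1 items; the sketch below is single-threaded, so the synchronization the text warns about is not shown:

```python
# Circular bounded buffer shared by producer and consumer.
BUFFER_SIZE = 5
buffer = [None] * BUFFER_SIZE
in_ = 0   # next free slot, advanced by the producer
out = 0   # next filled slot, advanced by the consumer

def produce(item):
    global in_
    if (in_ + 1) % BUFFER_SIZE == out:
        return False                  # buffer full: producer must wait
    buffer[in_] = item
    in_ = (in_ + 1) % BUFFER_SIZE
    return True

def consume():
    global out
    if in_ == out:
        return None                   # buffer empty: consumer must wait
    item = buffer[out]
    out = (out + 1) % BUFFER_SIZE
    return item

for i in range(4):
    produce(i)    # fills all 4 usable slots of the size-5 buffer
```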
 Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the
same address space.
 It is particularly useful in a distributed environment, where the
communicating processes may reside on different computers
connected by a network.
 A message-passing facility provides at least two operations:
send(message) receive(message)
 Messages sent by a process can be either fixed or variable in size.
 If only fixed-sized messages can be sent, the system-level
implementation is straightforward, but it makes the task of
programming more difficult.
 Variable-sized messages require a more complex system-level
implementation, but the programming task becomes simpler.
If processes P and Q want to communicate, they must send
messages to and receive messages from each other: a
communication link must exist between them. This link can be
implemented in a variety of ways.
Here are several methods for logically implementing a link and
the send()/receive() operations:
 Direct or indirect communication (Naming)
 Synchronous or asynchronous communication
 Automatic or explicit buffering
Processes that want to communicate must have a way to refer to
each other. They can use either direct or indirect communication.
Under direct communication, each process that wants to
communicate must explicitly name the recipient or sender of the
communication. In this scheme, the send() and receive() primitives
are defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
A communication link in this scheme has the following properties:
A link is established automatically between every pair of
processes that want to communicate. The processes need to
know only each other’s identity to communicate.
A link is associated with exactly two processes.
Between each pair of processes, there exists exactly one link.
Naming
The previous scheme exhibits symmetry in addressing; that is, both the
sender process and the receiver process must name the other to
communicate.
A variant of this scheme employs asymmetry in addressing. Here, only
the sender names the recipient; the recipient is not required to name the
sender. In this scheme, the send() and receive() primitives are defined as
follows:
• send(P, message)—Send a message to process P.
• receive(id, message)—Receive a message from any process.
The variable id is set to the name of the process with which
communication has taken place.
 The disadvantage in both of these schemes (symmetric and
asymmetric) is the limited modularity of the resulting process
definitions.
 Any such hard-coding techniques, where identifiers must be explicitly
stated, are less desirable than techniques involving indirection.
 With indirect communication, the messages are sent to and received
from mailboxes, or ports.
 A mailbox can be viewed abstractly as an object into which messages
can be placed by processes and from which messages can be removed.
Each mailbox has a unique identification.
 A process can communicate with another process via a number of
different mailboxes, but two processes can communicate only if they
have a shared mailbox. The send() and receive() primitives are defined
as follows:
 send(A, message)—Send a message to mailbox A.
 receive(A, message)—Receive a message from mailbox A.
 In this scheme, a communication link has the following properties:
 A link is established between a pair of processes only if both members of the
pair have a shared mailbox.
 A link may be associated with more than two processes.
 Between each pair of communicating processes, a number of different links
may exist, with each link corresponding to one mailbox
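The mailbox primitives above can be sketched with a dictionary of named queues. This is an illustrative model only; the mailbox name "A" and the messages are made up:

```python
from collections import deque

# Indirect communication: processes exchange messages only through
# named mailboxes, never by naming each other directly.
mailboxes = {}

def create_mailbox(name):
    mailboxes[name] = deque()

def send(mailbox, message):
    mailboxes[mailbox].append(message)        # send(A, message)

def receive(mailbox):
    q = mailboxes[mailbox]
    return q.popleft() if q else None         # receive(A, message)

create_mailbox("A")
send("A", "ping")       # process P deposits a message in mailbox A
reply = receive("A")    # process Q retrieves it from the shared mailbox
```

Because the link is the mailbox itself, any number of processes can share it, and a pair of processes can hold several links by sharing several mailboxes, as the properties above state.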
Naming – Indirect Communication
 A mailbox may be owned either by a process or by the OS.
 If the mailbox is owned by a process (that is, the mailbox is part of the
address space of the process), then we distinguish between the owner
(which can only receive messages through this mailbox) and the user
(which can only send messages to the mailbox).
 Since each mailbox has a unique owner, there can be no confusion
about which process should receive a message sent to this mailbox.
 When a process that owns a mailbox terminates, the mailbox
disappears. Any process that subsequently sends a message to this
mailbox must be notified that the mailbox no longer exists.
 A mailbox owned by an OS is independent and is not attached to any
process. Then the OS needs to provide a mechanism to do the
following:
 Create a new mailbox.
 Send and receive messages through the mailbox.
 Delete a mailbox.
Synchronization
Communication between processes takes place through calls to
send() and receive() primitives. There are different design options
for implementing each primitive.
Message passing may be either blocking or nonblocking— also
known as synchronous and asynchronous.
 Blocking send: The sending process is blocked until the
message is received by the receiving process or by the mailbox.
 Nonblocking send: The sending process sends the message and
resumes operation.
 Blocking receive: The receiver blocks until a message is
available.
 Nonblocking receive: The receiver retrieves either a valid
message or a null.
Buffering
Whether communication is direct or indirect, messages exchanged
by communicating processes reside in a temporary queue.
Basically, such queues can be implemented in three ways:
 Zero capacity: The queue has a maximum length of zero; thus, the
link cannot have any messages waiting in it. In this case, the sender
must block until the recipient receives the message.
 Bounded capacity: The queue has finite length n; thus, at most n
messages can reside in it. If the queue is not full when a new
message is sent, the message is placed in the queue, and the sender
can continue execution without waiting. If the link is full, the
sender must block until space is available in the queue.
 Unbounded capacity: The queue’s length is potentially infinite;
thus, any number of messages can wait in it. The sender never
blocks.
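The bounded and unbounded cases can be demonstrated with Python's `queue.Queue`, whose `maxsize` parameter plays the role of the link's capacity (zero capacity, a pure rendezvous, has no direct single-threaded stdlib equivalent, so only the other two cases are shown):

```python
import queue

# Bounded capacity: at most 2 messages may wait in the link.
bounded = queue.Queue(maxsize=2)
bounded.put("m1")
bounded.put("m2")
try:
    bounded.put_nowait("m3")    # a blocking put() would wait here instead
    overflowed = False
except queue.Full:
    overflowed = True           # sender must block until space frees up

# Unbounded capacity: maxsize=0 means the sender never blocks.
unbounded = queue.Queue()
for i in range(1000):
    unbounded.put(i)
```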
46
Buffering
I
P
C

More Related Content

PDF
Introduction to Operating Systems
PPTX
Process scheduling
PPTX
Memory Management in OS
PPTX
Operating system - Process and its concepts
PPTX
Introduction to Digital Marketing
PDF
Process scheduling (CPU Scheduling)
PPTX
Introduction to Operating Systems
PPTX
Threads (operating System)
Introduction to Operating Systems
Process scheduling
Memory Management in OS
Operating system - Process and its concepts
Introduction to Digital Marketing
Process scheduling (CPU Scheduling)
Introduction to Operating Systems
Threads (operating System)

What's hot (20)

PPTX
Process management os concept
PPTX
CPU Scheduling in OS Presentation
PPTX
Cpu scheduling in operating System.
PPTX
INTER PROCESS COMMUNICATION (IPC).pptx
PDF
Memory management
PPTX
Os unit 3 , process management
PPTX
Distributed operating system
PPTX
process control block
PPTX
Types Of Operating Systems
PPTX
Input-Buffering
PPT
Memory Management in OS
PPTX
Operating system components
PPTX
SCHEDULING ALGORITHMS
PPTX
Operating system memory management
PPTX
cpu scheduling
PPTX
Demand paging
PPTX
Delay , Loss & Throughput
PPT
Operating system services 9
Process management os concept
CPU Scheduling in OS Presentation
Cpu scheduling in operating System.
INTER PROCESS COMMUNICATION (IPC).pptx
Memory management
Os unit 3 , process management
Distributed operating system
process control block
Types Of Operating Systems
Input-Buffering
Memory Management in OS
Operating system components
SCHEDULING ALGORITHMS
Operating system memory management
cpu scheduling
Demand paging
Delay , Loss & Throughput
Operating system services 9
Ad

Similar to OS - Process Concepts (20)

PDF
Operating System-Concepts of Process
PPT
Ch03- PROCESSES.ppt
PDF
UNIT - 3 PPT(Part- 1)_.pdf
PDF
Chapter 3.pdf
DOC
Operating Systems Unit Two - Fourth Semester - Engineering
PDF
OS-Process.pdf
PPTX
ch2nvbjdvsbjbjfjjfjf j Process Mangement.pptx
PDF
Process And Scheduling Algorithms in os
PPTX
Unit 2...............................................
PPTX
introduction to operating system unit 2
PPTX
UNIT I-Processes.pptx
PPTX
Unit 1 process management operating system.pptx
PPTX
Operating System
PPTX
Operating System process management system.pptx
PPTX
Operating System.pptx
PDF
CSI-503 - 3. Process Scheduling
PDF
Process management- This ppt contains all required information regarding oper...
PDF
process.pdfzljwiyrouyaeutoaetodtusiokklhh
PPTX
UNIT 2 OS.pptx Introduction of Operating System
PPTX
Process Management
Operating System-Concepts of Process
Ch03- PROCESSES.ppt
UNIT - 3 PPT(Part- 1)_.pdf
Chapter 3.pdf
Operating Systems Unit Two - Fourth Semester - Engineering
OS-Process.pdf
ch2nvbjdvsbjbjfjjfjf j Process Mangement.pptx
Process And Scheduling Algorithms in os
Unit 2...............................................
introduction to operating system unit 2
UNIT I-Processes.pptx
Unit 1 process management operating system.pptx
Operating System
Operating System process management system.pptx
Operating System.pptx
CSI-503 - 3. Process Scheduling
Process management- This ppt contains all required information regarding oper...
process.pdfzljwiyrouyaeutoaetodtusiokklhh
UNIT 2 OS.pptx Introduction of Operating System
Process Management
Ad

More from Mukesh Chinta (20)

PDF
CCNA-2 SRWE Mod-10 LAN Security Concepts
PDF
CCNA-2 SRWE Mod-11 Switch Security Configuration
PDF
CCNA-2 SRWE Mod-12 WLAN Concepts
PDF
CCNA-2 SRWE Mod-13 WLAN Configuration
PDF
CCNA-2 SRWE Mod-15 Static IP Routing
PDF
CCNA-2 SRWE Mod-14 Routing Concepts
PDF
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4
PDF
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3
PDF
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2
PDF
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1
PDF
Cisco Cybersecurity Essentials Chapter- 7
PDF
Protocols and Reference models CCNAv7-1
PDF
Basic Switch and End Device configuration CCNA7 Module 2
PDF
Introduction to networks CCNAv7 Module-1
PDF
Operating systems system structures
PDF
Cisco cybersecurity essentials chapter 8
PDF
OS - Process Concepts

  • 2. Process Concept: An operating system executes a variety of programs. Batch systems run jobs; time-shared systems run user programs or tasks. Many modern process concepts are still expressed in terms of jobs (e.g., job scheduling), and the two terms are often used interchangeably.
  • 3. In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Process memory is divided into four sections:
    – The text section comprises the compiled program code, read in from non-volatile storage when the program is launched.
    – The data section stores global and static variables, allocated and initialized prior to executing main.
    – The heap is used for dynamic memory allocation and is managed via calls to new, delete, malloc, free, etc.
    – The stack is used for local variables. Space on the stack is reserved for local variables when they are declared (at function entry or elsewhere, depending on the language) and is freed when the variables go out of scope.
  • 4. A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file). In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources. Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences.
  • 5. When a process executes, it passes through different states. The state of a process is defined in part by the current activity of that process. These states may differ across operating systems, and their names are also not standardized. A process may be in one of the following states:
    – New: the process is being created.
    – Ready: the process is waiting to be assigned to a processor.
    – Running: instructions are being executed.
    – Waiting: the process is waiting for some event to occur (such as an I/O completion or reception of a signal).
    – Terminated: the process has finished execution.
  • 6.
    – New: the process is in the stage of being created.
    – Ready: the process has all the resources it needs to run, but the CPU is not currently working on this process's instructions.
    – Running: the CPU is working on this process's instructions.
    – Waiting: the process cannot run at the moment because it is waiting for some resource to become available or for some event to occur.
    – Terminated: the process has completed.
  • 7. Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including:
    – Process state: the state may be new, ready, running, and so on.
    – Program counter: indicates the address of the next instruction to be executed for this process.
    – CPU registers: these vary in number and type based on the architecture; they include accumulators, stack pointers, general-purpose registers, etc. Along with the program counter, this state information must be saved when an interrupt occurs, so that the process can be continued correctly afterward.
    – CPU-scheduling information: includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
    – Memory-management information: includes the values of the base and limit registers (protection) and the page tables or segment tables, depending on the memory system used.
    – Accounting information: includes the amount of CPU and real time used, account numbers, process numbers, etc.
    – I/O status information: includes the list of I/O devices allocated to this process, a list of open files, etc.
  • 8. The structure of a PCB is entirely dependent on the operating system and may contain different information on different systems. The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
  • 10. A process is a program that performs a single thread of execution. This single thread of control allows the process to perform only one task at a time. Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time. This feature is especially beneficial on multicore systems, where multiple threads can run in parallel. On a system that supports threads, the PCB is expanded to include information for each thread. Other changes throughout the system are also needed to support threads.
  • 11. Process Scheduling • The two main objectives of the process scheduling system are to keep the CPU busy at all times and to deliver "acceptable" response times for all programs, particularly for interactive ones. • The process scheduler must meet these objectives by implementing suitable policies for swapping processes in and out of the CPU.
  • 12. As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list: a ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue. The system also includes other queues. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. The list of processes waiting for a particular I/O device is called a device queue; each device has its own device queue.
  • 13. Ready Queue And Various I/O Device Queues
  • 14. A common representation of process scheduling is a queueing diagram. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
  • 15. A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched. Once the process is allocated the CPU and is executing, one of several events could occur:
    – The process could issue an I/O request and then be placed in an I/O queue.
    – The process could create a new child process and wait for the child's termination.
    – The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
    In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
  • 16. A process migrates among the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection is carried out by the appropriate scheduler. A long-term scheduler, or job scheduler, is typical of a batch system or a very heavily loaded system: when more processes are submitted than can be executed immediately, they are spooled to a mass-storage device and kept there for later execution, and the long-term scheduler selects processes from this pool and loads them into memory for execution. The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.
  • 17. The primary distinction between these two schedulers lies in their frequency of execution. The short-term scheduler must select a new process for the CPU frequently (on the order of milliseconds). The long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory). An efficient scheduling system will select a good mix of CPU-bound and I/O-bound processes.
  • 18. Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling called medium-term scheduling. The key idea is that sometimes it can be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. Swapping is a mechanism by which a process can be moved temporarily out of main memory to a backing store, and then brought back into memory for continued execution. The process is swapped out, and later swapped in, by the medium-term scheduler.
  • 20. When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done, essentially suspending the process and then resuming it. The context is represented in the PCB of the process. Context-switch time is pure overhead, because the system does no useful work while switching. Switching speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions. Context-switch times are thus highly dependent on hardware support.
  • 21. Operations on Processes
    – The processes in the system can execute concurrently, and they must be created and deleted dynamically. Thus, the operating system must provide a mechanism (or facility) for process creation and termination.
    – Other operations on processes include deletion, suspension, resumption, cloning, inter-process communication, and synchronization.
  • 22. Process Creation
    – During the course of execution, a process may create several new processes. The creating process is called a parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes.
    – Most operating systems (including UNIX, Linux, and Windows) identify processes by a unique process identifier (or pid), which is typically an integer. The pid provides a unique value for each process in the system, and it can be used as an index to access various attributes of a process within the kernel.
  • 23. On typical UNIX systems the process scheduler is termed sched and is given PID 0. The first thing it does at system startup is to launch init, which is given PID 1. Init then launches all system daemons and user logins, and becomes the ultimate parent of all other processes. (Figure: a typical process tree for a Linux system.)
  • 24. In general, when a process creates a child process, the child will need certain resources (CPU time, memory, files, I/O devices) to accomplish its task. A child process may be able to obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process. The parent may have to partition its resources among its children, or it may be able to share some resources (such as memory or files) among several of them. Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many child processes. In addition to supplying various physical and logical resources, the parent process may pass along initialization data (input) to the child process.
  • 25. There are two options for the parent process after creating the child:
    – Wait for the child process to terminate before proceeding. The parent makes a wait() system call, for either a specific child or for any child, which causes the parent process to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
    – Run concurrently with the child, continuing to execute without waiting. This is what happens when a UNIX shell runs a process as a background task. It is also possible for the parent to run for a while and then wait for the child later, as in a parallel-processing operation (e.g., the parent may fork off a number of children without waiting for any of them, do a little work of its own, and then wait for the children).
    There are also two possibilities for the address space of the child relative to the parent:
    – The child may be an exact duplicate of the parent, sharing the same program and data segments in memory. Each will have its own PCB, including program counter, registers, and PID. This is the behavior of fork in UNIX.
    – The child process may have a new program loaded into its address space, with all new code and data segments. This is the behavior of the spawn system calls in Windows. UNIX systems implement this as a second step, using the exec system call.
  • 26. The figure shows the fork and exec process on a UNIX system. The fork system call returns a process ID: zero to the child process, and the child's (non-zero) PID to the parent, so the return value indicates which process is which. Process IDs can be looked up at any time for the current process or its direct parent using the getpid() and getppid() system calls, respectively.
  • 27. Creating a Separate Process via Windows API: The figure shows the more involved procedure on Windows, where the caller must provide all of the parameter information for the new process as part of the CreateProcess() call.
  • 28. Process Termination
    – A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call. All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are then deallocated by the operating system.
    – A process can cause the termination of another process via an appropriate system call. Such system calls are usually invoked only by the parent of the process that is to be terminated; a parent needs to know the identities of its children if it is to terminate them.
  • 29. A parent may terminate the execution of one of its children for a variety of reasons, such as:
    1. The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
    2. The task assigned to the child is no longer required.
    3. The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
    Some systems do not allow a child to exist if its parent has terminated. In such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system.
  • 30. When a process terminates, all of its system resources are freed up, open files are flushed and closed, and so on. The process's termination status and execution times are returned to the parent if the parent is waiting for the child to terminate, or eventually returned to init if the process becomes an orphan. A process that has terminated, but whose parent has not yet called wait(), is known as a zombie process. If a parent did not invoke wait() and instead terminated, its child processes are left as orphans. Linux and UNIX address this scenario by assigning the init process as the new parent of orphan processes. The init process periodically invokes wait(), thereby allowing the exit status of any orphaned process to be collected and releasing the orphan's process identifier and process-table entry.
  • 31. Interprocess Communication: Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system; any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system; clearly, any process that shares data with other processes is a cooperating process.
  • 32. There are several reasons for providing an environment that allows process cooperation:
    – Information sharing: several processes may need access to the same data (e.g., pipelines).
    – Computation speedup: a problem can often be solved faster if it can be broken into sub-tasks that are solved simultaneously (particularly when multiple processors are involved).
    – Modularity: the most efficient architecture may be to break a system into cooperating modules (e.g., databases with a client-server architecture).
    – Convenience: even a single user may be multi-tasking, such as editing, compiling, printing, and running the same code in different windows.
  • 33. Cooperating processes require some type of interprocess communication, which is most commonly one of two types: message-passing systems (a) or shared-memory systems (b).
    – Message passing requires system calls for every message transfer, and is therefore slower, but it is simpler to set up and works well across multiple computers. Message passing is generally preferable when the amount and/or frequency of data transfers is small, or when multiple computers are involved.
    – Shared memory is faster once it is set up, because no system calls are required and access occurs at normal memory speeds; however, it is more complicated to set up and does not work as well across multiple computers. Shared memory is generally preferable when large amounts of information must be shared quickly between processes on the same computer.
  • 34. Interprocess communication using shared memory requires the communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment; other processes that wish to communicate using it must attach it to their own address space. Shared memory requires that two or more processes agree to remove the usual restriction that prevents one process from accessing another process's memory. They can then exchange information by reading and writing data in the shared areas. The form and location of the data are determined by these processes and are not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
  • 35. The producer-consumer problem is a common paradigm for cooperating processes, in which one process produces data and another process consumes it. It also provides a useful metaphor for the client-server paradigm: a server can be thought of as a producer and a client as a consumer. One solution to the producer-consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, there must be a buffer of items that can be filled by the producer and emptied by the consumer. This buffer resides in a region of memory shared by the producer and consumer processes. A producer can produce one item while the consumer is consuming another. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
  • 36. Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer: the consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size: the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full. A producer tries to insert data into an empty slot of the buffer; a consumer tries to remove data from a filled slot. Without synchronization, these two processes will not produce the expected output when executed concurrently.
  • 37. The producer process has a local variable next_produced in which the new item to be produced is stored, and the consumer process has a local variable next_consumed in which the item to be consumed is stored.
  • 38. Message passing provides a mechanism that allows processes to communicate and synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be either fixed or variable in size. If only fixed-sized messages can be sent, the system-level implementation is straightforward, but it makes the task of programming more difficult. Variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler.
  • 39. If processes P and Q want to communicate, they must send messages to and receive messages from each other: a communication link must exist between them. This link can be implemented in a variety of ways. Several methods exist for logically implementing a link and the send()/receive() operations:
    – direct or indirect communication (naming)
    – synchronous or asynchronous communication
    – automatic or explicit buffering
  • 40. Naming: Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication. Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:
    – send(P, message): send a message to process P.
    – receive(Q, message): receive a message from process Q.
    A communication link in this scheme has the following properties: a link is established automatically between every pair of processes that want to communicate, and the processes need to know only each other's identity to communicate; a link is associated with exactly two processes; between each pair of processes, there exists exactly one link.
  • 41. The previous scheme exhibits symmetry in addressing; that is, both the sender process and the receiver process must name the other to communicate. A variant of this scheme employs asymmetry in addressing: only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send() and receive() primitives are defined as follows:
    – send(P, message): send a message to process P.
    – receive(id, message): receive a message from any process; the variable id is set to the name of the process with which communication has taken place.
    The disadvantage of both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions. Any such hard-coding technique, where identifiers must be explicitly stated, is less desirable than techniques involving indirection.
  • 42. Indirect Communication: With indirect communication, messages are sent to and received from mailboxes (also referred to as ports). A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed; each mailbox has a unique identification. A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are defined as follows:
    – send(A, message): send a message to mailbox A.
    – receive(A, message): receive a message from mailbox A.
    In this scheme, a communication link has the following properties: a link is established between a pair of processes only if both members of the pair have a shared mailbox; a link may be associated with more than two processes; between each pair of communicating processes, a number of different links may exist, each corresponding to one mailbox.
  • 43. A mailbox may be owned either by a process or by the operating system.
    – If the mailbox is owned by a process (that is, the mailbox is part of the address space of the process), then we distinguish between the owner (which can only receive messages through this mailbox) and the user (which can only send messages to the mailbox). Since each mailbox has a unique owner, there can be no confusion about which process should receive a message sent to it. When a process that owns a mailbox terminates, the mailbox disappears; any process that subsequently sends a message to this mailbox must be notified that it no longer exists.
    – A mailbox owned by the operating system is independent and is not attached to any process. The OS must then provide a mechanism to do the following: create a new mailbox; send and receive messages through the mailbox; delete a mailbox.
  • 45. Synchronization: Communication between processes takes place through calls to the send() and receive() primitives, and there are different design options for implementing each. Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous.
    – Blocking send: the sending process is blocked until the message is received by the receiving process or by the mailbox.
    – Nonblocking send: the sending process sends the message and resumes operation.
    – Blocking receive: the receiver blocks until a message is available.
    – Nonblocking receive: the receiver retrieves either a valid message or a null.
  • 46. Buffering: Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
    – Zero capacity: the queue has a maximum length of zero, so the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
    – Bounded capacity: the queue has finite length n, so at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting; if the link is full, the sender must block until space is available in the queue.
    – Unbounded capacity: the queue's length is potentially infinite, so any number of messages can wait in it. The sender never blocks.