Amrita School of Engineering, Bangalore
Ms. Harika Pudugosula
Teaching Assistant
Department of Electronics & Communication Engineering
• Overview
• Multicore Programming
• Multithreading Models
• Thread Libraries
• Implicit Threading
• Threading Issues
• Operating System Examples
Threading Issues
• Issues to consider in designing multithreaded programs
1. The fork() and exec() system calls
2. Signal Handling
3. Thread Cancellation
4. Thread-Local Storage
5. Scheduler Activations
The fork and exec System calls
• fork() system call - Creates a separate, duplicate child process with a different
process ID
• exec() system call - Replaces the contents of the process with a new program, while
retaining the same process ID
• The semantics of the fork() and exec() system calls change in a multithreaded
program
• If one thread in a program calls fork(), does the new process duplicate all threads,
or is the new process single-threaded?
• Some UNIX systems have chosen to have two versions of fork(), one that duplicates
all threads and another that duplicates only the thread that invoked the fork()
system call
• But which version of fork() should be used, and when?
The fork and exec System calls
• If a thread invokes the exec() system call, the program specified in the parameter to
exec() will replace the entire process—including all threads and LWPs (lightweight
processes, which act as virtual processors connecting user-level threads to kernel threads)
• Which of the two versions of fork() to use depends on the application
• If exec() is called immediately after forking
• Then duplicating all threads is unnecessary, as the program specified in the
parameters to exec() will replace the process
• In this instance, duplicating only the calling thread is appropriate (see the sketch below)
• If the separate process does not call exec() after forking
• Then the separate process should duplicate all threads
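To make the fork-then-exec case concrete, here is a minimal, illustrative C sketch; the program being exec'ed ("ls") and the error handling are assumptions, not part of the original slides. In a multithreaded parent, only the calling thread needs to survive into the child, since exec() discards the duplicated image anyway.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* Child: exec() replaces the entire process image, so any
           threads duplicated by fork() would have been discarded anyway */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);              /* parent waits for the child */
    return 0;
}
```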
Signal Handling
• A signal is used in UNIX systems to notify a process that a particular event has
occurred
• A signal may be received either synchronously or asynchronously
• All signals, whether synchronous or asynchronous, follow the same pattern:
1. A signal is generated by the occurrence of a particular event
2. The signal is delivered to a process
3. Once delivered, the signal must be handled
• Synchronous signals are delivered to the same process that performed the operation
that caused the signal
• Examples of synchronous signals include illegal memory access and division by 0
• Asynchronous signals are generated by an event external to a running process
• Examples include terminating a process with specific keystrokes (such as
<control><C>) and having a timer expire
Signal Handling
• A signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
• Default signal handler - the handler that the kernel runs by default when a
signal occurs
• User-defined signal handler - a handler supplied by the program that overrides the
default handling (see the example below)
• Signals are handled in different ways
• Some signals (such as changing the size of a window) are simply ignored
• Others (such as an illegal memory access) are handled by terminating the
program
• Handling signals in single-threaded programs is straightforward: signals are always
delivered to the process
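As a rough illustration of overriding a default handler, this sketch installs a user-defined handler for SIGINT (whose default action is to terminate the process); the handler and flag names are illustrative assumptions.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

/* User-defined handler: only async-signal-safe work belongs here */
static void on_sigint(int signo) {
    (void)signo;
    got_sigint = 1;
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_sigint;        /* override the default action */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);

    while (!got_sigint)
        pause();                      /* wait for a signal to be delivered */
    printf("caught SIGINT; exiting cleanly\n");
    return 0;
}
```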
Signal Handling
• Delivering signals is more complicated in multithreaded programs, where a process
may have several threads. Where, then, should a signal be delivered?
• In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process
• The method for delivering a signal depends on the type of signal generated
• Synchronous signals need to be delivered to the thread causing the signal and not to
other threads in the process
• Asynchronous signals should be sent to all threads
Signal Handling
• The standard UNIX function for delivering a signal is
kill(pid_t pid, int signal)
This function specifies the process (pid) to which a particular signal (signal) is to be
delivered
• Pthreads provides the following function, which allows a signal to be delivered to a
specified thread (tid), as illustrated in the example below:
pthread_kill(pthread_t tid, int signal)
• Asynchronous Procedure Calls (APCs) allow Windows to provide support for signals.
The APC facility enables a user thread to specify a function that is to be called when
the user thread receives notification of a particular event
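A minimal sketch of thread-directed delivery with pthread_kill(); the worker routine and the choice of SIGUSR1 are assumptions for illustration. Note that the handler table is shared by the whole process, while pthread_kill() chooses which thread receives the signal.

```c
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static void on_usr1(int signo) {
    (void)signo;
    static const char msg[] = "worker thread got SIGUSR1\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
}

static void *worker(void *arg) {
    (void)arg;
    sleep(2);                         /* the signal interrupts this thread only */
    return NULL;
}

int main(void) {
    pthread_t tid;
    struct sigaction sa;
    sa.sa_handler = on_usr1;          /* the handler table is process-wide */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);

    pthread_create(&tid, NULL, worker, NULL);
    pthread_kill(tid, SIGUSR1);       /* deliver to this specific thread */
    pthread_join(tid, NULL);
    return 0;
}
```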
Thread Cancellation
• Terminating a thread before it has finished - Thread Cancellation
• Example: if multiple threads are concurrently searching through a database and one
thread returns the result, the remaining threads might be canceled
• The thread to be canceled is called the target thread
• Cancellation of a target thread may occur in two different scenarios:
• Asynchronous cancellation - One thread immediately terminates the target
thread
• Deferred cancellation - The target thread periodically checks whether it should
terminate
It is safer than asynchronous cancellation because the thread can perform the
check at a point at which it can be canceled safely
- Why is deferred cancellation safer than asynchronous cancellation?
- With asynchronous cancellation, the resources allotted to the thread must still be
reclaimed, which is troublesome. In particular, if a thread is canceled asynchronously
while it is updating data shared with another thread, the shared data may be left in an
inconsistent state.
Thread Cancellation
• In Pthreads, thread cancellation is initiated using the pthread_cancel() function. The
identifier of the target thread is passed as a parameter to the function.
• The following code illustrates creating—and then canceling— a thread:
pthread_t tid;
/* create the thread */
pthread_create(&tid, 0, worker, NULL);
...
/* cancel the thread */
pthread_cancel(tid);
• Pthreads supports three cancellation modes. Each mode is defined as a state and a
type
• Pthreads allows threads to disable or enable cancellation
Thread Cancellation
• Invoking thread cancellation requests cancellation, but actual cancellation depends
on thread state
• If thread has cancellation disabled, cancellation remains pending until thread enables
it
• The default type is deferred: cancellation occurs only when the thread reaches a
cancellation point
- pthread_testcancel() establishes a cancellation point
- A cleanup handler lets a thread release its resources before it is
cancelled (see the sketch after this list)
• On Linux systems, thread cancellation is handled through signals
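The following illustrative sketch ties these pieces together: deferred cancellation (the default type), an explicit cancellation point via pthread_testcancel(), and a cleanup handler that reclaims a resource before the thread exits. The worker loop and buffer are assumptions made for the example.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void cleanup(void *arg) {
    free(arg);                               /* reclaim the thread's resource */
    printf("cleanup handler: buffer released\n");
}

static void *worker(void *arg) {
    (void)arg;
    int old;
    char *buf = malloc(256);                 /* resource to reclaim on cancel */

    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);  /* the default type */
    pthread_cleanup_push(cleanup, buf);      /* runs if the thread is canceled */

    for (int i = 0; i < 10; i++) {
        /* ... perform a unit of work on buf ... */
        pthread_testcancel();                /* explicit cancellation point */
        sleep(1);                            /* sleep() is also a cancellation point */
    }

    pthread_cleanup_pop(1);                  /* pop and run handler if never canceled */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(2);
    pthread_cancel(tid);                     /* request deferred cancellation */
    pthread_join(tid, NULL);                 /* target exits at a cancellation point */
    return 0;
}
```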
Thread-Local Storage
• Generally, the threads belonging to a process share the data of the
process. Sometimes, however, threads need their own copy of certain data. Such data is
called thread-local storage (TLS)
• Example: In a transaction-processing system, a separate thread is created to service each
transaction, and every transaction is assigned a unique identifier. To associate
each thread with its unique identifier, we could use thread-local storage (see the sketch below).
• Different from local variables
• Local variables visible only during single function invocation
• TLS visible across function invocations
• Similar to static data, but TLS is unique to each thread
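A small sketch, in the spirit of the transaction example above, of thread-local storage using the Pthreads key API; the names txn_key and service_transaction are assumptions. (C compilers also offer _Thread_local / __thread storage for the same purpose.)

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t txn_key;                /* one key; a distinct value per thread */

static void *service_transaction(void *arg) {
    long *id = malloc(sizeof *id);
    *id = (long)arg;                         /* this thread's unique transaction id */
    pthread_setspecific(txn_key, id);        /* store in thread-local storage */

    /* Any function run by this thread can now recover its own identifier */
    long *mine = pthread_getspecific(txn_key);
    printf("servicing transaction %ld\n", *mine);
    return NULL;
}

int main(void) {
    pthread_t t[3];
    pthread_key_create(&txn_key, free);      /* destructor frees each thread's copy */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, service_transaction, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    pthread_key_delete(txn_key);
    return 0;
}
```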
Scheduler Activations
• Both the Many-to-Many and Two-level models require
communication between the kernel and the
user-level thread library
• Lightweight process (LWP)
• It is an intermediate data structure that sits
between the user thread library and the kernel thread
• It appears as a virtual processor to the user thread
library
• On this virtual processor, user application can
schedule user thread to run
• Each LWP is attached to a kernel thread
• OS schedules kernel threads to run on physical
processor
• If the kernel thread blocks (for example, on I/O), its LWP
blocks, and the user thread attached to that LWP blocks as well
Scheduler Activations
• Upcall -
• The procedure by which the kernel informs an application about certain events is
known as an upcall
• Upcalls are handled by the thread library with an upcall handler, and upcall
handlers must run on a virtual processor
• Scheduler activation -
• The kernel provides an application with a set of virtual processors (LWPs), and
the application can schedule user threads onto an available virtual processor
• The following sequence of operations takes place when an application thread is about to block:
• The kernel makes an upcall to the application informing it that a thread is about
to block and identifying the specific thread
• The kernel then allocates a new virtual processor to the application
• The application runs an upcall handler on this new virtual processor, which
saves the state of the blocking thread and relinquishes (frees) the virtual
processor on which the blocking thread is running
Scheduler Activations
• Sequence of operations when an application thread is about to block (continued):
• The upcall handler then schedules another thread that is eligible to run on this
free virtual processor
• When the event that the blocking thread was waiting for occurs, the kernel
makes another upcall to the thread library informing it that the previously
blocked thread is now eligible to run
• The upcall handler for this event also requires a virtual processor, and the kernel
may allocate a new virtual processor or preempt one of the user threads and
run the upcall handler on its virtual processor
• After marking the unblocked thread as eligible to run, the application schedules
an eligible thread to run on an available virtual processor
Operating System Examples
• Windows Threads
• Linux Threads
Windows Threads
• Windows implements the Windows API – primary API for Win 98, Win NT, Win
2000, Win XP, and Win 7
• A Windows application runs as a separate process, and each process may contain
one or more threads
• Windows uses a one-to-one mapping, where each user-level thread maps to an
associated kernel thread (see the sketch below)
• The general components of a thread include:
• A thread id uniquely identifying the thread
• Register set representing state of processor
• Separate user and kernel stacks for when thread runs in user mode or kernel
mode
• Private data storage area used by run-time libraries and dynamic link libraries
(DLLs)
• The register set, stacks, and private storage area are known as the context of the
thread
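As a rough illustration of the one-to-one model mentioned above, the sketch below uses the Windows API call CreateThread(); the worker routine is an assumption, and error handling is minimal.

```c
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID param) {
    (void)param;
    printf("worker thread id: %lu\n", GetCurrentThreadId());
    return 0;
}

int main(void) {
    DWORD tid;                                     /* unique thread id */
    HANDLE h = CreateThread(NULL,                  /* default security */
                            0,                     /* default stack size */
                            worker, NULL,          /* start routine + argument */
                            0,                     /* run immediately */
                            &tid);
    if (h == NULL) return 1;
    WaitForSingleObject(h, INFINITE);              /* wait for the thread to finish */
    CloseHandle(h);
    return 0;
}
```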
Windows Threads
• The primary data structures of a thread include:
• ETHREAD—executive thread block
• The key components of the ETHREAD include a pointer to the process to
which the thread belongs and the address of the routine in which the thread
starts control.
• The ETHREAD also contains a pointer to the corresponding KTHREAD
• KTHREAD—kernel thread block
• The KTHREAD includes scheduling and synchronization information for the
thread
• The KTHREAD includes the kernel stack (used when the thread is running in
kernel mode) and a pointer to the TEB
• The ETHREAD and the KTHREAD exist entirely in kernel space; this means that
only the kernel can access them
Windows Threads
• The primary data structures of a thread include:
• TEB—thread environment block
• The TEB is a user-space data structure that is accessed when the thread is
running in user mode
• The TEB contains the thread identifier, a user-mode stack, and an array for
thread-local storage
Linux Threads
• Linux provides the fork() system call with the traditional functionality of duplicating
a process
• Linux also provides the ability to create threads using the clone() system call
• Linux does not distinguish between processes and threads; it uses the term task
—rather than process or thread— when referring to a flow of control within a
program
• When clone() is invoked, it is passed a set of flags that determine how much
sharing is to take place between the parent and child tasks, as sketched below
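An illustrative sketch of clone() creating a thread-like task: the flag set below (CLONE_VM, CLONE_FS, CLONE_FILES, CLONE_SIGHAND) requests the same sharing a thread would have; the child function, stack size, and the SIGCHLD termination signal are assumptions made for the example.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int child_fn(void *arg) {
    (void)arg;
    printf("child task running, pid = %d\n", getpid());
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); exit(1); }

    /* Thread-like sharing: address space, filesystem information,
       open files, and signal handlers are shared with the parent */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

    /* The stack grows downward on most architectures, so pass its top */
    pid_t tid = clone(child_fn, stack + STACK_SIZE, flags, NULL);
    if (tid == -1) { perror("clone"); exit(1); }

    waitpid(tid, NULL, 0);            /* wait for the child task to finish */
    free(stack);
    return 0;
}
```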
References
1. Silberschatz and Galvin, "Operating System Concepts," Ninth Edition,
John Wiley and Sons, 2012.
Thank you
