Inter-Process Communication
   and Synchronization

         Prof. RUCHI SHARMA
               BVCOE
              NEW DELHI

Introduction
• A fundamental feature of modern operating
   systems is the concurrent execution of
   processes/threads. This feature is essential for
   multiprogramming, multiprocessing, distributed
   systems, and the client-server model of
   computation.
• Concurrency encompasses many design issues
   including communication and synchronization
   among processes, sharing of and contention for
   resources.
• In this discussion we will look at the various design
   issues/problems and the wide variety of solutions
   available.
Topics for discussion
• The principles of concurrency
• Interactions among processes
• Mutual exclusion problem
• Mutual exclusion: solutions
   – Software approaches (Dekker’s and Peterson’s)
   – Hardware support (test-and-set atomic operation)
   – OS solution (semaphores)
   – PL solution (monitors)
   – Distributed OS solution (message passing)
• Reader/writer problem
• Dining Philosophers Problem


Principles of Concurrency
 • Interleaving and overlapping the execution of
    processes.
 • Consider two processes P1 and P2 executing
    the function echo:
  void echo() {
    input (in, keyboard);    // read a character into shared variable in
    out = in;
    output (out, display);   // write shared variable out to the display
  }
...Concurrency (contd.)
•   P1 invokes echo and, after it inputs into in, gets interrupted (switched out).
    P2 invokes echo, inputs into in, completes its execution and exits. When P1
    resumes, its in has been overwritten and is gone. Result: the first character
    is lost and the second character is written twice.
•   This situation is even more likely in multiprocessing systems, where real
    concurrency is realized through multiple processes executing on multiple
    processors.
•   Solution: controlled access to the shared resource
     – Protect the shared resource (the in buffer): a “critical resource”
     – Allow only one process at a time in the code that uses it: a “critical region”
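•   A minimal C++ sketch (not from the slides) of this lost-character scenario:
    two threads stand in for P1 and P2, the shared variables are made atomic only
    so the demo stays well defined, and yield() merely makes the bad interleaving
    likely.

      #include <atomic>
      #include <iostream>
      #include <thread>

      std::atomic<char> in{'?'};    // shared "input" variable, as on the slide
      std::atomic<char> out{'?'};   // shared "output" variable

      void echo(char typed) {
          in.store(typed);              // (1) input into shared in
          std::this_thread::yield();    // invite a context switch between (1) and (2)
          out.store(in.load());         // (2) may pick up the other thread's character
          std::cout << out.load();      // (3) display
      }

      int main() {
          std::thread p1(echo, 'a');
          std::thread p2(echo, 'b');
          p1.join();
          p2.join();
          std::cout << '\n';            // possible outputs include ab, ba, aa, bb
      }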
Interactions among processes
•    In a multi-process application, these are the possible degrees of
     interaction:
    1. Competing processes : the processes themselves do not share
         anything, but the OS has to share system resources such as a
         disk, file, or printer among these processes “competing” for
         them.
       Co-operating processes : results of one or more processes may
         be needed by another process. Two forms:
    2. Co-operation by sharing : example: sharing of an I/O buffer;
         uses the concept of a critical section. (indirect)
    3. Co-operation by communication : typically no data sharing, but
         co-ordination through synchronization becomes essential in
         certain applications. (direct)
Interactions ...(contd.)
 • Among the three kinds of interactions
      indicated by 1, 2 and 3 above:
 •    1 is at the system level: potential problems :
      deadlock and starvation.
 •    2 is at the process level : significant problem
      is in realizing mutual exclusion.
 •    3 is more a synchronization problem.
 •    We will study mutual exclusion and
      synchronization here, and defer deadlock
      and starvation to a later time.
Race Condition
• Race condition: the situation where several
  processes access and manipulate shared data
  concurrently. The final value of the shared data
  depends on which process finishes last.
• To prevent race conditions, concurrent
  processes must be synchronized.
Mutual exclusion problem
 • Successful use of concurrency among
   processes requires the ability to define
   critical sections and enforce mutual
   exclusion.
 • Critical section: the part of the process code
   that accesses the shared resource.
 • Mutual exclusion: access to the shared resource
   is made mutually exclusive among the processes
   that share it.
 • This is also known as the Critical Section (CS)
   problem.
Mutual exclusion
• Any facility that provides mutual exclusion
  should meet these requirements:
1. No assumption regarding the relative speeds of
  the processes.
2. A process is in its CS for a finite time only.
3. Only one process allowed in the CS.
4. A process requesting access to its CS should not
  wait indefinitely (no starvation).
5. A process waiting to enter its CS must not block
  a process that is inside its CS, or any other
  process.
Software Solutions: Algorithm 1
 • Shared variable: int turn = 0;
 • Process 0:
       ...
       while (turn != 0) ;   // busy waiting
       <critical section>
       turn = 1;
       ...
 • Process 1:
       ...
       while (turn != 1) ;   // busy waiting
       <critical section>
       turn = 0;
       ...
 Problems: strict alternation, busy waiting.
Algorithm 2
•    Shared variables: boolean flag[2] = {FALSE, FALSE};
•    PROCESS 0:
         ...
         flag[0] = TRUE;
         while (flag[1]) do nothing;   // busy waiting
         <CRITICAL SECTION>
         flag[0] = FALSE;
•    PROCESS 1:
         ...
         flag[1] = TRUE;
         while (flag[0]) do nothing;   // busy waiting
         <CRITICAL SECTION>
         flag[1] = FALSE;

PROBLEM: potential for deadlock. If both processes set their
 flags before either tests the other's, each waits forever; and if
 a process fails within its CS, the other remains blocked.
Algorithm 3
 • Combines the shared variables of algorithms 1 and 2
   (Peterson’s algorithm).
 • Process Pi (the other process is Pj):
          do {
             flag[i] = true;
             turn = j;
             while (flag[j] && turn == j) ;   // busy wait
               critical section
             flag[i] = false;
               remainder section
          } while (1);
 • Solves the critical-section problem for two
   processes.
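 • A runnable C++ sketch of the algorithm above (assumptions: threads rather
   than processes, and std::atomic with its default sequentially consistent
   ordering standing in for the slide's plain shared variables; the loop count
   and counter are illustrative).

      #include <atomic>
      #include <iostream>
      #include <thread>

      std::atomic<bool> flag[2] = {false, false};
      std::atomic<int>  turn{0};
      int counter = 0;                        // the shared resource

      void worker(int i) {
          int j = 1 - i;
          for (int k = 0; k < 100000; ++k) {
              flag[i] = true;
              turn = j;
              while (flag[j] && turn == j) ;  // busy wait
              ++counter;                      // critical section
              flag[i] = false;                // leave CS
          }
      }

      int main() {
          std::thread p0(worker, 0), p1(worker, 1);
          p0.join(); p1.join();
          std::cout << counter << '\n';       // 200000 if mutual exclusion held
      }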
Synchronization Hardware
• Test and modify the content of a word atomically.

      boolean TestAndSet(boolean &target) {
          boolean rv = target;
          target = true;
          return rv;
      }
Mutual Exclusion with Test-and-Set
• Shared data:
           boolean lock = false;

• Process Pi
           do {
             while (TestAndSet(lock)) ;   // busy wait
               critical section
             lock = false;
               remainder section
           } while (1);
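• A runnable C++ sketch of this spin lock (an assumption: std::atomic_flag's
  test_and_set() plays the role of the hardware-supported atomic instruction
  the slide describes; thread and loop counts are illustrative).

      #include <atomic>
      #include <iostream>
      #include <thread>

      std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;  // shared lock, initially clear
      int counter = 0;

      void worker() {
          for (int k = 0; k < 100000; ++k) {
              while (lock_flag.test_and_set()) ;      // busy wait until lock was clear
              ++counter;                              // critical section
              lock_flag.clear();                      // lock = false
          }
      }

      int main() {
          std::thread t1(worker), t2(worker);
          t1.join(); t2.join();
          std::cout << counter << '\n';               // 200000 if mutual exclusion held
      }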
Synchronization Hardware
• Atomically swap two variables.

      void Swap(boolean &a, boolean &b) {
          boolean temp = a;
          a = b;
          b = temp;
      }
Mutual Exclusion with Swap
•    Shared data (initialized to false):
              boolean lock;

•    Process Pi (key is a local variable):
               do {
                 key = true;
                 while (key == true)
                     Swap(lock, key);
                    critical section
                 lock = false;
                    remainder section
               } while (1);
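•    The same idea compiles directly against std::atomic<bool>::exchange(),
     which atomically swaps in true and returns the old value (a sketch; the
     loop count and counter are illustrative).

      #include <atomic>
      #include <iostream>
      #include <thread>

      std::atomic<bool> lock_var{false};
      int counter = 0;

      void worker() {
          for (int k = 0; k < 100000; ++k) {
              while (lock_var.exchange(true)) ;   // key = old lock value; retry while it was true
              ++counter;                          // critical section
              lock_var.store(false);              // lock = false
          }
      }

      int main() {
          std::thread t1(worker), t2(worker);
          t1.join(); t2.join();
          std::cout << counter << '\n';
      }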
Semaphores
• Think about a semaphore as a class
• Attributes: semaphore value, Functions: init,
  wait, signal
• Support provided by OS
• Considered an OS resource: only a limited number of
  semaphore instances (objects) is available.
• Can easily implement mutual exclusion among
  any number of processes.
Critical Section of n Processes

• Shared data:
       Semaphore mutex;   // initially mutex = 1

• Process Pi:

      do {
         mutex.wait();
            critical section
         mutex.signal();
            remainder section
      } while (1);
Semaphore Implementation
•   Define a semaphore as a class:
class Semaphore
{ int value;          // semaphore value
   ProcessQueue L;    // queue of processes blocked on this semaphore
   // operations
    wait()
    signal()
}
• In addition, two simple utility operations:
     – block() suspends the process that invokes it.
     – wakeup(P) resumes the execution of a blocked process P.
Semantics of wait and signal
 •   Semaphore operations are now defined as:
        S.wait():
                  S.value--;
                  if (S.value < 0) {
                            add this process to S.L;
                            block();     // block this process
                  }

           S.signal():
                     S.value++;
                     if (S.value <= 0) {
                            remove a process P from S.L;
                            wakeup(P);   // wake up process P
                     }
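 •   A compilable C++ sketch of this Semaphore class, assuming a thread (rather
     than process) setting. It uses the equivalent formulation in which value
     never goes negative: the condition variable's internal queue plays the role
     of S.L and of block()/wakeup().

      #include <condition_variable>
      #include <mutex>

      class Semaphore {
          int value;
          std::mutex m;                  // protects value
          std::condition_variable cv;    // holds the blocked waiters
      public:
          explicit Semaphore(int init) : value(init) {}

          void wait() {
              std::unique_lock<std::mutex> lk(m);
              cv.wait(lk, [this]{ return value > 0; });  // block() while no permit
              --value;
          }

          void signal() {
              { std::lock_guard<std::mutex> lk(m); ++value; }
              cv.notify_one();                           // wakeup(P)
          }
      };

 •   Usage matches the earlier slide: Semaphore mutex(1); then mutex.wait();
     critical section; mutex.signal();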
Semaphores for CS
• Semaphore is initialized to 1. The first process that
  executes a wait() will be able to immediately enter the
  critical section (CS). (S.wait() makes S value zero.)
• Other processes wanting to enter the CS will each execute wait(),
  decrementing the value of S, and will get blocked on S. (If at any
  time the value of S is negative, its absolute value gives the
  number of blocked, waiting processes.)
• When the process in the CS departs, it executes S.signal(), which
  increments the value of S and wakes up one of the blocked
  processes. The queue could be FIFO or a priority queue.
Two Types of Semaphores
• Counting semaphore – integer value
  can range over an unrestricted
  domain.
• Binary semaphore – integer value
  can range only between 0 and 1; can
  be simpler to implement (e.g., Nachos).
• Can implement a counting
  semaphore using binary
  semaphores.
Semaphore for Synchronization

• Execute B in Pj only after A has executed in Pi.
• Use a semaphore flag initialized to 0.
• Code:
             Pi                    Pj
             ...                   ...
             A                     flag.wait();
             flag.signal();        B
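• A small C++ sketch of this ordering pattern using C++20's
  std::binary_semaphore (acquire/release correspond to wait/signal; the
  printed messages are illustrative).

      #include <iostream>
      #include <semaphore>
      #include <thread>

      std::binary_semaphore flag{0};       // initialized to 0

      void Pi() {
          std::cout << "A runs first\n";   // A
          flag.release();                  // flag.signal()
      }

      void Pj() {
          flag.acquire();                  // flag.wait(): blocks until Pi signals
          std::cout << "B runs second\n";  // B
      }

      int main() {
          std::thread tj(Pj), ti(Pi);
          ti.join(); tj.join();
      }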
Classical Problems of
 Synchronization

• Bounded-Buffer Problem

• Readers and Writers Problem

• Dining-Philosophers Problem




Producer/Consumer problem
• Producer:
      repeat
          produce item v;
          b[in] = v;
          in = in + 1;
      forever;
• Consumer:
      repeat
          while (in <= out) nop;   // wait for an item
          w = b[out];
          out = out + 1;
          consume w;
      forever;
Solution for P/C using Semaphores
•   Producer:
        repeat
            produce item v;
            MUTEX.wait();
            b[in] = v;
            in = in + 1;
            MUTEX.signal();
        forever;
•   Consumer:
        repeat
            while (in <= out) nop;   // busy wait for an item
            MUTEX.wait();
            w = b[out];
            out = out + 1;
            MUTEX.signal();
            consume w;
        forever;
•   What if the producer is slow or late? Ans: the consumer will
    busy-wait at the while statement.
P/C: improved solution
•   Producer:
        repeat
            produce item v;
            MUTEX.wait();
            b[in] = v;
            in = in + 1;
            MUTEX.signal();
            AVAIL.signal();
        forever;
•   Consumer:
        repeat
            AVAIL.wait();
            MUTEX.wait();
            w = b[out];
            out = out + 1;
            MUTEX.signal();
            consume w;
        forever;
•   What will be the initial values of MUTEX and AVAIL?
    ANS: initially MUTEX = 1, AVAIL = 0.
P/C problem: Bounded buffer
• Producer:
      repeat
          produce item v;
          while ((in + 1) % n == out) NOP;   // buffer full
          b[in] = v;
          in = (in + 1) % n;
      forever;
• Consumer:
      repeat
          while (in == out) NOP;             // buffer empty
          w = b[out];
          out = (out + 1) % n;
          consume w;
      forever;
• How to enforce bufsize? ANS: using another counting semaphore.
P/C: Bounded Buffer solution
•   Producer:
        repeat
            produce item v;
            BUFSIZE.wait();
            MUTEX.wait();
            b[in] = v;
            in = (in + 1) % n;
            MUTEX.signal();
            AVAIL.signal();
        forever;
•   Consumer:
        repeat
            AVAIL.wait();
            MUTEX.wait();
            w = b[out];
            out = (out + 1) % n;
            MUTEX.signal();
            BUFSIZE.signal();
            consume w;
        forever;
•   What is the initial value of BUFSIZE? ANS: the size of the
    bounded buffer.
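•   A compilable C++ sketch of this bounded-buffer solution, with C++20
    counting semaphores as BUFSIZE/AVAIL and a mutex as MUTEX (the buffer
    size n = 4 and the ten items are illustrative choices).

      #include <iostream>
      #include <mutex>
      #include <semaphore>
      #include <thread>

      constexpr int n = 4;
      int b[n];
      int in = 0, out = 0;

      std::counting_semaphore<n> BUFSIZE(n);   // free slots, initially n
      std::counting_semaphore<n> AVAIL(0);     // filled slots, initially 0
      std::mutex MUTEX;                        // protects b, in, out

      void producer() {
          for (int v = 1; v <= 10; ++v) {      // produce items 1..10
              BUFSIZE.acquire();               // BUFSIZE.wait()
              { std::lock_guard<std::mutex> lk(MUTEX);
                b[in] = v; in = (in + 1) % n; }
              AVAIL.release();                 // AVAIL.signal()
          }
      }

      void consumer() {
          for (int k = 0; k < 10; ++k) {
              AVAIL.acquire();                 // AVAIL.wait()
              int w;
              { std::lock_guard<std::mutex> lk(MUTEX);
                w = b[out]; out = (out + 1) % n; }
              BUFSIZE.release();               // BUFSIZE.signal()
              std::cout << "consumed " << w << '\n';
          }
      }

      int main() {
          std::thread p(producer), c(consumer);
          p.join(); c.join();
      }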
Semaphores - comments
• Intuitively easy to use.
• wait() and signal() are to be implemented as atomic
  operations.
• Difficulties:
   – signal() and wait() may be exchanged
     inadvertently by the programmer. This may result
     in deadlock or violation of mutual exclusion.
   – signal() and wait() may be left out.
• Related wait() and signal() may be scattered all over
  the code among the processes.

Monitors
• Monitor is a predecessor of the “class” concept.
• Initially it was implemented as a programming-language
  construct and more recently as a library. The latter made the
  monitor facility available for general use with any PL.
• A monitor consists of procedures, initialization sequences, and
  local data. Local data is accessible only through the monitor's
  procedures. Only one process can be executing in a monitor at a
  time; other processes that need the monitor wait, suspended.
Monitors
           monitor monitor-name
           {
             shared variable declarations

             procedure body P1 (…) { . . . }
             procedure body P2 (…) { . . . }
             procedure body Pn (…) { . . . }

             {
               initialization code
             }
           }
Monitors
• To allow a process to wait within the monitor, a
   condition variable must be declared, as
                condition x, y;
• A condition variable can only be used with the
   operations wait and signal.
    – The operation
                    x.wait();
        means that the process invoking this operation
        is suspended until another process invokes
                  x.signal();
    – The x.signal operation resumes exactly one
        suspended process. If no process is
        suspended, then the signal operation has no
        effect.
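• A minimal C++ sketch of a monitor written as a class (the "library" approach
   mentioned earlier): the mutex enforces one-thread-at-a-time, and two
   std::condition_variable members play the roles of condition x, y. Note that
   C++ condition variables have signal-and-continue semantics, so waiters
   re-check a predicate rather than being resumed exactly once as on the slide.
   The single-slot buffer is an illustrative example, not from the slides.

      #include <condition_variable>
      #include <mutex>

      class SingleSlotBuffer {                      // a monitor
          std::mutex m;                             // one thread inside at a time
          std::condition_variable x, y;             // condition x, y
          int slot;
          bool full = false;
      public:
          void put(int v) {                         // monitor procedure
              std::unique_lock<std::mutex> lk(m);
              x.wait(lk, [this]{ return !full; });  // x.wait() until there is room
              slot = v;
              full = true;
              y.notify_one();                       // y.signal()
          }
          int get() {                               // monitor procedure
              std::unique_lock<std::mutex> lk(m);
              y.wait(lk, [this]{ return full; });   // y.wait() until an item exists
              full = false;
              x.notify_one();                       // x.signal()
              return slot;
          }
      };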
Schematic View of a Monitor
(figure omitted)
Monitor With Condition Variables
(figure omitted)
Message passing
• Both synchronization and communication
  requirements are taken care of by this
  mechanism.
• Moreover, this mechanism lends itself to
  synchronization among distributed
  processes.
• Basic primitives are:
      send(destination, message);
      receive(source, message);
Issues in message passing
•    Send and receive: could be blocking or non-blocking:
      – Blocking send: when a process sends a message it blocks
        until the message is received at the destination.
      – Non-blocking send: After sending a message the sender
        proceeds with its processing without waiting for it to reach
        the destination.
      – Blocking receive: When a process executes a receive it
        waits blocked until the receive is completed and the required
        message is received.
      – Non-blocking receive: The process executing the receive
        proceeds without waiting for the message(!).
•    Blocking Receive/non-blocking send is a common combination.

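•    A C++ sketch of the common combination above (non-blocking send, blocking
     receive), using an in-process Channel built from a queue, mutex, and
     condition variable; the Channel type is an illustrative stand-in, not an
     OS message-passing API.

      #include <condition_variable>
      #include <iostream>
      #include <mutex>
      #include <queue>
      #include <string>
      #include <thread>

      class Channel {
          std::queue<std::string> q;
          std::mutex m;
          std::condition_variable cv;
      public:
          void send(const std::string& msg) {      // non-blocking send: enqueue and return
              { std::lock_guard<std::mutex> lk(m); q.push(msg); }
              cv.notify_one();
          }
          std::string receive() {                  // blocking receive: wait for a message
              std::unique_lock<std::mutex> lk(m);
              cv.wait(lk, [this]{ return !q.empty(); });
              std::string msg = q.front();
              q.pop();
              return msg;
          }
      };

      int main() {
          Channel ch;
          std::thread receiver([&]{ std::cout << ch.receive() << '\n'; });
          ch.send("hello");                        // the sender proceeds immediately
          receiver.join();
      }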
Reader/Writer problem
• Data is shared among a number of processes.
• Any number of reader processes could be accessing the
  shared data concurrently.
• But when a writer process wants to access, only that process
  must be accessing the shared data. No reader should be
  present.
• Solution 1: readers have priority. If a reader is in the CS, any
  number of readers can enter, irrespective of any writer waiting to
  enter the CS.
• Solution 2: if a writer wants the CS, it enters as soon as the CS
  becomes available (writers have priority).
Reader/writer: Priority Readers
 • Writer:
       ForCS.wait();
       CS;
       ForCS.signal();
 • Reader:
       ES.wait();
       NumRdr = NumRdr + 1;
       if (NumRdr == 1) ForCS.wait();    // first reader locks out writers
       ES.signal();
       CS;
       ES.wait();
       NumRdr = NumRdr - 1;
       if (NumRdr == 0) ForCS.signal();  // last reader lets writers back in
       ES.signal();
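 • A compilable C++ sketch of this readers-priority scheme: ES becomes a
   std::mutex, ForCS becomes a C++20 binary semaphore (a semaphore, unlike a
   mutex, may be released by a different thread than the one that acquired it,
   which the last reader relies on). Thread counts and the shared data are
   illustrative.

      #include <iostream>
      #include <mutex>
      #include <semaphore>
      #include <thread>
      #include <vector>

      std::mutex ES;                    // protects NumRdr
      std::binary_semaphore ForCS{1};   // held by a writer or by the group of readers
      int NumRdr = 0;
      int shared_data = 0;

      void writer() {
          ForCS.acquire();              // ForCS.wait()
          ++shared_data;                // CS
          ForCS.release();              // ForCS.signal()
      }

      void reader(int id) {
          ES.lock();
          if (++NumRdr == 1) ForCS.acquire();   // first reader locks out writers
          ES.unlock();

          std::cout << "reader " << id << " sees " << shared_data << '\n';  // CS

          ES.lock();
          if (--NumRdr == 0) ForCS.release();   // last reader lets writers back in
          ES.unlock();
      }

      int main() {
          std::vector<std::thread> ts;
          ts.emplace_back(writer);
          for (int i = 0; i < 3; ++i) ts.emplace_back(reader, i);
          ts.emplace_back(writer);
          for (auto& t : ts) t.join();
      }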
Dining Philosophers Example
     monitor dp
     {
       enum {thinking, hungry, eating} state[5];
       condition self[5];

       void pickup(int i)     // following slides
       void putdown(int i)    // following slides
       void test(int i)       // following slides

       void init() {
          for (int i = 0; i < 5; i++)
                state[i] = thinking;
       }
     }
Dining Philosophers
           void pickup(int i) {
               state[i] = hungry;
               test(i);
               if (state[i] != eating)
                    self[i].wait();
           }

           void putdown(int i) {
               state[i] = thinking;
               // test left and right neighbors
               test((i+4) % 5);
               test((i+1) % 5);
           }
Dining Philosophers

           void test(int i) {
              if ( (state[(i + 4) % 5] != eating) &&
                   (state[i] == hungry) &&
                   (state[(i + 1) % 5] != eating) ) {
                   state[i] = eating;
                   self[i].signal();
              }
           }
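 • A compilable C++ sketch of the dp monitor above as a class: std::mutex
   provides the monitor's mutual exclusion and one condition_variable per
   philosopher stands in for condition self[5]; waiting on a predicate covers
   C++'s signal-and-continue semantics.

      #include <condition_variable>
      #include <mutex>

      class DiningPhilosophers {
          enum State { thinking, hungry, eating };
          State state[5];
          std::mutex m;                     // the monitor lock
          std::condition_variable self[5];  // condition self[5]

          void test(int i) {                // may promote philosopher i to eating
              if (state[(i + 4) % 5] != eating &&
                  state[i] == hungry &&
                  state[(i + 1) % 5] != eating) {
                  state[i] = eating;
                  self[i].notify_one();     // self[i].signal()
              }
          }
      public:
          DiningPhilosophers() { for (auto& s : state) s = thinking; }

          void pickup(int i) {
              std::unique_lock<std::mutex> lk(m);
              state[i] = hungry;
              test(i);
              self[i].wait(lk, [&]{ return state[i] == eating; });  // self[i].wait()
          }

          void putdown(int i) {
              std::unique_lock<std::mutex> lk(m);
              state[i] = thinking;
              test((i + 4) % 5);            // test left and right neighbors
              test((i + 1) % 5);
          }
      };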
Summary
• We looked at various ways/levels of realizing
  synchronization among concurrent
  processes.
• Synchronization at the kernel level is usually
  achieved with mechanisms such as interrupt
  priority levels, basic hardware locks, a
  non-preemptive kernel (older BSDs), or
  special signals.
