Processes and Threads
   in Windows Vista
Agenda
•   Fundamental Concepts
•   IPC
•   Synchronization
•   Implementation
    – Process Creation
    – Scheduling
Fundamental Concepts
• In Windows Vista, processes are containers for
  programs.
• Each process includes:
  – A virtual address space
  – Handles to kernel-mode objects
  – Threads and the resources needed for their execution
• Each process has user-mode system data called
  the process environment block (PEB), which includes:
  – The list of loaded modules
  – The current working directory
  – Pointers to the process's heaps
Fundamental Concepts
• Threads are the kernel's abstraction for
  scheduling the CPU
• Threads can also be affinitized to run only on
  certain processors
• Each thread has two separate call stacks, one
  for execution in user mode and one for
  kernel mode
Fundamental Concepts
• There is also a thread environment block
  (TEB) that keeps user-mode data, including:
  – Thread-local storage
  – Per-thread bookkeeping fields
• Another data structure shared with kernel mode
  is the user shared data, which contains
  various time values, version info, the amount of
  physical memory, and a number of shared flags.
Fundamental Concepts
                    Process
• Processes are created from section objects, each
  of which describes a memory object backed
  by a file on disk.
• Creating a process involves:
  – Mapping sections into the new process
  – Allocating virtual memory
  – Writing parameters and environment data
  – Duplicating file descriptors
  – Creating threads.
Fundamental Concepts
                     Jobs and Fibers
• Definition: a job is a group of processes.
• The main function of a job is to apply constraints to
  the threads it contains, such as:
   – Limiting resource usage
   – Preventing threads from accessing system objects by
     enforcing a restricted token
• Once a process is in a job, all threads that its
  processes create will also be in the job.
• Problem: a process can be in only one job, so there
  would be conflicts if many jobs attempted to manage
  the same process.
Fundamental Concepts
                         Fibers
• Definition: A fiber is a unit of execution that must
  be manually scheduled by the application
• Fibers are created by allocating a stack and a
  user-mode fiber data structure for storing
  registers and data; they can also be created
  independently of threads
• A fiber will not run until another running fiber in
  the thread makes an explicit call to SwitchToFiber.
• Advantage:
   – Switching between fibers is easier and faster than
     switching between threads
Fundamental Concepts
                  Jobs and Fibers

• Disadvantage: a lot of synchronization is needed
  to make sure fibers do not interfere with each
  other.
• Solution: create only as many threads as
  there are processors to run them, and
  affinitize the threads to each run only on a
  distinct set of available processors.
Fundamental Concepts
                     Threads
• Every process starts out with one thread and can
  create more threads dynamically.
• The OS always selects a thread to run, never a process
• Every thread has a scheduling state, whereas processes
  do not.
• Each thread has an ID, which is taken from the
  same space as process IDs
• The system uses a handle table with pointer
  fields to look up a process or thread by ID.
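Since the slides describe the ID lookup path without showing it, here is a small illustrative sketch (in Python, as the deck's own snippets are Windows-only MFC): a table mapping process and thread IDs, which share one ID space, to object pointers. The IDs and strings are made up for illustration.

```python
# Hypothetical ID table: process and thread IDs come from one shared
# space and are resolved to kernel objects through a lookup table.
id_table = {}

def register(obj_id, obj):
    id_table[obj_id] = obj

def lookup(obj_id):
    # Returns None for a stale or unknown ID, mimicking a failed lookup.
    return id_table.get(obj_id)

register(1234, "process object")   # a process ID...
register(1238, "thread object")    # ...and a thread ID from the same space
print(lookup(1238))
```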
Fundamental Concepts
                       Threads
• There are two types of threads: normal threads
  and system threads
• Normal thread:
  – Normally runs in user mode, but switches to
    kernel mode when it makes a system call.
  – Has two stacks: one for user mode and
    one for kernel mode.
• System thread:
  – Runs only in kernel mode
  – Not associated with any user process
Fundamental Concepts
                        Threads
• A thread's state includes its CONTEXT (the set of
  registers), a kernel stack, a thread environment
  block, and a user stack in the address space of
  the thread's process.
• Running threads
   – normally use the access token of their containing process
   – in client/server work, a server thread may act on a client's behalf
• Threads are also the normal focal point for I/O.
• Any thread is able to access all the objects that
  belong to its process.
Fundamental Concepts
InterProcess Communication IPC
            Overview
 1 - How does one process pass information to
       another in Windows Vista?

 2 - There are six (6) main ways to pass
       information from process to process.
InterProcess Communication IPC
                         Windows Vista
   Windows Vista provides mechanisms for facilitating
    communication and data sharing between applications.
   Typically, applications that use IPC are categorized
    as clients or servers
    – Client: an application or process that requests a service from some other
      application or process.
    – Server: an application or process that responds to client requests
   As developers, we must choose suitable IPC methods to use
    for our applications.
InterProcess Communication IPC
                            Windows Vista
                                PIPES
 Two types (anonymous pipes and named pipes)
 Pipes have two modes:
  – Byte mode: works the same way as in UNIX. A child process can inherit a
    communication channel from its parent: data written to one end of the pipe can be
    read at the other.
  – To exchange data in both directions (duplex operation), you must create two pipes.
  – Message mode: somewhat similar, but it preserves message boundaries so separate
    messages can be recognized. If we send 4 writes of 128 bytes each, they are read as 4
    separate 128-byte messages, not as one 512-byte message.
 Named pipes: have the same two modes as regular pipes, but named pipes can be used over a
  network.
  – Used to transfer data between unrelated processes on different computers.
  – Data is exchanged by performing read and write operations on the pipe.
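Since byte mode is said to work the same way as in UNIX, the behavior can be sketched with a UNIX-style pipe (a Python illustration of byte-mode semantics, not the Windows pipe API):

```python
import os

# Byte-mode semantics: the pipe is a plain byte stream, so write
# boundaries are not preserved (message mode would keep them).
r, w = os.pipe()
os.write(w, b"ping")     # first 4-byte write
os.write(w, b"pong")     # second 4-byte write
os.close(w)

data = os.read(r, 1024)  # both writes arrive as one 8-byte stream
os.close(r)
print(data)
```

A message-mode pipe would instead return the two writes as two separate 4-byte messages.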
InterProcess Communication IPC
                            Windows Vista
                            MAILSLOTS
   A feature of the OS/2 operating system, implemented in
    Windows for compatibility.
   Similar to pipes, but not identical.
   One-way communication.
   Two-way communication is possible.
    – A process can be both a mailslot server and a mailslot client
   Can broadcast a message to many receivers instead of one
    (if they use the same mailslot name).
   Can be used over a network, but delivery of messages is
    not guaranteed.
   Mailslots and named pipes are implemented as file systems.
    – This allows them to be accessed over the network using existing remote file-system protocols.
InterProcess Communication IPC
                      Windows Vista
                        SOCKETS
   Connect processes on different machines.
   Generally used in a networking context.
   Provide two-way communication channels.
   Can also be used to connect processes on the same machine,
    although this is more complicated than using pipes.
   A READ/WRITE on a socket transfers data between the two
    processes.
   Windows Sockets are based on the sockets first popularized
    by the Berkeley Software Distribution (BSD).
InterProcess Communication IPC
                     Windows Vista
                  remote procedure calls
 RPC enables applications to call functions remotely.
 Operates between processes on a single computer or
  on different computers on a network.
 Supports data conversion for different hardware
  architectures and for byte ordering between
  dissimilar environments.
 In Windows, the transport can take many forms:
  – TCP/IP sockets.
  – Named pipes.
  – Advanced Local Procedure Call (ALPC) – a message-passing
    facility in kernel mode.
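The byte-ordering conversion can be illustrated with a tiny marshalling sketch (Python's struct module stands in for the RPC runtime's serializer; the value is arbitrary):

```python
import struct

# Marshalling sketch: arguments are packed into a canonical wire format
# (here big-endian/network order) so that machines with different native
# byte orders interpret them identically.
value = 0x12345678
wire = struct.pack("!I", value)          # serialize in network byte order
decoded = struct.unpack("!I", wire)[0]   # the receiver unpacks it back
print(hex(decoded))
```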
InterProcess Communication IPC
                     Windows Vista
                        SHARED FILES
 Processes   can share objects
  – This includes section objects, which can be mapped into
    the virtual address spaces of different processes at
    the same time.
  – All writes done by one process then appear in the
    address spaces of the other processes.
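A section object mapped by two processes can be approximated with a named shared-memory block (a portable Python sketch, not the Windows section-object API; both handles live in one process here purely for illustration):

```python
from multiprocessing import shared_memory

# "Process A" creates the block; "process B" maps the same block by name.
a = shared_memory.SharedMemory(create=True, size=16)
b = shared_memory.SharedMemory(name=a.name)

a.buf[:5] = b"hello"     # a write through one mapping...
seen = bytes(b.buf[:5])  # ...is immediately visible through the other
print(seen)

b.close()
a.close()
a.unlink()               # remove the block when done
```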
SYNCHRONIZATION
Critical section
•   The simplest type of thread synchronization object
•   Can be used only by the threads of a single process (unlike a mutex)
•   Provides a slightly faster, more efficient mechanism for mutual-exclusion
    synchronization
•   There is no way to tell whether a critical section has been abandoned
•   Critical sections are not kernel-mode objects
•   Suppose a program has two threads that share a linked list. One writes to
    the list, and the other reads from it. To prevent the two threads from
    accessing the list at exactly the same time, you can protect the list with a
    critical section. The following example uses a globally declared
    CCriticalSection object to demonstrate how:

    // Global data
    CCriticalSection g_cs;
    .
    .
    // Thread A                        // Thread B
    g_cs.Lock ();                      g_cs.Lock ();
    // Write to the linked list.       // Read from the linked list.
    g_cs.Unlock ();                    g_cs.Unlock ();

• Figure 17-3. Protecting a shared resource with
  a critical section
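The same Lock/Unlock pattern can be mirrored portably (a sketch using Python's threading.Lock in place of the Windows-only CCriticalSection):

```python
import threading

g_cs = threading.Lock()   # plays the role of CCriticalSection g_cs
linked_list = []

def thread_a():
    for i in range(1000):
        with g_cs:                 # Lock() ... Unlock()
            linked_list.append(i)  # write to the list

def thread_b(out):
    with g_cs:                     # the reader takes the same lock
        out.append(len(linked_list))

t = threading.Thread(target=thread_a)
t.start()
t.join()
snapshot = []
thread_b(snapshot)
print(snapshot[0])
```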
mutex
• Mutexes are kernel-mode objects
• Used to gain exclusive access to a resource shared by two or more
  threads
• Can be used to synchronize threads running in the same
  process or in different processes
• Suppose two applications use a block of shared memory to
  exchange data. Inside that shared memory is a linked list that
  must be protected against concurrent thread accesses. A
  critical section won't work because it can't reach across
  process boundaries, but a mutex will do the job nicely. Here's
  what you do in each process before reading or writing the
  linked list:
    // Global data
    CMutex g_mutex (FALSE, _T ("MyMutex"));

    g_mutex.Lock ();
    // Read or write the linked list.
    g_mutex.Unlock ();
•   The first parameter passed to the CMutex constructor specifies whether the mutex
    is initially locked (TRUE) or unlocked (FALSE).
•   The second parameter specifies the mutex's name, which is required if the mutex
    is used to synchronize threads in two different processes.
•   You pick the name, but both processes must specify the same name so that the
    two CMutex objects will reference the same mutex object in the Windows kernel.
•   Naturally, Lock blocks on a mutex locked by another thread, and Unlock frees the
    mutex so that others can lock it.
• Note that there is one other difference between mutexes
  and critical sections: if a thread locks a critical section
  and terminates without unlocking it, other threads waiting
  for the critical section to come free will block indefinitely.
  However, if a thread that locks a mutex fails to unlock it
  before terminating, the system deems the mutex to be
  "abandoned" and automatically frees the mutex so that
  waiting threads can resume.
EVENTS
• Events are a way of signaling one thread
  from another, allowing one thread to wait or
  sleep until it is signaled by another thread.
• The producing thread, Thread A,
  generates some data and puts it in a shared
  working space. In this example, the
  consuming thread, Thread B, is sleeping on
  the event (waiting for the event to trigger):
  (Diagram: Thread A writes data into the shared working space while
  Thread B sleeps on the event.)
•      Once the producing thread has finished
    writing data, it triggers the event.


• This signals the consuming thread, Thread B,
  thereby waking it up.

• Once the consuming thread, Thread B, has
  woken up, it starts doing work. The
  assumption is that the producing thread will
  no longer touch the data.
•   Here's an example involving one thread (Thread A) that fills a buffer
    with data and another thread (Thread B) that does something with that
    data. Assume that Thread B must wait for a signal from Thread A saying
    that the buffer is initialized and ready to go. An auto-reset event is
    the perfect tool for the job:
    // Global data
    CEvent g_event; // Auto-reset, initially nonsignaled
    .
    .
    // Thread A
    InitBuffer (&buffer); // Initialize the buffer.
    g_event.SetEvent (); // Release thread B.
    .
    .
    // Thread B
    g_event.Lock (); // Wait for the signal.
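The auto-reset pattern above can be sketched portably (an assumption: Python's threading.Event is manual-reset, so the waiter clears it itself to imitate auto-reset, and InitBuffer is replaced by a plain list fill):

```python
import threading

g_event = threading.Event()   # nonsignaled to start
buffer, result = [], []

def thread_a():
    buffer.extend([1, 2, 3])  # initialize the buffer
    g_event.set()             # SetEvent: release thread B

def thread_b():
    g_event.wait()            # Lock: sleep until signaled
    g_event.clear()           # emulate auto-reset by consuming the signal
    result.append(sum(buffer))

b = threading.Thread(target=thread_b)
b.start()
a = threading.Thread(target=thread_a)
a.start()
a.join()
b.join()
print(result[0])
```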
SEMAPHORE
•   A semaphore is like a mutex that more than one thread can hold at a time
•   Events, critical sections, and mutexes are "all or nothing" objects in the sense that
    Lock blocks on them if any other thread has them locked. Semaphores are
    different.
•   A semaphore maintains a resource count representing the number of resources available
•   Locking a semaphore decrements its resource count, and unlocking a semaphore
    increments the resource count
•   Can be used to synchronize threads within a process or threads that belong to
    different processes.
•               To limit the time that Lock will wait for the
    semaphore's resource count to become nonzero, you
    can pass a maximum wait time (in milliseconds, as
    always) to the Lock function.




• Figure 17-6. Using a semaphore to guard a shared
  resource.
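The resource-count behavior can be sketched with Python's threading.Semaphore (an initial count of 2 is chosen arbitrarily for illustration):

```python
import threading

sem = threading.Semaphore(2)          # two resources available

first = sem.acquire(blocking=False)   # count 2 -> 1
second = sem.acquire(blocking=False)  # count 1 -> 0
third = sem.acquire(blocking=False)   # count is 0, so this Lock fails
print(first, second, third)

sem.release()                         # Unlock: count 0 -> 1
fourth = sem.acquire(blocking=False)  # succeeds again
print(fourth)
```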
Dynamic-link library (DLL): a dynamically loaded library that
applications share.
API: an application programming interface; it supports communication
between processes.
Subsystem: provides the operating environment for applications.
Section object: represents a region of memory that can be shared.
Process object: used to inspect and control a process.
Thread object: used to control a thread.
Process Environment Block (PEB): holds data used throughout the
lifetime of a process.
Thread Environment Block (TEB): holds the data of running
threads.
A new process is created when another process makes
a Win32 CreateProcess call.
There are seven steps in creating a new process:

1. Convert the Win32 pathname to an NT pathname.
2. Open the EXE file and create a section object.
3. Create the process object.
4. Create the thread object.
5. Perform checks.
6. Shim the application if necessary.
Scheduling
                        Content
1. Overview
2. Scheduling Priorities
3. Scheduling Algorithm
4. Priority Boosting
5. Priority Inversion
Scheduling
                               Overview
• Windows schedules threads, not processes.

• The Scheduler is called when:

   –   The currently running thread blocks on a semaphore, mutex, event, I/O, …
   –   The thread signals an object (e.g., does an up on a semaphore)
   –   The quantum expires
   –   An I/O operation completes
   –   A timed wait expires
Scheduling
                                  Overview
•    Scheduling is preemptive, priority-based, and round-robin at the highest priority.
•    Scheduler tries to keep a thread on its ideal processor in order to improve
     performance




      (a) Symmetric Multiprocessing (SMP)
      (b) Non-Uniform Memory Access (NUMA)


    • Processes/Threads can specify affinity mask to run only on certain processors:
      SetProcessAffinityMask(), SetThreadAffinityMask(), …
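The affinity mask passed to SetProcessAffinityMask/SetThreadAffinityMask is a plain bitmask in which bit n permits processor n. A sketch (the helper name is made up for illustration):

```python
def affinity_mask(processors):
    """Build a mask with bit n set for each allowed processor n."""
    mask = 0
    for p in processors:
        mask |= 1 << p
    return mask

# Allow the thread on processors 0, 2 and 3 only:
mask = affinity_mask([0, 2, 3])
print(bin(mask))
```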
Scheduling
                            Scheduling Priorities
•   Threads are scheduled to run based on their scheduling priority.
    Each thread is assigned a scheduling priority.
•   The priority levels range from zero (lowest priority) to 31 (highest priority),
    with one ready queue associated with each of the 32 levels.
     – Priorities are divided into 6 priority classes that are applied to processes.
     – Within each class, there are 7 priority levels that are applied to the threads of that process.
•   Base priority (of a thread) = F(priority class, priority level) = a constant.
•   Dynamic priority = base priority + boost amount; this value is used to determine which
    thread to execute.
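The formula can be shown as a one-liner (illustrative only: the clamp to 0..31 reflects the 32 priority levels, but the real Windows mapping has more rules):

```python
def dynamic_priority(base, boost):
    # Dynamic priority = base priority + boost, kept inside the 0..31 range.
    return max(0, min(31, base + boost))

print(dynamic_priority(8, 6))   # e.g. a keyboard-input boost of +6
print(dynamic_priority(30, 8))  # a large boost still cannot exceed 31
```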
Scheduling
                    Scheduling Algorithm
• The system treats all threads with the same priority as equal.

• The system assigns time slices in a round-robin fashion to all
  threads with the highest priority.

   – If none of these threads are ready to run, the system assigns time slices in a
     round-robin fashion to all threads with the next highest priority.

   – If a higher-priority thread becomes available to run, the system ceases to
     execute the lower-priority thread (without allowing it to finish using its time
     slice), and assigns a full time slice to the higher-priority thread.
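The round-robin rule above can be sketched as a pick-next loop over 32 ready queues (a simulation of the policy, not the real dispatcher):

```python
from collections import deque

queues = [deque() for _ in range(32)]   # one ready queue per priority

def ready(thread, priority):
    queues[priority].append(thread)

def pick_next():
    # Scan from priority 31 down; round-robin within the first
    # non-empty queue by rotating the chosen thread to the back.
    for prio in range(31, -1, -1):
        if queues[prio]:
            t = queues[prio].popleft()
            queues[prio].append(t)
            return t
    return None  # nothing ready: the system idles

ready("A", 8)
ready("B", 8)
ready("C", 4)
order = [pick_next() for _ in range(4)]
print(order)   # the priority-4 thread never runs while the 8s are ready
```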
Scheduling
    Scheduling Algorithm




Figure 11-28. Thread priorities in Windows Vista
Scheduling
                             Priority Boosting
•   Only applied to non-real-time threads
    that belong to:
     – the GUI foreground
       (GUI input, mouse messages, …)
     – threads waking for an event
       (disk-operation completion +1, keyboard
       input +6, sound +8, …)


•   After raising a thread's dynamic
    priority, the scheduler reduces that
    priority by one level each time the
    thread completes a time slice, until
    the thread drops back to its base
    priority.
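The decay rule can be traced in a few lines (base 8 and boost +3 are arbitrary example values):

```python
def decay_trace(base, boost):
    # After a boost, the thread loses one level per completed time
    # slice until it returns to its base priority.
    prio = base + boost
    trace = [prio]
    while prio > base:
        prio -= 1
        trace.append(prio)
    return trace

print(decay_trace(8, 3))
```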
Scheduling
                                 Priority Inversion
•   Priority inversion occurs when a
    mutex or critical section held by a
    lower-priority thread delays the
    running of a higher-priority thread
    when both are contending for the
    same resource.
     – Thread 1 has high priority.
     – Thread 2 has medium priority.
     – Thread 3 has low priority.
•   Win32 solution: randomly boost the priority of the ready threads (in this
    case, the low-priority lock holders).
     – The low-priority threads run long enough to exit the critical section, and the high-priority
       thread can then enter the critical section.
     – If the low-priority thread does not get enough CPU time to exit the critical section the first
       time, it will get another chance during the next round of scheduling.
Summary
•   Fundamental Concepts
•   IPC
•   Synchronization
•   Implementation
    • Process Creation
    • Scheduling              Q&A

More Related Content

PPTX
Linux Memory Management
PPTX
A simple approach of lexical analyzers
PPTX
Monolithic kernel
PPTX
CPU Scheduling in OS Presentation
PPTX
Multi Processors And Multi Computers
PPTX
Software engineering: design for reuse
PPTX
Peephole optimization techniques in compiler design
PPTX
Memory Organization
Linux Memory Management
A simple approach of lexical analyzers
Monolithic kernel
CPU Scheduling in OS Presentation
Multi Processors And Multi Computers
Software engineering: design for reuse
Peephole optimization techniques in compiler design
Memory Organization

What's hot (20)

PPTX
Unified process model
PPTX
INTER PROCESS COMMUNICATION (IPC).pptx
PPTX
Structure of the page table
PPTX
First Come First Serve (FCFS).pptx
PDF
Distributed Operating System_1
PPT
Kernel mode vs user mode in linux
PPTX
Computer Organization : CPU, Memory and I/O organization
PPTX
SCHEDULING ALGORITHMS
PPTX
Protection Domain and Access Matrix Model -Operating System
PPTX
Threads .ppt
PPTX
contiguous memory allocation.pptx
PPTX
Directory structure
PPTX
Semaphore
PPTX
Structure of shared memory space
PPTX
System call
PPT
Concurrent process
PDF
parallel Questions & answers
PPTX
Methods for handling deadlock
PDF
CPU Scheduling
Unified process model
INTER PROCESS COMMUNICATION (IPC).pptx
Structure of the page table
First Come First Serve (FCFS).pptx
Distributed Operating System_1
Kernel mode vs user mode in linux
Computer Organization : CPU, Memory and I/O organization
SCHEDULING ALGORITHMS
Protection Domain and Access Matrix Model -Operating System
Threads .ppt
contiguous memory allocation.pptx
Directory structure
Semaphore
Structure of shared memory space
System call
Concurrent process
parallel Questions & answers
Methods for handling deadlock
CPU Scheduling
Ad

Viewers also liked (12)

PPTX
Windows process-scheduling
PPTX
CPU scheduling algorithms in OS
PPT
OS Process and Thread Concepts
PPT
Os..
PPTX
SSL Layer
PPT
Window scheduling algorithm
PPTX
Thread scheduling in Operating Systems
PPT
Operating System Chapter 4 Multithreaded programming
PPT
Operating System-Threads-Galvin
PPT
The Windows Scheduler
PPTX
Presentation on Android operating system
Windows process-scheduling
CPU scheduling algorithms in OS
OS Process and Thread Concepts
Os..
SSL Layer
Window scheduling algorithm
Thread scheduling in Operating Systems
Operating System Chapter 4 Multithreaded programming
Operating System-Threads-Galvin
The Windows Scheduler
Presentation on Android operating system
Ad

Similar to Processes and Threads in Windows Vista (20)

PPTX
OS Module-2.pptx
PPT
Chapter 6 os
PDF
Inter-Process-Communication (or IPC for short) are mechanisms provid.pdf
PDF
4 threads
PPTX
Lecture 3- Threads (1).pptx
PPT
Ch04 threads
PPT
EMBEDDED OS
PPTX
Chapter04-OS.pptx........................
PDF
Lecture 3- Threads.pdf
PDF
DISTRIBUTED SYSTEM CHAPTER THREE UP TO FIVE.pdf
PPTX
Chapter 3 chapter reading task
PDF
The Thread Chapter 4 of Operating System
PDF
Threads operating system slides easy understand
PPTX
PDF
threads (1).pdfmjlkjfwjgliwiufuaiusyroayr
PPTX
Topic 4- processes.pptx
PPTX
Threads and Processes in Operating Systems.pptx
PPTX
W-9.pptx
PPTX
Scheduling Thread
PPTX
OS Module-2.pptx
Chapter 6 os
Inter-Process-Communication (or IPC for short) are mechanisms provid.pdf
4 threads
Lecture 3- Threads (1).pptx
Ch04 threads
EMBEDDED OS
Chapter04-OS.pptx........................
Lecture 3- Threads.pdf
DISTRIBUTED SYSTEM CHAPTER THREE UP TO FIVE.pdf
Chapter 3 chapter reading task
The Thread Chapter 4 of Operating System
Threads operating system slides easy understand
threads (1).pdfmjlkjfwjgliwiufuaiusyroayr
Topic 4- processes.pptx
Threads and Processes in Operating Systems.pptx
W-9.pptx
Scheduling Thread

Recently uploaded (20)

PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PPTX
sap open course for s4hana steps from ECC to s4
DOCX
The AUB Centre for AI in Media Proposal.docx
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PPTX
Cloud computing and distributed systems.
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
PPTX
Programs and apps: productivity, graphics, security and other tools
PPTX
MYSQL Presentation for SQL database connectivity
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PPTX
Big Data Technologies - Introduction.pptx
PDF
Encapsulation theory and applications.pdf
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Unlocking AI with Model Context Protocol (MCP)
PPTX
Spectroscopy.pptx food analysis technology
PDF
Machine learning based COVID-19 study performance prediction
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
Diabetes mellitus diagnosis method based random forest with bat algorithm
sap open course for s4hana steps from ECC to s4
The AUB Centre for AI in Media Proposal.docx
Understanding_Digital_Forensics_Presentation.pptx
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Cloud computing and distributed systems.
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
Programs and apps: productivity, graphics, security and other tools
MYSQL Presentation for SQL database connectivity
The Rise and Fall of 3GPP – Time for a Sabbatical?
Big Data Technologies - Introduction.pptx
Encapsulation theory and applications.pdf
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Mobile App Security Testing_ A Comprehensive Guide.pdf
Unlocking AI with Model Context Protocol (MCP)
Spectroscopy.pptx food analysis technology
Machine learning based COVID-19 study performance prediction
20250228 LYD VKU AI Blended-Learning.pptx
Agricultural_Statistics_at_a_Glance_2022_0.pdf

Processes and Threads in Windows Vista

  • 1. Processes and Threads in Windows Vista
  • 2. Agenda • Fundamental Concepts • IPC • Synchronization • Implementation – Process Creation – Scheduling
  • 3. Fundamental Concepts • In window Vista processes are containers for programs. • Each process includes: – Virtual address space – Handles to kernel-mode objects – Threads and resources to threads execution • Each process have user-mode system data called the process environment block (PEB), includes: – List of loaded modules – The current working directory – Pointer to process’ heaps
  • 4. Fundamental Concepts • Threads are kernel’s abstraction for scheduling the CPU • Threads can also be affinitized to only run on certain processors • Each thread has two separate call stacks, one for execution in user-mode and one for kernel-mode
  • 5. Fundamental Concepts • There is also a threads environment block (TEB) that keeps user-mode data, includes: – Thread local storages – Fields • Another data structure that kernel-mode shared is user shared data which is contains various form of time, version info, amount of physical memory, number of shared flags.
  • 6. Fundamental Concepts Process • Process are created from section objects, each of which describes a memory object backed by a file on disk. • Create process: – Modify a new process by mapping section – Allocating virtual memory – Writing parameters and environmental data – Duplicating file descriptors – Creating threads.
  • 7. Fundamental Concepts Jobs and Fibers • Definition: Jobs is a group of processes. • The main functions of a job is to constraints to the threads they contain such at: – Limiting resources – Prevents threads from accessing system objects by enforcing restricted token • Once a process in a jobs, all process threads in those process create will also be in the job. • Problems: one process can be in one job, there will be conflicts if many jobs attempt to manage the same process.
  • 8. Fundamental Concepts Fibers • Definition: A fiber is a unit of execution that must be manually scheduled by the application • Fibers are created by allocating a stack and a user-mode fiber data structure for storing registers and data can also be created independently of threads • Fibers will not run until another running fiber in thread make explicitly call SwithToFiber. • Advantage: – It easier and take fewer time to switch between fiber than threads
  • 9. Fundamental Concepts Jobs and Fibers • Disadvantage: need a lot of synchronization to make sure fibers do not interface with each other. • Solution: create only as many threads as there are processors to run them, and affinitize the threads to each run only on a distinct set of available processors.
  • 10. Fundamental Concepts Jobs and Fibers
  • 11. Fundamental Concepts Threads • Every process start out with one threads and can create threads dynamically. • OS always selected threads to run not a process • Every threads have state whereas processes do not have scheduling states. • Each thread has an ID, which is taken from the same space as process IDs • System use a handle table, which keep pointer field to lookup of a process and thread by ID.
  • 12. Fundamental Concepts Threads • There are two types of threads: normal thread and system thread • Normal thread: – A thread normally run in user-mode but when make a system call it switch to kernel-mode . – Each thread has two stacks: one for user-mode and one for kernel-mode. • System thread: – Run only in kernel-mode – Not associated with any user process
  • 13. Fundamental Concepts Threads • The thread CONTEXT includes the thread's set of registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process. • Running threads – Using access token of their containing process – Client/server • Threads are also normal focal point for I/O. • Any thread is able to access all the objects that belong to its process.
  • 15. InterProcess Communication IPC Overview 1 -How one process passes information to another in Windows Vista? 2 – Six (6) main ways to pass information between process and process.
  • 16. InterProcess Communication IPC Windows Vista  Windows Vista provides mechanisms for facilitating(thuận tiện) communications and data sharing between applications.  Typically, applications can use IPC for categorized (phân loại) as client or server – Client: a application or process that requests a service from some other applications or processes. – Server: a application or process that responds client request  As developers, we must choose suitable IPC methods to use for our applications.
  • 17. InterProcess Communication IPC Windows Vista PIPES  Two types (Pipes and Named pipes)  Pipes have two modes: – Byte-mode: work the same ways as in UNIX – Allows a child process to inherit a communication channels from its parents: data written to one end of the pipe can be read at other. – To exchange data in both directions (duplex operation), you must create two pipes. – Message-mode: is somewhat similar but preserve message boundaries. To recognize the separated messages. If we send 4 writes of 128-byte, it will read as 4 message separated with 128-byte each, but not one message with 512-bytes.  Named pipes: also have to modes as regular pipes. But named pipes can be used over a network. – Use for transfer data between unrelated process on different computers. – Exchange data by performing read and write operations on the pipe.
  • 18. InterProcess Communication IPC Windows Vista MAILSLOTS  Function of OS/2 operating system but implemented in Windows for capability.  Similar to pipes, but not all.  Is one-way communication.  Two-way communication is possible. – A process can be both a mailslot server and a mailslot client  Can broadcast a message to many receivers instead of one (if they have same mailslots name).  Use over a network but not guarantee the delivery of message.  Mailslots and named pipes are implemented as file system. – Allow them to access over the network using existed remote file system protocol.
  • 19. InterProcess Communication IPC Windows Vista SOCKETS  Connect processes on different machines.  Generally used in networking context.  2 ways of communication channels.  Also can be used to connect processes on same machine, however it more complicated than pipes.  READ/WRITE on socket is transferring data between 2 processes.  Windows Sockets are based on the sockets first popularized by Berkeley Software Distribution (BSD)..
  • 20. InterProcess Communication IPC Windows Vista remote procedure calls  RPC enables applications to call functions remotely.  Operates between processes on a single computer or on different computers on a network.  Supports data conversion for different hardware architectures and for byte-ordering between dissimilar environments.  In Windows, transport could be many ways: – TCP/IP sockets. – Named pipes. – Advanced Local Procedure Call (ALDC) – message-passing facility in the kernel mode.
  • 21. InterProcess Communication IPC Windows Vista SHARE FILE  Process can share object –Includes section objects, which can be mapped into the virtual address space of different processes at the same time. –All writes done by one process then appear in the address spaces of the other processes.
  • 23. Critical section • The simplest type of thread synchronization object • Can be used only by the threads of a single process (differ from Mutex) • Provide a slightly faster, more efficient mechanism for mutual-exclusion synchronization • There is no way to tell whether a critical section has been abandoned • Critical sections are not kernel-mode objects
• 24. One thread writes to the list, and the other reads from it. To prevent the two threads from accessing the list at exactly the same time, you can protect the list with a critical section. The following example uses a globally declared CCriticalSection object to demonstrate how: • // Global data • CCriticalSection g_cs; • . • . • //Thread A //Thread B • g_cs.Lock (); g_cs.Lock(); • //Write to the linked list. //Read from the linked list. • g_cs.Unlock(); g_cs.Unlock();
  • 25. • Figure 17-3. Protecting a shared resource with a critical section
• 26. mutex • Mutexes are kernel-mode objects • Used to gain exclusive access to a resource shared by two or more threads • Can be used to synchronize threads running in the same process or in different processes • Suppose two applications use a block of shared memory to exchange data. Inside that shared memory is a linked list that must be protected against concurrent thread accesses. A critical section won't work because it can't reach across process boundaries, but a mutex will do the job nicely. Here's what you do in each process before reading or writing the linked list: • // Global data • CMutex g_mutex (FALSE, _T ("MyMutex")); • g_mutex.Lock (); • // Read or write the linked list. • g_mutex.Unlock ();
• 27. The first parameter passed to the CMutex constructor specifies whether the mutex is initially locked (TRUE) or unlocked (FALSE). • The second parameter specifies the mutex's name, which is required if the mutex is used to synchronize threads in two different processes. • You pick the name, but both processes must specify the same name so that the two CMutex objects will reference the same mutex object in the Windows kernel. • Naturally, Lock blocks on a mutex locked by another thread, and Unlock frees the mutex so that others can lock it.
  • 28. • Note that: There is one other difference between mutexes and critical sections: If a thread locks a critical section and terminates without unlocking it, other threads waiting for the critical section to come free will block indefinitely. However, if a thread that locks a mutex fails to unlock it before terminating, the system deems the mutex to be "abandoned" and automatically frees the mutex so that waiting threads can resume.
  • 29. EVENTS • Events are a way of signaling one thread from another, allowing one thread to wait or sleep until it’s signaled by another thread. • The producing thread, Thread A generates some data and puts it in a shared working space. In this example, the consuming thread, Thread B is sleeping on the event (waiting for the event to trigger):
• 30. [Diagram: Thread A writes data into the shared working space while Thread B sleeps on the event]
• 31. Once the producing thread has finished writing data, it triggers the event. [Diagram: Thread A triggers the event]
• 32. • This signals the consuming thread, Thread B, thereby waking it up. [Diagram: the event wakes Thread B]
• 33. • Once the consuming thread, Thread B, has woken up, it starts doing work. The assumption is that the producing thread will no longer touch the data. [Diagram: Thread B reads the data from the shared working space]
  • 34. Here's an example involving one thread (thread A) that fills a buffer with data and another thread (thread B) that does something with that data. Assume that thread B must wait for a signal from thread A saying that the buffer is initialized and ready to go. An autoreset event is the perfect tool for the job: • // Global data • CEvent g_event; // Autoreset, initially nonsignaled • . • . • // Thread A • InitBuffer (&buffer); // Initialize the buffer. • g_event.SetEvent (); // Release thread B. • . • . • // Thread B • g_event.Lock (); // Wait for the signal.
• 35. SEMAPHORE • A semaphore is like a mutex that multiple threads can hold at once, up to a maximum count • Events, critical sections, and mutexes are "all or nothing" objects in the sense that Lock blocks on them if any other thread has them locked. Semaphores are different. • Maintain resource counts representing the number of resources available • Locking a semaphore decrements its resource count, and unlocking a semaphore increments the resource count • Can be used to synchronize threads within a process or threads that belong to different processes.
  • 36. To limit the time that Lock will wait for the semaphore's resource count to become nonzero, you can pass a maximum wait time (in milliseconds, as always) to the Lock function. • Figure 17-6. Using a semaphore to guard a shared resource.
• 37. Dynamic-link Library: a dynamically loaded library shared by applications. API: application programming interface, enabling communication between processes. Subsystem: provides the operating environment for applications. Section object: represents a region of shared memory. Process object: used to inspect and control a process. Thread object: used to control a thread. Process Environment Block (PEB): holds per-process data used throughout the process's lifetime. Thread Environment Block (TEB): holds the data of the running threads.
• 38. A new process is created when another process makes a Win32 CreateProcess call. The main steps in creating a new process are: 1. Convert the Win32 pathname to an NT pathname. 2. Open the EXE file and create a section object. 3. Create the process object. 4. Create the thread object. 5. Perform checks. 6. Shim the application if necessary.
  • 39. Scheduling Content 1. Overview 2. Scheduling Priorities 3. Scheduling Algorithm 4. Priority Boosting 5. Priority Inversion
  • 40. Scheduling Overview • Windows schedules threads, not processes. • The Scheduler is called when: – The currently running thread blocks on a semaphore, mutex, event, I/O, … – The thread signals an object (e.g., does an up on a semaphore) – The quantum expires – An I/O operation completes – A timed wait expires
  • 41. Scheduling Overview • Scheduling is preemptive, priority-based, and round-robin at the highest priority. • Scheduler tries to keep a thread on its ideal processor in order to improve performance (a) Symmetric Multiprocessing (SMP) (b) Non-Uniform Memory Access (NUMA) • Processes/Threads can specify affinity mask to run only on certain processors: SetProcessAffinityMask(), SetThreadAffinityMask(), …
  • 42. Scheduling Scheduling Priorities • Threads are scheduled to run based on their scheduling priority. Each thread is assigned a scheduling priority. • The priority levels range from zero (lowest priority) to 31 (highest priority), correspondingly associated with 32 queues. – Priorities are divided into 6 priority classes that are applied to processes. – In each class, there are 7 priority levels that are applied to threads in that process. • Base priority (of a thread) = F(priority class, priority level) = constant. • Dynamic priority = Base priority + Boost Amount, is used to determine which thread to execute.
  • 43. Scheduling Scheduling Algorithm • The system treats all threads with the same priority as equal. • The system assigns time slices in a round-robin fashion to all threads with the highest priority. – If none of these threads are ready to run, the system assigns time slices in a round-robin fashion to all threads with the next highest priority. – If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice), and assigns a full time slice to the higher-priority thread.
  • 44. Scheduling Scheduling Algorithm Figure 11-28. Thread priorities in Windows Vista
  • 45. Scheduling Priority Boosting • Only applied to non-realtime threads that belong to: – GUI foreground (GUI input, mouse message …) – Waking for event (disk operation completion +1, keyboard input +6, sound +8...) • After raising a thread's dynamic priority, the scheduler reduces that priority by one level each time the thread completes a time slice, until the thread drops back to its base priority.
• 46. Scheduling Priority Inversion • Priority inversion occurs when a mutex or critical section held by a lower-priority thread delays the running of a higher-priority thread when both are contending for the same resource. – Thread 1 has high priority. – Thread 2 has medium priority. – Thread 3 has low priority. • Win32 Solution: randomly boosting the priority of the ready threads (in this case, the low-priority lock-holders). – The low-priority threads run long enough to exit the critical section, and the high-priority thread can enter the critical section. – If the low-priority thread does not get enough CPU time to exit the critical section the first time, it will get another chance during the next round of scheduling.
  • 47. Summary • Fundamental Concepts • IPC • Synchronization • Implementation • Process Creation • Scheduling Q&A
