SCHEDULER ACTIVATIONS Effective Kernel Support for the User-Level Management of Parallelism Kasun Gajasinghe Nisansa de Silva University of Moratuwa Based on the paper “Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism” by Thomas E. Anderson et al.
Scheduler Activations Introduction to Threads   User-Level and Kernel-Level Threads Implementation Pros and Cons   Effective Kernel Support for User-Level Management of Parallelism Scheduler Activations Design Implementation Performance   Conclusion
What is a Thread? Smallest unit of processing that can be  scheduled  by the operating system     Similar to a process   Separate “thread of execution”     Each thread has a separate stack, program counter, and run state
Why Threads? Perform operations in parallel on the same data   Avoid complex explicit scheduling by applications   Often more efficient than separate processes    Remain responsive for user inputs
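The first point above — performing operations in parallel on the same data — can be sketched with a minimal Python threading example (the worker count and loop bounds are arbitrary, chosen only for illustration):

```python
import threading

# Several threads increment a shared counter; a lock protects the
# shared data, since the threads operate on it in parallel.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # serialize updates to the shared value
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: every increment is visible despite concurrency
```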
Implementing Threads Three basic strategies User-level: possible on most operating systems, even ancient ones   Kernel-level: looks almost like a process (e.g., Linux, Solaris)   Scheduler activations (Digital Tru64 UNIX, NetBSD, some Mach variants)
User Thread / Kernel Thread [diagram]
Thread System - User Level Thread library’s code + data structures in user space   Invocation of library function = local function call. Not system call   Pros - Good performance Fast thread context switching Better Scaling  Highly flexible Custom scheduling algorithms  Cons - Poor concurrency support   blocking system calls starve sibling threads
Thread System - Kernel Level Thread library’s code + data structures in kernel space   Invocation of library function = system call   Pros - Good concurrency support Blocking system calls do not starve sibling threads   Cons - Poor performance Operations involve system calls Full context switch to perform thread switches Less flexible Generic scheduling algorithm
User Threads Excellent performance   No system calls to perform thread operations   More flexible => can use a domain-specific scheduling algorithm (‘customized’ thread library)   Blocking system calls such as I/O are problematic; starvation of sibling threads Kernel Threads Bad performance   System calls needed to perform thread operations   Generic scheduling algorithm (scheduled by kernel)   Good integration with system services – blocking calls do not prevent other user threads from being scheduled. Less likelihood of starvation
Fig. Thread Operation Latencies (microseconds) Topaz - A highly tuned kernel thread implementation FastThreads - A user-level thread library Ultrix - A BSD-family Unix-like OS
SCHEDULER ACTIVATIONS  
Scheduler Activations Get the best of both worlds — The efficiency and flexibility of user-level threads The non-blocking ability (good integration with system services) of kernel threads   Relies on upcalls   The term “scheduler activation” was selected because each vectored event causes the user-level thread system to reconsider its scheduling decision of which threads to run on which processors.
Scheduler Activations What’s an Upcall? Normally, user programs call functions in the kernel; these calls are known as system calls   Sometimes, though, the kernel calls into a user-level process to report an event   Sometimes considered unclean — it violates the usual layering
Scheduler Activations Thread creation and scheduling is done at user level   When a system call from a thread blocks, the kernel does an upcall to the thread manager   The thread manager marks that thread as blocked, and starts running another thread   When a kernel interrupt occurs that’s relevant to the thread, another upcall is done to unblock it
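The upcall flow on this slide can be modeled as a toy simulation. This is an illustrative sketch only — not the paper's actual kernel interface; the class and method names are invented, and a single processor is assumed:

```python
from collections import deque

class ThreadManager:
    """User-level thread manager driven by simulated kernel upcalls."""

    def __init__(self, threads):
        self.ready = deque(threads)   # runnable user-level threads
        self.blocked = set()
        self.running = None
        self.schedule()

    def schedule(self):
        # Pick the next runnable thread for the (single, simulated) processor.
        self.running = self.ready.popleft() if self.ready else None

    def upcall_blocked(self, tid):
        # Upcall: the kernel reports that the running thread blocked (e.g. on I/O).
        self.blocked.add(tid)
        self.schedule()               # run another thread instead

    def upcall_unblocked(self, tid):
        # Upcall: the kernel reports that a blocked thread's I/O completed.
        self.blocked.discard(tid)
        self.ready.append(tid)        # mark it runnable again

mgr = ThreadManager(["T1", "T2", "T3"])
mgr.upcall_blocked("T1")     # T1 blocks in the kernel; the manager switches to T2
print(mgr.running)           # T2
mgr.upcall_unblocked("T1")   # T1 becomes runnable again
print(list(mgr.ready))       # ['T3', 'T1']
```

The key property mirrored here is that all scheduling decisions happen in user-level code; the kernel only reports events.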
Scheduler Activations  
Scheduler Activations It serves as a vessel, or execution context, for running user-level threads, in exactly the same way that a kernel thread does. It notifies the user-level thread system of a kernel event. It provides space in the kernel for saving the processor context of the activation’s current user-level thread,  when the thread is stopped by the kernel (e.g., because the thread blocks in the kernel on I/0 or the kernel preempts its processor).
Scheduler Activations Two execution stacks: one mapped into the kernel, one mapped into the application address space.
Scheduler Activations kernel stack is used whenever the user-level thread running in the scheduler activation’s context executes in the kernel (e.g. system call) The kernel also maintains a control block for each activation (akin to a thread control block) to record the state of the scheduler activation when its thread blocks in the kernel or is preempted. Kernel Application Address space
Scheduler Activations The user-level thread scheduler runs on the activation’s user-level stack and maintains a record of which user-level thread is running in which scheduler activation. Each user-level thread is allocated its own stack when it starts running.
Scheduler Activations - Programs [diagram: Application 1’s address space running user-level threads above the kernel]
Scheduler Activations - Upcalls [diagram: the kernel performing upcalls into Application 1’s address space]
Scheduler Activations – thread stopping  The crucial distinction between scheduler activations and kernel threads is that once an activation’s user-level thread is stopped by the kernel, the thread is never directly resumed by the kernel.
Scheduler Activations – thread stopping A new scheduler activation is created to notify the user-level thread system that the thread has been stopped. The user-level thread system then: Removes the state of the thread from the old activation Tells the kernel that the old activation can be re-used (explained later) Decides which thread to run on the processor
Scheduler Activations – thread stopping  The kernel is able to maintain the invariant that there are always exactly as many running scheduler activations (vessels for running user-level threads) as there are processors assigned to the address space.
Scheduler Activations - Blocking [diagram sequence: a user-level thread blocks in the kernel; the kernel creates a new activation on processor P to notify Application 1’s address space, which schedules another thread]
Scheduler Activations - Multiprogramming [diagram sequence: the kernel moves processor P between Application 1’s and Application 2’s address spaces]
Scheduler Activations - Additional An additional preemption may be needed if threads have priorities and some processor is running a thread with a lower priority than both the unblocked and the preempted thread. The kernel’s interaction with the application is entirely in terms of scheduler activations, so the application is free to build any concurrency model on top of them.
Scheduler Activations – Upcall Points
Scheduler Activations – Address space to Kernel
Scheduler Activations - Enhancements The processor allocator can favour address spaces that use fewer processors and penalize those that use more.  If overall the system has fewer threads than processors, the idle processors should be left in the address spaces most likely to create work in the near future.
Scheduler Activations – Critical sections A user-level thread could be executing in a critical section at the instant when it is blocked or preempted. This can cause: Poor performance Deadlock Solution: use “recovery” instead of “prevention”
Scheduler Activations – Critical sections [diagram: a thread preempted on processor P while inside a critical section; a user-level context switch lets it finish the section]
Implementation Topaz kernel thread management system Where Topaz formerly blocked, resumed, or preempted a thread, it now performs upcalls to allow the user level to take these actions Does explicit allocation of processors to address spaces FastThreads Processes upcalls Resumes interrupted critical sections Provides Topaz with the needed information
Implementation Topaz kernel thread management system Added ~1,200 lines (to a base of ~4,000) Mostly processor allocation policy FastThreads A few hundred lines NB: the design is “neutral” on the choice of policies for allocating processors to address spaces and for scheduling threads onto processors.
Implementation – Processor Allocation Policy Processors are divided evenly among address spaces. If some address spaces do not need all of the processors in their share, those processors are divided evenly among the remainder. Processors are time-sliced only if the number of available processors is not an integer multiple of the number of address spaces. The kernel processor allocator only needs to know whether each address space could use more processors or has some processors that are idle.
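The allocation policy above can be sketched as a small function. This is an illustrative model only — the paper's actual allocator also deals with time-slicing the remainder and with priorities, which are omitted here:

```python
def allocate(processors, demands):
    """Divide processors evenly among address spaces; surplus from
    address spaces that need less than their share is split among
    the remainder. `demands` maps address space -> processors wanted."""
    alloc = {a: 0 for a in demands}
    unsatisfied = [a for a in demands]
    while processors > 0 and unsatisfied:
        # Even share of what is left, among spaces that still want more.
        share = max(processors // len(unsatisfied), 1)
        for a in unsatisfied[:]:
            give = min(share, demands[a] - alloc[a], processors)
            alloc[a] += give
            processors -= give
            if alloc[a] == demands[a]:
                unsatisfied.remove(a)   # this space's demand is met
            if processors == 0:
                break
    return alloc

# 6 processors, three address spaces: A wants 1, B and C want 4 each.
# A keeps only what it needs; the surplus is split between B and C.
print(allocate(6, {"A": 1, "B": 4, "C": 4}))  # {'A': 1, 'B': 3, 'C': 2}
```

With 5 processors left for B and C, an even split is impossible; per the slide, the real system would time-slice the odd processor rather than leave it pinned.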
Implementation – Thread Scheduling Policy The kernel has no knowledge of an application’s concurrency model or scheduling policy, or of the data structures used to manage parallelism at the user level. Each application is completely free to choose these as appropriate; they can be tuned to fit the application’s needs.
Performance Enhancements – Critical sections Keep a duplicate copy of each critical section At the end of the copy (only!), place code to yield the processor back to the resumer.
Performance Enhancements – Critical sections Normal execution uses the original code. When a preemption occurs, the kernel starts a new scheduler activation to notify the user-level thread system This activation checks the preempted thread’s program counter to see if it was in one of these critical sections If so, it continues the thread at the corresponding place in the copy of the critical section The copy relinquishes control back to the original upcall at the end of the critical section
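The program-counter check described above can be illustrated with made-up address ranges (nothing here corresponds to real addresses or to the paper's actual implementation):

```python
# Each critical section has a normal copy and a duplicate that yields at
# the end. If a preempted thread's "program counter" lies inside an
# original section, it is resumed at the same offset in the copy.
SECTIONS = [
    # (original_start, original_end, copy_start) -- invented addresses
    (0x100, 0x140, 0x800),
    (0x200, 0x230, 0x900),
]

def resume_pc(pc):
    for start, end, copy in SECTIONS:
        if start <= pc < end:
            return copy + (pc - start)   # continue in the yielding copy
    return pc                            # not in a critical section: resume as-is

print(hex(resume_pc(0x110)))  # inside the first section -> 0x810
print(hex(resume_pc(0x500)))  # outside any section -> 0x500, unchanged
```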
Performance Enhancements – Upcalls Logically, a new scheduler activation is created for each upcall. Creating a new scheduler activation is not free, however, because it requires data structures to be allocated and initialized. Instead, discarded scheduler activations can be cached for eventual re-use. (Remember?)
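The caching idea can be sketched as a simple free list (an illustrative model; the real kernel caches activation control blocks and stacks, not Python dicts):

```python
class ActivationPool:
    """Re-use discarded scheduler activations instead of allocating new ones."""

    def __init__(self):
        self.free = []        # discarded activations, ready for re-use
        self.created = 0      # how many were actually allocated

    def get(self):
        if self.free:
            return self.free.pop()      # re-use a cached activation
        self.created += 1
        return {"id": self.created}     # allocate + initialize a new one

    def discard(self, act):
        self.free.append(act)           # cache for the next upcall

pool = ActivationPool()
a = pool.get()
pool.discard(a)        # user level tells the kernel the activation can be re-used
b = pool.get()
print(pool.created)    # 1: the second upcall re-used the cached activation
```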
Performance Enhancements – Debugging Not very significant to note The point is that you have to use a debugger that has as little effect as possible on the sequence of instructions being debugged.
Performance Thread performance - without kernel involvement, quite similar to FastThreads before the changes.   Upcall performance - significantly worse than Topaz threads. Untuned implementation; Topaz is in assembler, this system in Modula-2+.   Application performance - Negligible I/O: as quick as original FastThreads. With I/O: performs better than either FastThreads or Topaz threads.
Performance [charts omitted]
Summary Processor allocation (the allocation of processors to address spaces) is done by the kernel.   Thread scheduling (the assignment of an address space’s threads to its processors) is done by each address space.   The kernel notifies the address space thread scheduler of every event affecting the address space.   The address space notifies the kernel of the subset of user-level events that can affect processor allocation decisions.
THANK YOU!  

Editor's Notes

  • #20: This way, when a thread blocks on a user-level lock or condition variable, the thread scheduler can resume running without kernel intervention.
  • #34: Inform that 2 threads have stopped!
  • #35: An additional preemption may have to take place beyond the ones described above. For instance, on an I/O completion, some processor could be running a thread with a lower priority than both the unblocked and the preempted thread. In that case, the user-level thread system can ask the kernel to interrupt the thread running on that processor and start a scheduler activation once the thread has been stopped. The user level can know to do this because it knows exactly which thread is running on each of its processors. The kernel’s interaction with the application is entirely in terms of scheduler activations. The application is free to build any other concurrency model on top of scheduler activations; the kernel’s behavior is exactly the same in every case. In particular, the kernel needs no knowledge of the data structures used to represent parallelism at the user level.
  • #37: More runnable threads than processors; more processors than runnable threads. If an application has notified the kernel that it has idle processors, and the kernel has not taken them away, then there must be no other work in the system, and the kernel need not be notified of changes in parallelism, up to the point where the application has more work than processors. These notifications are only hints: if the kernel gives an address space a processor that is no longer needed by the time it gets there, the address space simply returns the processor to the kernel with the updated information. Of course, the user-level thread system must serialize its notifications to the kernel, since ordering matters.
  • #38: This encourages address spaces to give up processors when they are needed elsewhere, since the priorities imply that it is likely that the processors will be returned when they are needed. This avoids the overhead of processor re-allocation when the work is created.
  • #39: Poor performance: other threads continue to test an application-level spin-lock held by the preempted thread. Deadlock: the preempted thread could be holding a lock on the user-level thread ready list; if so, deadlock would occur if the upcall attempted to place the preempted thread onto the ready list.