Communication and synchronization in distributed systems. Johanna Ortiz, Diego Niño, Alejandro Velandia.
Communication in distributed systems. In a distributed system there is no shared memory, so the whole nature of communication between processes must be reconsidered. To communicate, processes must adhere to rules known as protocols. For wide-area distributed systems, these protocols usually take the form of several layers, and each layer has its own goals and rules. Messages can be exchanged in many ways, and there are many design options in this regard; one important option is the remote procedure call (RPC). It is also important to consider communication among groups of processes, not only between two processes.
Client-server model. Two different roles in the interaction. Client: requests a service; a request consists of an operation plus its data. Server: provides the service; the response carries the result.
RPC. The client-server model is a convenient way to structure a distributed operating system, but it has a flaw: the essential paradigm around which communication is built is input/output, since the send/receive primitives are dedicated to performing I/O. A different option was proposed by Birrell and Nelson: allow programs to call procedures located on other machines. When a process on machine A calls a procedure on machine B: the calling process is suspended; the execution of the procedure takes place on B; information can be carried from caller to callee in the parameters and returned in the procedure's result. The programmer does not have to worry about message passing or I/O. This method is called Remote Procedure Call (RPC). The calling procedure and the one that receives the call run on different machines, i.e., in different address spaces.
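To make the idea concrete, here is a minimal RPC sketch (not part of the original slides) using Python's standard xmlrpc module; the port 8000 and the add procedure are arbitrary choices for the example. The caller invokes the procedure as if it were local, and the library takes care of suspending it until the reply arrives.

```python
# RPC sketch: the client calls add() as if it were a local procedure,
# while the call is actually executed by the server.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # prints 5; the caller is suspended until the reply arrives
```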
Proxy or cache model. Three different roles in the interaction. Client: requests a service. Server: provides the service. Proxy: an intermediary agent between them.
Multilayer model. A server can in turn be a client of another server. Typical of web applications: presentation + business logic + data access.
Peer-to-peer model. Dialogue protocol: the entities coordinate among themselves; at the end of each stage the entities synchronize and exchange information.
Communication characteristics. Blocking or non-blocking operation mode. Sending: blocking, the sender is blocked until the message has been successfully delivered to its destination; non-blocking, the sender stores the data in a kernel buffer and resumes execution. Reception: non-blocking, if data is available it is read by the receiver, otherwise the receiver is told that no message was there; blocking, if no data is available the receiver is blocked.
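A small sketch (an assumed example, not from the slides) contrasting blocking and non-blocking reception on a UDP socket; the port number is arbitrary.

```python
# Blocking vs. non-blocking reception on a UDP socket.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("localhost", 9999))

# Non-blocking reception: if no data is available, recvfrom() raises
# BlockingIOError instead of suspending the receiver.
sock.setblocking(False)
try:
    data, addr = sock.recvfrom(1024)
    print("got", data)
except BlockingIOError:
    print("no message available; the receiver keeps working")

# Blocking reception: the receiver would be suspended until a datagram arrives.
sock.setblocking(True)
# data, addr = sock.recvfrom(1024)  # uncommenting this line blocks the receiver
```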
Reliability. Issues related to the reliability of communication: ensuring that the message was received at the target node(s); maintaining the order in which messages are delivered; flow control to avoid flooding the receiving node; fragmentation of messages to remove limits on maximum message size. If the communication system does not guarantee some of these aspects, the application must take care of them.
Communication in groups. The destination of a message is a group of processes: multicast. Possible applications in distributed systems: updating multiple replicas of data; using replicated services; collective operations in parallel computation. The implementation depends on whether the network provides multicast; if it does not, the group send is implemented by sending N individual messages, as sketched below. A process can belong to several groups, and there is a group address. A group usually has a dynamic nature: processes can be added to and removed from the group, and membership management must be coordinated with the communication.
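A sketch of that fallback, assuming UDP over Python sockets: when the network offers no multicast, a group send is simply N unicast messages, one per member. The member addresses are hypothetical.

```python
# Group send emulated over unicast: one datagram per group member.
import socket

group = [("127.0.0.1", 9001), ("127.0.0.1", 9002), ("127.0.0.1", 9003)]  # hypothetical members

def send_to_group(message: bytes, members):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in members:
        sock.sendto(message, addr)  # one unicast message per member
    sock.close()

send_to_group(b"update replica X", group)
```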
Design aspects of group communication. Models of groups: open group, where an external process can send messages to the group, typically used to replicate data or services; closed group, where only group members can send messages, commonly used in parallel processing (peer-to-peer model). Atomicity: either all processes get the message or none does.
Order of message reception. Three choices. FIFO order: messages from one source reach each receiver in the order they were sent; there are no guarantees about messages from different senders. Causal ordering: if there is a possible cause-and-effect relationship between two messages, every group process first receives the "cause" message and then the "effect" message; if there is no such relationship, no delivery order is guaranteed (the definition of causality is discussed under Synchronization). Total ordering: all messages sent to a group, from whatever sources, are received in the same order by all members.
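As an illustration of the first option, a FIFO-ordering sketch (assumed example): each receiver keeps a per-sender sequence counter and buffers messages that arrive ahead of order.

```python
# FIFO ordering: deliver messages from each sender in send order,
# buffering any message that arrives "too early".
from collections import defaultdict

class FifoReceiver:
    def __init__(self):
        self.expected = defaultdict(int)   # next sequence number per sender
        self.pending = defaultdict(dict)   # out-of-order messages per sender

    def receive(self, sender, seq, payload):
        delivered = []
        self.pending[sender][seq] = payload
        # Deliver as long as the next expected message is available.
        while self.expected[sender] in self.pending[sender]:
            delivered.append(self.pending[sender].pop(self.expected[sender]))
            self.expected[sender] += 1
        return delivered

r = FifoReceiver()
print(r.receive("A", 1, "second"))  # []  (message 0 has not arrived yet)
print(r.receive("A", 0, "first"))   # ['first', 'second']
```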
SYNCHRONIZATION OF DISTRIBUTED SYSTEMS
Synchronization of distributed systems. Algorithms for clock synchronization: Cristian's algorithm, Berkeley algorithm, averaging algorithm. Algorithms for mutual exclusion: centralized, distributed.
Cristian's algorithm. This algorithm is based on the use of Coordinated Universal Time (UTC), which is received by one computer within the distributed system. This machine, called the UTC receiver, periodically receives time requests from the other machines of the system and answers each one as quickly as possible with the requested UTC time, so that all machines can update their system clocks and keep them synchronized across the system. The receiver obtains UTC time through the various means available, among them radio broadcasts and the Internet. A major problem with this algorithm is that time must not run backwards: the machine that requested the time cannot simply set its clock back if the UTC value it receives is earlier than its current local time. The UTC server must handle the time requests by means of interrupts, which affects how quickly it can attend to them. The transmission time of the request and its response must also be taken into account for the synchronization: the propagation time is added to the server's time, so that the sender is synchronized at the moment it receives the response.
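A sketch of the adjustment the requesting machine makes (assumed example; ask_utc_server is a placeholder for the real request to the UTC receiver): the one-way propagation time is estimated as half the round trip and added to the reported time.

```python
# Cristian's algorithm: estimate the propagation delay as half the
# round-trip time and add it to the time reported by the UTC server.
import time

def ask_utc_server():
    # Placeholder for the real network request to the UTC receiver.
    return time.time()

t_send = time.monotonic()
server_time = ask_utc_server()
t_recv = time.monotonic()

round_trip = t_recv - t_send
estimated_now = server_time + round_trip / 2
print("adjusted clock:", estimated_now)
```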
Berkeley algorithm. A distributed system based on the Berkeley algorithm has no Coordinated Universal Time (UTC) source; instead, the system manages its own time. To synchronize the time in the system there is again a time server which, unlike in Cristian's algorithm, behaves proactively: it periodically samples the time held by a number of machines in the system, computes an average time from them, and sends the result to all machines in the system so they can synchronize.
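A minimal sketch of one Berkeley round (assumed example with made-up clock values): the server averages the reported clocks, including its own, and tells each machine how much to adjust.

```python
# One round of the Berkeley algorithm: average the clock offsets and
# send each machine the correction it must apply.
def berkeley_round(server_time, reported_times):
    offsets = [t - server_time for t in reported_times]
    offsets.append(0.0)                          # the server's own offset
    average_offset = sum(offsets) / len(offsets)
    target = server_time + average_offset        # the agreed system time
    corrections = [target - t for t in reported_times]
    return target, corrections

target, corrections = berkeley_round(100.0, [102.0, 99.0, 101.0])
print(target)        # 100.5
print(corrections)   # [-1.5, 1.5, -0.5]
```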
Averaging algorithm. This algorithm has no server to control, centralize, and maintain time synchronization in the system. Instead, each machine announces its local time in the messages it sends to the other machine or machines of the system. Each machine also starts a local timer that defines a fixed-length interval, and at the end of that interval it averages its own local time with the times reported by the machines that interacted with it.
Algorithms for mutual exclusion. These algorithms are defined to guarantee mutual exclusion among processes that require access to a critical region of the system. Centralized: this algorithm mimics the mutual-exclusion scheme used in uniprocessor systems. One machine in the distributed system, called the coordinator, is responsible for controlling access to the various critical sections. Every process that requires access to a critical section must request it from the coordinator; access is granted if the critical section is available, otherwise the requesting process is placed in a queue. When a process that was granted access finishes its work in the critical section, it likewise informs the coordinator, so that access can be granted to the next requesting process waiting in the queue. This algorithm has a major limitation: the coordinator is a single point of control for access to all critical sections of the distributed system, which makes it a bottleneck that can affect the efficiency of the processes running in the system. Likewise, any failure of the coordinator brings the dependent processes to a halt.
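A sketch of the coordinator's bookkeeping (assumed example; the message passing itself is omitted): one holder at a time, everyone else queued.

```python
# Centralized mutual exclusion: the coordinator grants the critical
# section to one process and queues the rest.
from collections import deque

class Coordinator:
    def __init__(self):
        self.holder = None
        self.queue = deque()

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            return "GRANTED"
        self.queue.append(pid)
        return "QUEUED"

    def release(self, pid):
        assert pid == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder          # the next process to be granted, if any

c = Coordinator()
print(c.request("P1"))  # GRANTED
print(c.request("P2"))  # QUEUED
print(c.release("P1"))  # P2 now holds the critical section
```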
Distributed. This algorithm was developed to eliminate the problem latent in the centralized algorithm, so its approach is not to have a single coordinator controlling access to the critical sections of the distributed system. Each process that requires access to a critical section sends its request to all processes in the system, identifying itself and the critical section it wishes to enter. Each receiving process sends its answer to the requesting process according to one of the following cases: the critical section is not in use by the receiver, reply OK; the critical section is in use by the receiver, no reply is sent and the sender is placed in the receiver's queue; the critical section is not in use but is also wanted by the receiver, reply OK if the incoming request is earlier than the receiver's own, otherwise no reply is sent and the sender is placed in the queue. This algorithm also has a problem: if a process fails and cannot send its reply to a request, the missing reply is interpreted as a denial of access, blocking every process that requests access to any critical section.
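The reply rule just described is essentially the Ricart-Agrawala scheme; here is a sketch of that decision (assumed example), breaking ties between concurrent requests by comparing (timestamp, process id) pairs.

```python
# Reply rule for the distributed algorithm: answer OK unless we hold the
# critical section, or we also want it and our own request is earlier.
def on_request(state, my_ts, my_pid, req_ts, req_pid):
    if state == "RELEASED":
        return "OK"
    if state == "HELD":
        return "DEFER"                      # queue the sender until we release
    # state == "WANTED": the earlier request (smaller timestamp) wins.
    if (req_ts, req_pid) < (my_ts, my_pid):
        return "OK"
    return "DEFER"

print(on_request("WANTED", my_ts=5, my_pid=2, req_ts=3, req_pid=7))  # OK
print(on_request("HELD",   my_ts=1, my_pid=2, req_ts=3, req_pid=7))  # DEFER
```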
Ring (token ring). This algorithm establishes a logical, software-managed ring of processes, around which a token circulates from process to process. When a process receives the token it may enter a critical section if it needs to, carry out its work there, leave the critical section, and hand the token to the next process in the ring; this cycle repeats continuously around the ring. When a process receives the token and does not need to enter a critical section, it passes the token on to the next process immediately. The weakness of this algorithm is the possible loss of the token that controls access to the critical sections: if that happens, the processes in the system assume the token is being used by some process that is inside its critical section.
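A toy simulation of the ring (assumed example; a real implementation passes the token in messages between machines):

```python
# Token-ring mutual exclusion: only the process holding the token may
# enter the critical section; the token then moves to the next process.
def simulate_ring(processes, wants_cs, rounds=1):
    token_at = 0
    for _ in range(rounds * len(processes)):
        pid = processes[token_at]
        if wants_cs.get(pid):
            print(pid, "enters and leaves the critical section")
            wants_cs[pid] = False
        # Used or not, the token is handed to the next process in the ring.
        token_at = (token_at + 1) % len(processes)

simulate_ring(["P0", "P1", "P2", "P3"], {"P0": True, "P2": True})
```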
Election. These algorithms are designed to elect a coordinator process, and they guarantee that once an election has started, it ends with all processes of the system agreeing on the new coordinator. The bully algorithm (Garcia-Molina): this algorithm starts when a process notices that it gets no response to the requests it sends to the coordinator. At that moment the process sends an election message to every process with a higher number than its own, which leads to one of the following scenarios: some process with a higher number than the sender answers OK and takes over the election, or no process answers the election message, in which case the sender becomes the new coordinator. Ring: this algorithm works similarly to the bully algorithm, with the following differences: the election message circulates to all processes in the system, not only to those with higher numbers than the initiator; each process adds its identifier to the message; once the message has gone all the way around the ring and returns to the initiating process, that process declares the highest-numbered process the new coordinator; a new message then circulates around the ring announcing who the coordinator of the system is.
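A sketch of the bully rule (assumed example; the real algorithm exchanges election and coordinator messages, here reduced to knowing which processes are alive):

```python
# Bully election: the initiator challenges all higher-numbered processes;
# if none of them is alive, it becomes coordinator, otherwise the highest
# live process eventually wins the election.
def bully_election(initiator, alive):
    higher = [p for p in alive if p > initiator]
    if not higher:
        return initiator      # no bigger process answered: I am the coordinator
    return max(higher)        # a bigger process takes over and wins

print(bully_election(3, alive=[1, 2, 3, 5, 7]))  # 7
print(bully_election(7, alive=[1, 2, 3, 7]))     # 7
```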
Atomic transactions. This is a high-level synchronization method which, unlike the methods reviewed so far, does not burden the developer with issues of mutual exclusion, deadlock prevention, and failure recovery; on the contrary, it lets the developer concentrate on the real, substantive problems of synchronizing distributed systems. The idea of atomic transactions is to guarantee that all the operations that make up a transaction are executed completely and successfully; if any of them fails, the entire transaction fails, its effects are rolled back, and it is restarted.
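A small all-or-nothing sketch (assumed example with made-up account balances): a failure in any operation rolls the whole transaction back to the saved state.

```python
# Atomic transaction: either every operation is applied, or none is.
def run_transaction(balances, operations):
    snapshot = dict(balances)              # state to restore on failure
    try:
        for account, amount in operations:
            balances[account] += amount
            if balances[account] < 0:
                raise RuntimeError("operation failed")
        return "COMMITTED"
    except RuntimeError:
        balances.clear()
        balances.update(snapshot)          # roll back and allow a restart
        return "ABORTED"

balances = {"A": 100, "B": 0}
print(run_transaction(balances, [("A", -150), ("B", 150)]), balances)  # ABORTED, unchanged
print(run_transaction(balances, [("A", -50),  ("B", 50)]),  balances)  # COMMITTED
```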
Threads (lightweight processes). Today's operating systems can support several threads of control within a single process. Two notable features are that the threads of a process share a single address space and that, together, they simulate multiprogramming, as if they were separate processes running in parallel; only on a multiprocessor machine can they actually run in parallel. A thread can be in one of four states: running, when it is executing; blocked, when it is waiting for a critical resource; ready, when it can run again; finished, when its task ends. Implementation of a thread package. There are two ways to implement threads. In user space: with user-level packages the kernel need not know of their existence, so the kernel manages only a single thread; the threads run on top of a runtime system, a group of procedures. When a thread must be suspended, the runtime stores its registers in a table, looks for an unblocked thread, and reloads the machine registers with that thread's saved values. The main advantages are: each process can have its own scheduling algorithm; thread switching is faster, since no kernel calls are involved; it scales better as the number of threads grows.
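A small illustration of threads sharing one address space (assumed example using Python's threading module): both workers update the same counter, with a lock so the increments do not interleave.

```python
# Two threads of the same process share the variable `counter`.
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: both threads saw and updated the same memory
```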
In the kernel. Unlike the user-space implementation, the kernel implementation needs no runtime system: the kernel keeps the threads of every process in a single table, even though this means a higher cost in resources and machine processing time. One of its most important advantages is that blocking system calls pose no problem: when one thread blocks in the kernel, the other threads of the process can continue running.
