2. Packet Scheduling Algorithms
• Introduction
• What is Packet Scheduling?
• Why is Packet Scheduling Important?
• Packet Scheduling in Networks
• Types of Packet Scheduling Algorithms
• Key Requirements for Packet Scheduling
• Real-World Applications
• Future Trends
3. Introduction
• Packet scheduling refers to the process of
managing the order and timing with which
packets are transmitted from queues in a
network device (router, switch).
Goals of Packet Scheduling:
• Maintain Quality of Service (QoS)
• Prevent congestion
• Ensure fairness among data flows
• Optimize network utilization
4. What is Packet Scheduling?
• Packet scheduling is the process used by
network devices to decide the order in which
packets are transmitted.
• It determines:
– Which packet to send next
– How resources (bandwidth, buffer) are shared
– Delay and jitter performance
5. Why is Packet Scheduling Important?
• Ensures Quality of Service (QoS)
• Prevents congestion
• Achieves fairness among multiple users/flows
• Meets real-time delivery requirements (e.g.,
VoIP, gaming)
• Efficient use of limited network resources
6. Packet Scheduling in Networks
• Context:
• Packets arriving at a router are placed in queues.
• Packet scheduler selects which packet to send
next, and from which queue.
• Diagram: a router with input queues, a packet scheduler, and the outgoing link.
7. Challenges:
• Bandwidth sharing among users
• Delays for time-sensitive applications
• Limited buffer space
8. Types of Packet Scheduling Algorithms
• FIFO (First-In-First-Out)
• Priority Queuing (PQ)
• Round Robin (RR)
• Weighted Fair Queuing (WFQ)
• Deficit Round Robin (DRR)
• Earliest Deadline First (EDF) (optional)
• Custom scheduling (e.g., AI/ML-based)
9. FIFO – First-In, First-Out
• Simplest scheduling method
• Processes packets in the order they arrive
• No priority handling
Pros:
• Easy to implement
• Fair in low-load scenarios
Cons:
• No support for QoS
• Time-sensitive packets may face delays
Use Case: Best-effort delivery networks
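A minimal FIFO sketch in Python (illustrative only; the queue limit and packet representation are assumptions, not from the slides):

```python
from collections import deque

class FIFOScheduler:
    """Serve packets strictly in arrival order; no priority handling."""

    def __init__(self, max_queue_len=100):
        self.queue = deque()
        self.max_queue_len = max_queue_len

    def enqueue(self, packet):
        # Tail-drop when the buffer is full (no QoS differentiation).
        if len(self.queue) >= self.max_queue_len:
            return False              # packet dropped
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Always transmit the oldest packet first.
        return self.queue.popleft() if self.queue else None
```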
11. Priority Queuing (PQ)
• Packets assigned different priority levels
• High-priority packets are served first
Pros:
• Good for real-time traffic (VoIP, video)
Cons:
• Lower-priority packets can starve
• Not inherently fair
Use Case: Voice over IP, emergency systems
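A strict-priority queuing sketch in Python (the heap-based structure and packet objects are illustrative assumptions):

```python
import heapq
import itertools

class PriorityQueueScheduler:
    """Strict priority queuing: the lowest priority value is served first.
    Lower-priority packets can starve while higher-priority traffic persists."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: FIFO within a priority level

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), packet))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet
```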
13. Round Robin (RR)
• Packets served from each queue in turn
• All flows treated equally
Pros:
• Simple and fair
• Reduces starvation
Cons:
• Doesn’t account for flow priority or bandwidth
needs
• Doesn’t work well with variable packet sizes
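A per-flow round-robin sketch in Python (flow IDs and the one-packet-per-visit policy are illustrative assumptions):

```python
from collections import deque

class RoundRobinScheduler:
    """Visit each flow's queue in turn and send one packet per visit.
    Fair in packets per round, so flows with large packets still get more bytes."""

    def __init__(self):
        self.queues = {}        # flow_id -> deque of packets
        self.order = deque()    # flows in round-robin visiting order

    def enqueue(self, flow_id, packet):
        if flow_id not in self.queues:
            self.queues[flow_id] = deque()
            self.order.append(flow_id)
        self.queues[flow_id].append(packet)

    def dequeue(self):
        # Rotate through the flows until one has a packet to send.
        for _ in range(len(self.order)):
            flow_id = self.order[0]
            self.order.rotate(-1)   # move this flow to the back of the order
            if self.queues[flow_id]:
                return self.queues[flow_id].popleft()
        return None                 # all queues empty
```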
15. Weighted Fair Queuing (WFQ)
• Assigns weights to each flow/queue
• Allocates bandwidth proportionally
• Suitable for variable-length packets
Pros:
• Fair bandwidth allocation
• QoS-friendly
Cons:
• More complex to implement
Use Case: Multimedia streaming, MPLS, ATM networks
18. Key Requirements for Packet Scheduling
• Fairness – All flows get appropriate share of bandwidth
• Delay Sensitivity – Real-time packets need low delay and
jitter
• Throughput – Maximize data transmitted successfully
• Scalability – Must work on high-speed networks
• Simplicity – Should not require excessive computation
• QoS Support – Must differentiate traffic based on
priority
• Adaptability – Should adjust to network conditions
dynamically
19. Real-World Applications
• Routers – Scheduling outgoing packets for
multiple flows
• Data Centers – Managing tenant-level QoS
• 5G Networks – Ensuring latency for URLLC
services
• Streaming & Gaming – Bandwidth guarantees
for media flows
20. Future Trends
• AI/ML-based dynamic schedulers
• Software-defined networking (SDN) for
centralized control
• Adaptive, predictive scheduling based on flow
behavior
21. Scheduling for Guaranteed Services Connections
• What Are Guaranteed Services?
• Key Requirements
• Algorithms Used for Guaranteed Services
• Example: WFQ for Guaranteed Bandwidth
• Integrated Services (IntServ) Model
• Limitations
• Real-world Use Cases
22. What Are Guaranteed Services?
• Guaranteed services provide strict delay and
bandwidth assurances.
• Often used in Integrated Services (IntServ)
architecture, where flows are reserved in
advance.
• Example: VoIP, remote surgery, financial
trading.
23. Key Requirements
• To support guaranteed services, the
scheduling algorithm must:
• Bound delay and jitter precisely
• Reserve bandwidth per flow
• Avoid packet loss due to queue overflow
• Enforce admission control
24. Algorithms Used for Guaranteed Services
Algorithm | How It Helps
WFQ (Weighted Fair Queuing) | Assigns bandwidth weights to ensure predictable delivery
CBQ (Class-Based Queuing) | Supports traffic shaping for reserved classes
PQ + WFQ Hybrid | Combines strict priority for real-time traffic with fair sharing for others
EDF (Earliest Deadline First) | Schedules packets based on deadlines, suitable for hard real-time systems
Token Bucket + Leaky Bucket | Used for traffic policing before scheduling
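As a sketch of the policing step in the last row, a token bucket can mark each packet as in-profile or out-of-profile before it reaches the scheduler (the rate and burst parameters here are illustrative, not from the slides):

```python
class TokenBucket:
    """Token-bucket policer: a packet conforms if enough byte tokens have
    accumulated; tokens refill at the reserved rate up to the burst size."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes       # start with a full bucket
        self.last_time = 0.0

    def conforms(self, packet_len, now):
        # Add tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True                 # in-profile: hand to the scheduler
        return False                    # out-of-profile: drop or mark
```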
25. Example: WFQ for Guaranteed Bandwidth
• Each flow gets a weight corresponding to its
reserved bandwidth.
• Packet timestamps are calculated to simulate
virtual finish time.
• Scheduler picks the packet with the earliest
finish time.
• Guarantees low jitter and bounded latency
when properly configured.
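A simplified Python sketch of the finish-time bookkeeping described above (the weights are assumptions, and GPS virtual time is approximated by the last transmitted finish time; a production WFQ tracks virtual time more precisely):

```python
class WFQScheduler:
    """Simplified WFQ sketch: a packet of length L from flow i gets virtual
    finish time F = max(V, F_prev_i) + L / w_i; the packet with the smallest
    finish time is sent next."""

    def __init__(self, weights):
        self.weights = weights                         # flow_id -> weight
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_time = 0.0
        self.backlog = []                              # (finish, flow_id, length)

    def enqueue(self, flow_id, length):
        start = max(self.virtual_time, self.last_finish[flow_id])
        finish = start + length / self.weights[flow_id]
        self.last_finish[flow_id] = finish
        self.backlog.append((finish, flow_id, length))

    def dequeue(self):
        if not self.backlog:
            return None
        self.backlog.sort()                            # earliest virtual finish first
        finish, flow_id, length = self.backlog.pop(0)
        self.virtual_time = finish
        return flow_id, length
```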
26. Integrated Services (IntServ) Model
• Resource Reservation Protocol (RSVP) is used
to reserve resources.
• Each router along the path enforces the
reservation using WFQ or similar.
• Guaranteed Service Class provides:
– Delay bound
– Minimal packet loss
– Bandwidth guarantee
27. Limitations
• High complexity and state maintenance for
each flow
• Scalability issues in large networks
• Supplanted in modern networks by
Differentiated Services (DiffServ) with simpler
enforcement
28. Real-world Use Cases
• Healthcare IoT: Real-time monitoring with
delay sensitivity
• Video Streaming: Premium user streams with
guaranteed resolution & frame rates
• Industrial Automation: Machine-to-machine
communication with strict deadlines
29. GPS, WFQ, and Rate-Proportional
Scheduling Algorithms
30. Agenda
• Introduction to Scheduling Algorithms
• Generalized Processor Sharing (GPS)
• Weighted Fair Queuing (WFQ)
• Rate-Proportional Scheduling Algorithms
• Comparison of the Algorithms
• Use Cases and Applications
• Conclusion
31. Introduction to Scheduling Algorithms
• Scheduling is essential in networking and
operating systems to efficiently share resources.
• Types of scheduling:
– Fair sharing of resources
– Balancing performance
– Ensuring QoS (Quality of Service)
• Focus: GPS, WFQ, and Rate-Proportional
Scheduling.
32. Generalized Processor Sharing (GPS)
• Definition: GPS is an idealized (fluid) scheduling model that divides link capacity (or processor time) among flows so that each backlogged flow receives a fraction of the resource proportional to its weight.
• Key Characteristics:
• Theoretical ideal for network scheduling.
• Flows are served according to their weight (i.e., the rate at which they receive resources).
• Real-time systems and network routers use GPS-based schedulers for fairness and efficiency.
• Formula: Service rate of a flow = (Weight of flow / Total weight of all flows) × link rate
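For example, with made-up numbers: on a 12 Mbps link shared by three backlogged flows with weights 2, 1, and 1, GPS gives each flow its weighted share of the link rate:

```python
# Illustrative GPS shares: each backlogged flow gets weight / total_weight of the link.
link_rate_mbps = 12.0
weights = {"flow_A": 2, "flow_B": 1, "flow_C": 1}

total_weight = sum(weights.values())
shares = {f: link_rate_mbps * w / total_weight for f, w in weights.items()}
print(shares)   # {'flow_A': 6.0, 'flow_B': 3.0, 'flow_C': 3.0}  (Mbps)
```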
33. Advantages of GPS
• Provides guaranteed bandwidth for each flow.
• Fair distribution of resources.
• No starvation for processes.
• Works well in high-speed networks.
34. Weighted Fair Queuing (WFQ)
• Definition: WFQ is a practical implementation of
GPS in packet-switching networks, used to provide
fair bandwidth distribution.
• Key Characteristics:
– Fairness: Ensures that each flow gets bandwidth
proportional to its weight.
– Implementation: Uses virtual time to track the service
rate of each flow.
– Suitable for network traffic scheduling in routers and
switches.
35. WFQ in Action:
• Packets are assigned a virtual finish time based on
their weight and arrival time.
• Packets with lower finish times are transmitted first.
• Guaranteed throughput for each flow.
• Advantages of WFQ:
• Efficient use of network resources.
• Fair allocation of bandwidth among multiple traffic
sources.
• Low delay for high-priority traffic.
• Ensures quality of service (QoS).
36. Rate-Proportional Scheduling
• Definition: A family of scheduling algorithms
where processes are served based on their rate or
share of resources, proportional to their allocated
weight.
• Key Characteristics:
– Similar to GPS, but implemented with different
algorithms.
– Each flow gets bandwidth proportional to its rate.
– Dynamic adjustment of rates based on traffic load and
priority.
37. Comparison of GPS, WFQ, and Rate-
Proportional Scheduling
Feature | GPS | WFQ | Rate-Proportional Scheduling
Fairness | High | High | Varies based on implementation
Complexity | High (theoretical) | Moderate | Low to moderate
Use Case | Theoretical ideal for networks | Practical in routers/switches | Common in general network scheduling
Implementation | Hard to implement in real time | Easier to implement | Easier to implement
38. Applications of Scheduling Algorithms
GPS:
• Theoretical model for internet traffic scheduling.
• Used in quality of service (QoS) designs.
WFQ:
• Routers and network switches for traffic management.
• Voice over IP (VoIP), streaming, and real-time data
applications.
Rate-Proportional Scheduling:
• Used in fair allocation of system resources in both
network and OS-level scheduling
39. Theory of Latency-Rate Servers and Delay Bounds in Packet-Switched Networks for LBAP Traffic
41. LBAP Traffic Model
• LBAP: Leaky Bucket Arrival Process.
• Constrained by:
– Rate (r) – average traffic rate
– Burst (b) – maximum burst allowed
• Arrival curve: A(t) ≤ r·t + b
42. Latency-Rate (LR) Servers
• Guarantees a minimum service rate R after a fixed latency T.
• Service curve: S(t) = R·(t − T)⁺ = R·max(0, t − T)
• Many schedulers (e.g., WFQ, DRR) can be modeled this way.
43. Delay Bound for LBAP over LR Server
• Delay bound: D ≤ T + b/R
• Meaning: the worst-case delay is due to the burst size b and the scheduler latency T.
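A worked example with illustrative numbers (burst b = 100 kbit, guaranteed rate R = 10 Mbit/s, scheduler latency T = 1 ms):

```python
T = 1e-3            # scheduler latency in seconds
b = 100e3           # maximum burst in bits
R = 10e6            # guaranteed service rate in bits per second

delay_bound = T + b / R     # D <= T + b/R for LBAP traffic over an LR server
print(delay_bound)          # ~0.011 s, i.e. about 11 ms worst case
```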
44. Cascading LR Servers
• In a network of N LR servers:
– Delay bound = ∑ Tᵢ + ∑ bᵢ/Rᵢ
• Traffic may become reshaped at each node.
45. Application in Deterministic Networks
(DetNet/TSN)
• Standards like AVB/DetNet use similar
models.
• Delay bounds are calculated per hop using LR
server analysis.
• Ensures bounded latency for time-sensitive
traffic.
47. What is Active Queue Management (AQM)?
• AQM proactively manages packet queues in
routers to avoid congestion.
• It drops or marks packets before queues are
full.
• Goals:
• Reduce latency
• Prevent bufferbloat
• Improve fairness and throughput
48. RED (Random Early Detection)
• Drops packets probabilistically as average
queue size grows.
• Key Parameters:
– min_thresh: average queue length at which dropping starts
– max_thresh: average queue length above which every packet is dropped
– max_p: maximum drop probability (reached at max_thresh)
• Helps avoid global synchronization of TCP
flows.
49. How RED Works
• Maintains average queue size using
exponential moving average.
• If avg_queue < min_thresh → no drop
• If avg_queue > max_thresh → drop all
• Between thresholds → random drop with
increasing probability
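A sketch of the RED drop decision and moving-average update (the EWMA weight w = 0.002 is an assumed example value, not from the slides):

```python
import random

def red_drop(avg_queue, min_thresh, max_thresh, max_p):
    """Return True if the arriving packet should be dropped (or ECN-marked)."""
    if avg_queue < min_thresh:
        return False                      # below min_thresh: never drop
    if avg_queue >= max_thresh:
        return True                       # above max_thresh: drop everything
    # Between thresholds: drop probability rises linearly toward max_p.
    p = max_p * (avg_queue - min_thresh) / (max_thresh - min_thresh)
    return random.random() < p

def update_avg(avg_queue, instant_queue, w=0.002):
    """Exponential moving average of the queue size (w is an example weight)."""
    return (1 - w) * avg_queue + w * instant_queue
```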
50. WRED (Weighted RED)
• Extension of RED for QoS support.
• Packets are classified by priority or DSCP
values.
• Each class gets its own RED parameters.
• Higher-priority traffic → lower drop
probability
51. Benefits of WRED
• Provides differentiated services
• Avoids dropping high-priority traffic early
• Used in DiffServ and enterprise routers for
traffic shaping
52. Virtual Clock Algorithm
• Rate-based scheduling algorithm
• Assigns each flow a virtual finish time for
packets
• Packets are scheduled in order of their finish
times
• Ensures fair bandwidth allocation among
flows
53. How Virtual Clock Works
• Each flow i has a reserved rate rᵢ
• Virtual time V increases with real time
• For each arriving packet: Fᵢ = max(V, Fᵢ₋₁) + Lᵢ/rᵢ
• V: current virtual time
• Lᵢ: packet length
• Fᵢ: finish time
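A small Python illustration of the finish-tag update above (the reserved rate and packet sizes are made-up example values):

```python
def virtual_clock_tag(prev_finish, now, length, rate):
    """F_i = max(V, F_prev_i) + L_i / r_i  -- finish tag for an arriving packet."""
    return max(now, prev_finish) + length / rate

# Flow reserved at r_i = 125_000 bytes/s (1 Mbit/s); three 1500-byte packets
# arriving back-to-back at t = 0 receive increasing finish tags and are
# transmitted, across all flows, in order of their tags.
tags, finish = [], 0.0
for _ in range(3):
    finish = virtual_clock_tag(finish, now=0.0, length=1500, rate=125_000)
    tags.append(finish)
print(tags)   # approximately [0.012, 0.024, 0.036]
```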
54. Comparison Table
Feature | RED | WRED | Virtual Clock
Type | Probabilistic drop | Class-based RED | Scheduling algorithm
Purpose | Early drop | QoS + drop | Fair queuing
Delay control | Moderate | Good | Tight control
Complexity | Low | Medium | High