Carrier Strategies for Backbone Traffic Engineering and QoS Dr. Vishal Sharma, President & Principal Consultant, Metanoia, Inc. Voice: +1 408 394 6321 Email: [email_address] Web: http://www.metanoia-inc.com Metanoia, Inc. Critical Systems Thinking™ © Copyright 2004 All Rights Reserved
Agenda Traffic engineering techniques & approaches Global Crossing Sprint Backbone traffic characterization for QoS via capacity management [Joint work with Thomas Telkamp (Global Crossing), Arman Maghbouleh (Cariden Technologies), Stephen Gordon (SAIC, former C&W)]
Basic Service Provider Goals The two fundamental tasks before any service provider: Deploy a physical topology that meets customers’ needs Map customer traffic flows onto the physical topology Earlier (1990s), the mapping task was uncontrolled! A by-product of shortest-path IGP routing Often handled by over-provisioning
Two Paths to TE in IP Networks With increase in traffic, emergence of ATM, and higher-speed SONET, two approaches emerged Use a Layer 2 (ATM) network Build ATM backbone Deploy complete PVC mesh, bypass use of IP metrics TE at ATM layer With time, evolve ATM to MPLS-based backbone Use only Layer 3 (IP) network Build SONET infrastructure Rely on SONET for resilience Run IP directly on SONET (POS) Use metrics (systematically) to control flow of traffic
Global Crossing IP Backbone Network 100,000 route miles 27 countries 250 major cities 5 continents 200+ POPs Courtesy: Thomas Telkamp, GBLX
Global Crossing IP Network OC-48c/STM-16c (2.5Gbps) IP backbone Selected 10Gbps links operational (e.g. Atlantic) Services offered Internet access & Transit services IP VPNs -- Layer 3 and Layer 2 MPLS and DiffServ deployed globally
Global Crossing: Network Design Philosophy Ensure there are no bottlenecks in the normal state On handling congestion Prevent via MPLS-TE Manage via Diffserv Over-provisioning A well traffic-engineered network can handle all traffic Can withstand failure of even the most critical link(s) Avoid excessive complexity & features (they make the network unreliable/unstable)
Global Crossing’s Approach: Big Picture
TE in the US IP Network: Deployment Strategy Decision to adopt MPLS for traffic engineering & VPNs Y2000: 50+ POPs, 300 routers; Y2002: 200+ POPs Initially, a hierarchical MPLS system → 2 levels of LSPs Later, a flat MPLS LSP full mesh only between core routers Started w/ 9 regions -- 10-50 LSRs/region → 100-2500 LSPs/region Within regions: Routers fully-meshed Across regions: Core routers fully-meshed Intra-region traffic ~Mb/s to Gb/s, Inter-region traffic ~Gb/s Source [Xiao00]
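As a rough sanity check on these LSP counts, a full mesh needs one unidirectional LSP per ordered pair of routers, i.e., N(N−1) LSPs for N routers. A minimal sketch (the region sizes are the illustrative figures from this slide):

```python
def full_mesh_lsps(n_routers: int) -> int:
    """One unidirectional LSP per ordered pair of routers."""
    return n_routers * (n_routers - 1)

# Illustrative region sizes from the slide: 10-50 LSRs per region.
for n in (10, 50):
    print(f"{n} LSRs/region -> {full_mesh_lsps(n)} LSPs/region")
# 10 LSRs -> 90 LSPs; 50 LSRs -> 2450 LSPs (~ the 100-2500 range cited)
```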
Design Principles: Statistics Collection Statistics on individual LSPs can be used to build traffic matrices With plain packet counters alone, we cannot tell how much traffic goes individually to B & C
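This is the payoff of per-LSP statistics: each LSP's byte counter maps directly onto one (ingress, egress) cell of the traffic matrix, which plain per-link counters cannot provide. A minimal bookkeeping sketch, with hypothetical LSP endpoints and counter values:

```python
from collections import defaultdict

# Hypothetical per-LSP byte counters polled from head-end routers:
# (ingress, egress) -> bytes carried over the polling interval.
lsp_counters = {
    ("A", "B"): 120e9,
    ("A", "C"): 45e9,
    ("B", "C"): 80e9,
}

matrix = defaultdict(float)
for (ingress, egress), byte_count in lsp_counters.items():
    matrix[(ingress, egress)] += byte_count  # multiple LSPs per pair just sum

print(dict(matrix))
# With a plain counter on A's output link we would only see the total
# leaving A, not the split between B and C; per-LSP counters give the split.
```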
Design Principles: LSP Control & Management Manually move traffic away from potential congestion via ERO Add new LSPs with a configured load-splitting ratio
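Conceptually, a configured load-splitting ratio amounts to weighted, flow-consistent assignment of traffic onto parallel LSPs. A toy sketch of that idea, assuming an illustrative 70/30 split (this is not vendor configuration syntax):

```python
import hashlib

# Two parallel LSPs away from the hot spot, split 70/30 (illustrative ratio).
lsps = [("LSP-primary", 0.7), ("LSP-alternate", 0.3)]

def pick_lsp(flow_key: str) -> str:
    """Hash the flow key into [0,1) and map it onto the cumulative ratios,
    so packets of one flow always take the same LSP (no reordering)."""
    h = int(hashlib.md5(flow_key.encode()).hexdigest(), 16)
    x = (h % 10_000) / 10_000
    cumulative = 0.0
    for name, share in lsps:
        cumulative += share
        if x < cumulative:
            return name
    return lsps[-1][0]

print(pick_lsp("10.0.0.1->10.8.0.9"))
```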
Global Crossing’s Current LSP Layout and Traffic Routing
Global Crossing:  Advanced Network Technologies MPLS Fast Reroute (FRR) Localizes impact of failures Local to router detecting failure Head-end establishes new e2e LSP Per-class traffic engineering Diffserv-aware TE Avoids concentrating real-time traffic on any one link Limits the bandwidth used per class, useful during FRR IGP Convergence Tune network for fast IS-IS convergence, few seconds Use L2 failure detection and timers to achieve goal
SprintLink™ IP Backbone Network 19+ countries 30+ major intl. cities 5 continents (reach S. America as well) 400+ POPs 110,000+ route miles (common with Sprint LD network) Courtesy: Jeff Chaltas, Sprint Public Relations (Map represents connectivity only, not to scale)
SprintLink™ IP Network Tier-1 Internet backbone Customers: corporations, Tier-2 ISPs, univs., ... Native IP-over-DWDM using SONET framing 4F-BLSR infrastructure (425 SONET rings in network) Backbone US: OC-48/STM-16 (2.5 Gb/s) links Europe: OC-192/STM-64 (10 Gb/s) links DWDM with 8-40 λ’s/fiber Equipment Core: Cisco GSR 12000/12416 (backbone), 10720 metro edge router Edge: Cisco 75xx series Optical: Ciena Sentry 4000, Ciena CoreDirector
SprintLink™ IP Design Philosophy Large networks exhibit arch., design & engg. (ADE) non-linearities not seen at smaller scales Even small things can & do cause huge effects ( amplification ) More simultaneous events mean greater likelihood of interaction ( coupling ) Simplicity Principle: simple networks are easier to operate & scale Complexity prohibits efficient scaling, driving up CAPEX and OPEX! Confine intelligence to the edges No state in the network core/backbone Fastest forwarding of packets in core Ensure packets encounter minimal queueing
SprintLink™ Deployment Strategy
SprintLink™ Design Principles Places great value on traffic measurement & monitoring Uses it for Design, operations, management Dimensioning, provisioning SLAs, pricing Minimizing the extent of complex TE & QoS in the core
Sprint’s Monitoring Methodology Adapted from [Diot99] Analysis platform located at Sprint ATL, Burlingame, CA
Sprint Approach to TE Aim: Thoroughly understand backbone traffic dynamics Answer questions such as: Composition of traffic? Origin of traffic? Between any pair of POPs What is the traffic demand? Volume of traffic? Traffic patterns? (In time? In space?) How is this demand routed? How does one design traffic matrices optimally?
Obtaining Traffic Matrices between POPs
A Peek at a Row of a Traffic Matrix Adapted from [Bhattacharya02] Summary of data collected: distribution of aggregate access traffic across other POPs in the Sprint backbone [Diagram: Sprint POP under study, with access links from Peer 1, Peer 2, Web 1, Web 2, and an ISP feeding the backbone]
Applications of Traffic Matrices Traffic engineering Verify BGP peering Intra-domain routing SLA drafting Customer reports
Routing of Demands in the Sprint Backbone Matrices provide insight into aggregate traffic behavior Do not show the paths demands follow over the backbone In reality IS-IS link weights are hand-crafted by network ops. experts Weights chosen to restrict traffic between an ingress-egress POP pair to only a few paths through the backbone Intra-POP link weights heavily influence backbone paths Result: Despite several alternate paths between POPs Many remain underutilized Few have very high utilization
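Under the hood this is just shortest-path routing over hand-crafted weights, so nudging a single link weight can shift all traffic between a POP pair onto a different path. A small Dijkstra sketch on a hypothetical three-node topology:

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over configured link weights."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Toy POP topology: raising the A-B weight from 10 to 40 shifts A->B
# traffic onto the A-C-B path.
graph = {"A": [("B", 40), ("C", 10)], "B": [("A", 40), ("C", 10)],
         "C": [("A", 10), ("B", 10)]}
print(shortest_path(graph, "A", "B"))  # ['A', 'C', 'B']
```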
Link Utilization Across the Sprint IP Backbone Almost 50% of   the links have utilization under 15%! 8% of the links are 60% utilized Observe Extent of link underutilization Disparity in utilization levels Need better load balancing rules  Require a systematic, policy-based approach to do so Source [Bhattacharya02]
Techniques for Aggregate Load Balancing Effective load balancing across the backbone ... Knowing how to split traffic over multiple alternate paths Criteria used depend on purpose Different service levels → use TOS byte or protocol field Backbone routing → use destination address (DA) as basis Gather inter-POP traffic into streams per DA-based prefixes E.g., an N-bit prefix produces a pN stream Assign streams to different paths to balance network load
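Concretely, a pN stream is all traffic whose destination shares the same top N bits. A sketch that buckets destinations into p8 streams and assigns each new stream to an alternate path (the addresses and path names are illustrative):

```python
import ipaddress

def stream_key(dst_ip: str, prefix_len: int = 8) -> str:
    """Map a destination address to its pN stream (network of the top N bits)."""
    net = ipaddress.ip_network(f"{dst_ip}/{prefix_len}", strict=False)
    return str(net)

paths = ["path-1", "path-2", "path-3"]  # alternate backbone paths (illustrative)
assignment = {}

for dst in ["4.2.2.1", "4.100.9.7", "12.0.0.5", "128.32.1.1"]:
    s = stream_key(dst)  # the p8 stream this destination falls into
    if s not in assignment:
        assignment[s] = paths[len(assignment) % len(paths)]
    print(dst, "->", s, "->", assignment[s])
# 4.2.2.1 and 4.100.9.7 fall in the same p8 stream (4.0.0.0/8) and so
# always follow the same path, preserving per-stream packet ordering.
```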
Observations on Aggregate Streams Examine traffic volume & stability of streams over the interval for which load balancing is to be performed Findings Elephants and mice ... Few very high-vol. streams, many low-vol. streams Ranking of streams stable over large timescales Phenomenon is recursive E.g., a p8 elephant sub-divided into p16 streams also has elephants & mice! Result Engineering the network for elephants alone gives practically all of the benefits of TE! (good for scalability as well)
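Once per-stream volumes are measured, identifying the elephants reduces to sorting by volume and keeping the smallest set that covers, say, 90% of the traffic. A sketch with made-up volumes:

```python
# Hypothetical per-stream volumes (arbitrary units), typical elephant/mice shape.
volumes = {"s1": 500, "s2": 300, "s3": 120, "s4": 40, "s5": 20,
           "s6": 10, "s7": 6, "s8": 4}

def elephants(volumes: dict, coverage: float = 0.9) -> list:
    """Smallest set of streams (largest first) covering `coverage` of total."""
    total = sum(volumes.values())
    picked, running = [], 0.0
    for name, vol in sorted(volumes.items(), key=lambda kv: -kv[1]):
        picked.append(name)
        running += vol
        if running / total >= coverage:
            break
    return picked

print(elephants(volumes))  # ['s1', 's2', 's3'] -- 3 of 8 streams carry 92%
```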
Actual Behavior of Streams in the Sprint Backbone Elephants retain a large share of the bandwidth & maintain their ordering Source [Bhattacharya02] [Figures: time-of-day variation of elephants & mice to a busy egress POP; distribution of traffic from the p8 streams of the POP under study to 3 egress POPs, ordered by decreasing traffic volume] Fewer than 10 of the largest streams account for up to 90% of the traffic
Agenda Traffic engineering techniques & approaches Global Crossing Sprint Backbone traffic characterization for QoS via capacity management [Joint work with Thomas Telkamp (Global Crossing), Arman Maghbouleh (Cariden Technologies), Stephen Gordon (SAIC, former C&W)]
QoS for Backbone IP Networks QoS – the nature of the packet delivery service realized in the network Characterized by achieved: bandwidth, delay, jitter, loss For backbone networks: No link oversubscription → achieved b/w ~ desired b/w Controlled O/P queue size → bounded packet delays Bounded packet delays → bounded jitter, no packet loss → Backbone QoS ≈ latency characteristics of traffic (packet delay and jitter)
Relevant Timescales Long-term: > 5 minutes Short-term: < 5 minutes [Figure: timescale axis from 0 through 100 ms, 1 sec, 10 sec, 1 min, to 1 h, relating timescale to dynamics and characteristics: from intra-flow and TCP (RTT) dynamics with flow sizes/durations at short timescales, up to aggregate flows and users/applications with diurnal variation at long timescales]
Timescales Critical for QoS Some of the most stringent QoS requirements for IP traffic arise when carrying voice (e.g. ITU G.114) Requirements include: Packet delay (one-way) < 150 ms End-to-end jitter < 20 ms (for toll-quality voice) → Need resolution at millisecond timescales to understand Trajectory of individual packets Queueing behavior in the core Good performance at ms timescales extends naturally to larger timescales
Short-term Traffic Characterization Investigate burstiness  within  5-minute intervals Measure at timescale critical for queueing E.g.,  1 ms, 5 ms, or 10 ms Analyze statistical properties Variance, autocorrelation, … Done one-time at specific locations, as it involves Complex setup Voluminous data collection
Data Collection and Measurement 12 traces, 30 seconds each Collected over a month Different times and days Mean b/w 126 – 290 Mbps (<< 1 Gbps) → no queueing/shaping on O/P interface Trace utilizations uniformly < 1 Gbps over any 1 ms interval [Setup: Shomiti fiber tap on the GbE backbone link, feeding an analyzer on a measurement PC]
Raw Results 30 sec of data, 1 ms scale Mean = 950 Mbps Max. = 2033 Mbps Min. = 509 Mbps 95th percentile: 1183 Mbps 5th percentile: 737 Mbps ~250 packets per interval [Plot annotations: mean rate over 30 sec; output queue rate (available link bandwidth)]
Traffic Distribution Histogram (1ms scale) Fits a normal probability distribution well (std. dev. = 138 Mbps) No heavy tails Suggests a small over-provisioning factor
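Such a fit is easy to sanity-check: compare the 1 ms rate samples against a normal with the sample mean and standard deviation, and look for excess mass in the tails. A sketch, with synthetic samples standing in for the trace:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 30 s of 1 ms rate samples (Mbps); real data came from the tap.
samples = rng.normal(loc=950, scale=138, size=30_000)

mu, sigma = samples.mean(), samples.std()
# Under a normal fit, ~99.7% of samples should fall within mu +/- 3*sigma;
# a heavy-tailed trace would show noticeably more mass beyond 3 sigma.
frac_beyond_3sigma = np.mean(np.abs(samples - mu) > 3 * sigma)
print(f"mean={mu:.0f} Mbps, std={sigma:.0f} Mbps, "
      f"P(|X-mu|>3sigma)={frac_beyond_3sigma:.4f}")  # ~0.0027 if normal
```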
Autocorrelation Lag Plot (1ms scale) Scatter plot of consecutive samples of the time-series Are periods of high usage followed by other periods of high usage? Autocorrelation at 1 ms is 0.13 (≈ uncorrelated) → high-bandwidth bursts do not line up to cause marked queueing High autocorrelation → points concentrated along the 45° line Clearly not the case here
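The statistic behind this plot is just the lag-1 sample autocorrelation of the 1 ms rate series. A sketch (again with synthetic data in place of the trace):

```python
import numpy as np

def lag1_autocorr(x: np.ndarray) -> float:
    """Sample autocorrelation between consecutive elements of a series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)
series = rng.normal(950, 138, size=30_000)  # stand-in for 1 ms rate samples
print(lag1_autocorr(series))  # near 0 -> bursts do not line up in time
```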
Poisson versus Self-Similar Traffic [Figure: side-by-side traces of a Markovian process and a self-similar process; the self-similar one is scale invariant! Refs. [Liljenstolpe01], [Lothberg01]; [Tekinay99]]
Internet Traffic: Variance versus Timescale Random variable X: Var(X^(m)) = σ²·m⁻¹ Self-similar process with Hurst parameter H: Var(X^(m)) = σ²·m^(2H−2) Long-range dependence (LRD) → 0.5 < H < 1 → Var(X^(m)) converges to zero slower than the rate m⁻¹ (Note: m = sample size, σ² = Var(X)) Slope = −1 → Poisson: variance decreases in proportion to timescale Variance decreases more slowly → self-similarity [Plot annotation: 150 ms]
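The variance-time test reduces to a log-log regression: aggregate the series over blocks of size m, compute Var(X^(m)), and fit the slope, which is −1 for Poisson-like traffic and 2H−2 for a self-similar process. A sketch on synthetic uncorrelated data (so the expected answer is H ≈ 0.5):

```python
import numpy as np

def variance_time_slope(x: np.ndarray, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Fit log Var(X^(m)) vs log m; slope = 2H - 2 for self-similar traffic."""
    variances = []
    for m in block_sizes:
        n = len(x) // m
        blocks = x[: n * m].reshape(n, m).mean(axis=1)  # aggregate at scale m
        variances.append(blocks.var())
    slope = np.polyfit(np.log(block_sizes), np.log(variances), 1)[0]
    return slope, (slope + 2) / 2  # (slope, implied Hurst H)

rng = np.random.default_rng(2)
slope, H = variance_time_slope(rng.normal(0, 1, 2**16))
print(f"slope={slope:.2f}, H={H:.2f}")  # ~-1.0 and H~0.5 for uncorrelated data
```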
Traffic: Summary Long-term well behaved traffic Short-term uncorrelated traffic
IP Capacity Allocation Measurement data 5-min average utilization Performance goals, e.g. Packet loss < 1% Jitter < 10 ms End-to-end delay < 20 ms But … we have no “Erlang formulas” for IP traffic … Two approaches to a solution: (1) Model traffic, fit parameters, evaluate a parametric solution (2) Empirically derive guidelines by characterizing observed traffic ← the approach taken in this work
Queuing Simulation: Methodology Feed multiplexed, sampled traffic into a FIFO queue Measure the amount of traffic that violates a set delay bound [Diagram: three sampled traces (126, 240, and 206 Mbps) multiplexed into a 572 Mbps aggregate feeding a FIFO queue with a fixed service rate on the 622 Mbps output link under study; queuing delay is monitored. Example: 92% utilization]
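A minimal fluid version of this simulation: step through the per-millisecond arrival samples, accumulate a FIFO backlog, drain it at the fixed service rate, and record the implied queueing delay. A sketch under those assumptions, with synthetic stand-ins for the three sampled traces:

```python
import numpy as np

def fifo_delays(arrivals_mbps, service_mbps, dt=1e-3):
    """Fluid FIFO: per-interval arrivals (Mbps) vs. a fixed drain rate.
    Returns the queueing delay (ms) seen at the end of each interval."""
    backlog_mb = 0.0          # megabits queued
    delays_ms = []
    for rate in arrivals_mbps:
        backlog_mb += rate * dt                                # arrivals
        backlog_mb = max(0.0, backlog_mb - service_mbps * dt)  # drain
        delays_ms.append(backlog_mb / service_mbps * 1e3)
    return np.array(delays_ms)

rng = np.random.default_rng(3)
# Three multiplexed stand-in traces totalling ~572 Mbps into a 622 Mbps link.
arrivals = sum(rng.normal(mu, 0.15 * mu, 30_000).clip(min=0)
               for mu in (126, 240, 206))
delays = fifo_delays(arrivals, service_mbps=622)
print(f"P99.9 delay = {np.percentile(delays, 99.9):.2f} ms")
```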
Queuing Simulation: Results [Plot legend: + Simulation 622 Mbps; + Simulation 1000 Mbps; ---- M/M/1 622 Mbps; ---- M/M/1 1000 Mbps; utilizations of 89% and 93% marked]
Multi-hop Queueing: 8 hops P99.9 delay: Hop 1 = 2 ms, Hop 8 = 5.2 ms (the increase is sub-linear, not 8×) [Plot annotations: P99.9 = 2 ms at hop 1; P99.9 = 5.2 ms at hop 8]
Queueing: Summary Queueing simulation Backbone link (GbE) Over-provisioning ~7.5% to bound delay/hop to under 2 ms Higher speeds (2.5G/10G) Over-provisioning factor becomes very small Lower speeds (< 0.622G) Over-provisioning factor is significant P99.9 multi-hop delay/jitter is  not  additive
Applications to Network Planning QoS targets → “Headroom” (over-provisioning %) Derived experimentally by characterizing short-term traffic Traffic matrix Derivable from the stable, well-behaved, long-term traffic Determine the minimum capacity deployment required to meet objectives under normal and failure conditions How to use this for planning? Trending – study impact of growth over time Failure analysis – impact of failures on loading Optimization – LSP routing, IGP metrics
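Combining the two ingredients, the planning check itself is simple arithmetic: a link meets the delay target as long as its load, in the normal case and in each failure-reroute case, stays below (1 − headroom) × capacity. A sketch with illustrative loads:

```python
headroom = 0.075  # ~7.5% over-provisioning for <2 ms/hop on GbE (from above)

def link_ok(load_mbps: float, capacity_mbps: float) -> bool:
    """Delay target holds if utilization stays under (1 - headroom)."""
    return load_mbps <= (1 - headroom) * capacity_mbps

# Illustrative loads on one link: normal case, plus worst single-failure reroutes.
scenarios = {"normal": 610.0, "fail(link X)": 940.0, "fail(link Y)": 870.0}
capacity = 1000.0  # GbE

for name, load in scenarios.items():
    print(f"{name:14s} load={load:6.1f} Mbps "
          f"{'OK' if link_ok(load, capacity) else 'UPGRADE NEEDED'}")
```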
Acknowledgements Thomas Telkamp,  Global Crossing Robert J. Rockell, Jeff Chaltas, Ananth Nagarajan,  Sprint Steve Gordon,  SAIC (former C&W) Jennifer Rexford, Albert Greenberg, Carsten Lund,  AT&T Research Wai-Sum Lai,  AT&T Fang Wu,  NTT America Arman Maghbouleh, Alan Gous,  Cariden Technologies Yufei Wang,  VPI Systems Susan Cole,  OPNET Technologies
References [Bhattacharya02] S. Bhattacharya, et al., “POP-Level and Access-Link-Level Traffic Dynamics in a Tier-1 POP,” Proc. ACM SIGCOMM Internet Measurement Workshop, November 2001. [Diot99] C. Diot, “Tier-1 IP Backbone Network: Architecture and Performance,” Sprint Advanced Technology Labs, 1999. Available at: http://www.sprintlabs.com/Department/IP-Interworking/Monitor/ [Liljenstolpe01] Chris Liljenstolpe, “Design Issues in Next Generation Carrier Networks,” Proc. MPLS 2001, Washington, D.C., 7-9 October 2001. [Lothberg01] Peter Lothberg, “A View of the Future: The IP-Only Internet,” NANOG 22, Scottsdale, AZ, 20-22 May 2001, http://www.nanog.org/mtg-0105/lothberg.html
References [Morris00] Robert Morris and Dong Lin, “Variance of Aggregated Web Traffic,” IEEE Infocom ’00, Tel Aviv, Israel, March 2000, pp. 360-366. [Tekinay99] Zafer Sahinoglu and Sirin Tekinay, “On Multimedia Networks: Self-Similar Traffic and Network Performance,” IEEE Commun. Mag., vol. 37, no. 1, January 1999, pp. 48-53. [Xiao00] X. Xiao et al., “Traffic Engineering with MPLS in the Internet,” IEEE Network, vol. 14, no. 2, March/April 2000, pp. 28-33.


Editor's Notes

  • #20: Get up to 1 TB of data per day per POP! Timestamps have 2 µs accuracy; the captured header record is 44 bytes.
  • #21: Where does traffic come from, i.e., which sources/links/customers contribute to traffic, and how much? POPs: What is the variation of traffic by time of day? What is the distribution of traffic across aggregate flows? That is, obtain information on routing and traffic flow between POPs, in both time and space. Matrix design: Is there a better way to spread the traffic across the paths between POPs? At what granularity should this be done? We look at this in the techniques lecture.
  • #22: Transit time through a router matters, since it is critical for delay-sensitive applications, adds to e2e delay, and is useful for controlling QoS.
  • #23: Observations: This histogram shows that the common assumption that traffic from a source is uniformly distributed to all destinations does not match Internet behavior at all! This is because: Some POPs sink more traffic than others, based simply on geography, on where international trunks terminate, etc. The traffic distribution between POPs exhibits a significant degree of variation: the volume of traffic an egress POP receives depends on the number and type of customers attached to it. Likewise, the amount of traffic an ingress POP generates depends on the number and type of customers, access links, their speeds, etc.
  • #24: TE: If a new POP/link is added, can they predict where in the network they need to add new bandwidth? Conversely, where do they need an additional POP/link to tackle congestion or growing traffic demands? BGP peering: Are we carrying unwanted IP traffic? Are our peers’ announcements consistent with our BGP announcements? Intra-domain routing: verify load balancing? Design adaptive policies SLAs: Can use info. on how much traffic is exchanged between peers and how it varies to see what guarantees can be offered for delay, throughput, etc. Reports: Can use to generate reports for customers that verify that customer traffic is being correctly and consistently routed
  • #26: Better load balancing requires deviating from shortest-path routing, so one must ensure that significant delays are not introduced in the process. This is unlikely because: The backbone is highly meshed, so most alternate paths between an ingress-egress POP pair are only 1-2 hops longer than the shortest path. Average delay through routers is only a few ms, so the additional delay from a few extra hops will not be significant.