A Scalable, Commodity Data
Center Network Architecture
Mohammad Al-Fares, Alexander Loukissas, Amin Vahdat
Overview
• Background of Current DCN Architectures
• Desired properties in a DC Architecture
• Fat tree based solution
• Evaluation
• Conclusion
Common DC Topology
(Diagram) Internet → Core (Layer-3 routers) → Aggregation (Layer-2/3 switches) → Access (Layer-2 switches) → Servers in the data center
Background Cont.
 Oversubscription:
 Ratio of the worst-case achievable aggregate bandwidth among
the end hosts to the total bisection bandwidth of a particular
communication topology
(1:1 indicates that all hosts may communicate with any arbitrary
host at full bandwidth, e.g. 1 Gb/s for commodity Ethernet)
 Oversubscription is used to lower the total cost of the design
 Typical designs: factor of 2.5:1 (400 Mbps) to 8:1 (125 Mbps)
 Cost:
 Edge: $7,000 for each 48-port GigE switch
 Aggregation and core: $700,000 for 128-port 10GigE switches
 Cabling costs are not considered!
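The oversubscription arithmetic above can be sketched in a couple of lines; a minimal illustration, not code from the paper:

```python
def per_host_bandwidth(link_gbps: float, oversubscription: float) -> float:
    """Worst-case achievable per-host bandwidth (Gb/s) under a given
    oversubscription ratio."""
    return link_gbps / oversubscription

# 1 Gb/s commodity Ethernet edge links:
print(per_host_bandwidth(1.0, 2.5) * 1000)  # 400.0 Mbps
print(per_host_bandwidth(1.0, 8.0) * 1000)  # 125.0 Mbps
```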
Current DC Network Architectures
Two high-level choices for building a communication fabric for large-scale clusters:
 Leverages specialized hardware and communication protocols,
such as InfiniBand, Myrinet.
– These solutions can scale to clusters of thousands of nodes with high
bandwidth
– Expensive infrastructure, incompatible with TCP/IP applications
 Leverages commodity Ethernet switches and routers to
interconnect cluster machines
– Backwards compatible with existing infrastructures, low-cost
– Aggregate cluster bandwidth scales poorly with cluster size, and achieving
the highest levels of bandwidth incurs non-linear cost increase with cluster
size
Problems With Common DC Topology
• Single point of failure
• Oversubscription of links higher up in the
topology
Properties of the Solution (Fat-Tree)
• Backwards compatible with existing
infrastructure
– No changes in applications.
– Support of layer 2 (Ethernet)
• Cost effective
– Low power consumption & heat emission
– Cheap infrastructure
• Allows host communication at line speed.
Clos Networks/Fat-Trees
• Adopt a special instance of a Clos topology:
“K-ary FAT tree”
• Similar trends in telephone switches led to
designing a topology with high bandwidth by
interconnecting smaller commodity switches.
Fat-Tree Based DC Architecture
• Inter-connect racks (of servers) using a fat-tree topology
K-ary fat tree: three-layer topology (edge, aggregation and core)
– each pod consists of (k/2)² servers & 2 layers of k/2 k-port switches
– each edge switch connects to k/2 servers & k/2 aggr. switches
– each aggr. switch connects to k/2 edge & k/2 core switches
– (k/2)² core switches: each connects to k pods
Fat-tree with
K=4
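The component counts above follow directly from k; a small helper (the function and key names are my own, for illustration) makes the scaling concrete:

```python
def fat_tree_counts(k: int) -> dict:
    """Component counts for a k-ary fat-tree built from k-port switches."""
    assert k % 2 == 0, "k must be even"
    return {
        "pods": k,
        "edge_switches": k * (k // 2),         # k/2 per pod
        "aggregation_switches": k * (k // 2),  # k/2 per pod
        "core_switches": (k // 2) ** 2,
        "hosts": k ** 3 // 4,                  # (k/2)^2 hosts/pod * k pods
    }

print(fat_tree_counts(4))   # the K=4 example: 16 hosts, 4 core switches
print(fat_tree_counts(48))  # 48-port switches: 27648 hosts
```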
Fat-Tree Based Topology
• Why Fat-Tree?
– Fat tree has identical bandwidth at any bisection
– Each layer has the same aggregated bandwidth
• Can be built using cheap devices with uniform capacity
– Each port supports same speed as end host
– All devices can transmit at line speed if packets are distributed uniformly along
available paths
• Great scalability: a k-port switch supports k³/4 servers/hosts:
(k/2 hosts/switch * k/2 switches/pod * k pods)
Fat tree network with K = 6 supporting 54 hosts
Fat-tree DC meets the following goals:
• 1. Scalable interconnection bandwidth:
• Achieves the full bisection bandwidth of clusters consisting
of tens of thousands of nodes.
• 2. Backward compatibility. No changes to end
hosts.
• 3. Cost saving.
• Enforce a special (IP) addressing scheme in
DC
– 10.PodNumber.SwitchNumber.EndHost (allocated from the unused 10.0.0.0/8 block)
– Allows hosts attached to the same switch to route
only through that switch
– Allows intra-pod traffic to stay within the pod
• Existing routing protocols such as OSPF and
OSPF-ECMP are unsuitable:
– Traffic should be spread, not concentrated
– Congestion must be avoided
– Switches must be able to recognize and route on the scheme fast
FAT-Tree (Addressing)
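The addressing convention can be sketched as below; per the paper, pod switches take 10.pod.switch.1, hosts take 10.pod.switch.ID with ID in [2, k/2+1], and core switches take 10.k.j.i (the helper names are my own):

```python
def host_address(pod: int, switch: int, host_id: int) -> str:
    """Host addresses: 10.pod.switch.ID, with ID in [2, k/2+1]."""
    return f"10.{pod}.{switch}.{host_id}"

def core_address(k: int, j: int, i: int) -> str:
    """Core switch addresses: 10.k.j.i, with j, i in [1, k/2]."""
    return f"10.{k}.{j}.{i}"

print(host_address(1, 0, 2))   # 10.1.0.2: first host under edge switch 0 of pod 1
print(core_address(4, 1, 2))   # 10.4.1.2: a core switch in a k=4 fat-tree
```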
FAT-Tree (2-level Look-ups)
• Use two level look-ups to distribute traffic
and maintain packet ordering:
– First level is prefix lookup
• used to route down the topology to hosts.
– Second level is a suffix lookup
• used to route up towards core
• maintains packet ordering by using the same port for the
same destination host.
• Diffuses and spreads out traffic.
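A minimal sketch of the two-level lookup; the table contents and port numbers here are illustrative, not the paper's actual TCAM layout:

```python
import ipaddress

def two_level_lookup(dst: str, prefix_table, uplink_ports):
    """First level: a prefix match routes down toward a local subnet.
    Second level: the host-byte suffix picks the uplink, so all packets to
    the same destination host use the same upward port (preserving
    ordering) while different hosts are spread across the uplinks."""
    addr = ipaddress.ip_address(dst)
    for prefix, port in prefix_table:
        if addr in ipaddress.ip_network(prefix):
            return port
    host_byte = int(dst.rsplit(".", 1)[1])
    return uplink_ports[host_byte % len(uplink_ports)]

# Aggregation switch in pod 1 of a k=4 fat-tree:
prefixes = [("10.1.0.0/24", 0), ("10.1.1.0/24", 1)]  # downward routes
uplinks = [2, 3]                                      # ports toward the core
print(two_level_lookup("10.1.0.2", prefixes, uplinks))  # 0 (route down)
print(two_level_lookup("10.2.0.3", prefixes, uplinks))  # 3 (route up, by suffix)
```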
More on Fat-Tree DC Architecture
Diffusion Optimizations (optional dynamic
routing techniques)
1. Flow classification: define a flow as a sequence
of packets; pod switches forward subsequent packets
of the same flow to the same outgoing port, and
periodically reassign a minimal number of output
ports
– Eliminates local congestion
– Assigns traffic to ports on a per-flow basis
instead of a per-host basis, ensuring fair
distribution across flows
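The classifier's core idea can be sketched as follows (a hedged simplification; the data structures are my own, and the periodic minimal reassignment step is omitted for brevity):

```python
def classify(flow_id, flow_to_port, port_load, uplinks):
    """Subsequent packets of a known flow reuse its assigned port; a new
    flow is placed on the currently least-loaded uplink."""
    if flow_id not in flow_to_port:
        port = min(uplinks, key=lambda p: port_load[p])
        flow_to_port[flow_id] = port
        port_load[port] += 1
    return flow_to_port[flow_id]

flow_to_port, port_load = {}, {2: 0, 3: 0}
print(classify("A", flow_to_port, port_load, [2, 3]))  # 2 (first flow)
print(classify("B", flow_to_port, port_load, [2, 3]))  # 3 (balances load)
print(classify("A", flow_to_port, port_load, [2, 3]))  # 2 (same flow, same port)
```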
More on Fat-Tree DC Architecture
2. Flow scheduling: pay attention to routing large flows;
edge switches detect any outgoing flow whose size grows
above a predefined threshold, and then send a notification to a
central scheduler. The central scheduler tries to assign non-
conflicting paths for these large flows.
– Eliminates global congestion
– Prevent long lived flows from sharing the same
links
– Assign long lived flows to different links
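The scheduler's placement logic can be sketched as follows (a simplification under my own naming; the real scheduler tracks link-level conflicts along the whole path, not just core identity):

```python
def schedule_large_flows(large_flows, core_switches):
    """Greedily assign each reported large flow to a core switch not
    already carrying another large flow, if one is free."""
    assignment, used = {}, set()
    for flow in large_flows:
        free = [c for c in core_switches if c not in used]
        core = free[0] if free else core_switches[0]  # fall back when all busy
        assignment[flow] = core
        used.add(core)
    return assignment

print(schedule_large_flows(["f1", "f2"], ["core0", "core1"]))
# {'f1': 'core0', 'f2': 'core1'} -- long-lived flows kept off shared links
```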
Fault-Tolerance
In this scheme, each switch in the network maintains a BFD
(Bidirectional Forwarding Detection) session with each of its
neighbors to determine when a link or neighboring switch fails. 2
kinds of failures:
 Failure between upper layer and core switches
 Outgoing inter-pod traffic: local routing table marks
the affected link as unavailable and chooses another
core switch
 Incoming inter-pod traffic: the core switch broadcasts a tag
to directly connected upper switches signifying its
inability to carry traffic to that entire pod; upper
switches then avoid that core switch when assigning flows
destined to that pod
Fault-Tolerance
• Failure b/w lower and upper layer switches:
– Outgoing inter- and intra pod traffic from lower-layer:
– the local flow classifier sets the cost to infinity and
does not assign it any new flows, chooses another
upper layer switch
– Intra-pod traffic using upper layer switch as intermediary:
– Switch broadcasts a tag notifying all lower level
switches, these would check when assigning new
flows and avoid it
– Inter-pod traffic coming into upper layer switch:
– Tag to all its core switches signifying its inability to carry
traffic; core switches mirror this tag to all upper layer
switches, then upper switches avoid the affected core
switch when assigning new flows
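The cost-to-infinity reaction can be sketched in a few lines (an illustration with made-up names; a real switch would drive this from BFD session state):

```python
import math

link_cost = {("agg0", "core0"): 1, ("agg0", "core1"): 1}

def on_link_failure(link):
    """BFD detected a dead link: mark it so no new flows are assigned to it."""
    link_cost[link] = math.inf

def pick_uplink(candidates):
    """Choose the cheapest live uplink for a new flow."""
    return min(candidates, key=lambda l: link_cost[l])

on_link_failure(("agg0", "core0"))
print(pick_uplink([("agg0", "core0"), ("agg0", "core1")]))  # ('agg0', 'core1')
```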
Packaging
Drawback: the number of cables
needed.
Proposed packaging
solution:
• Incremental modeling
• Simplifies cluster
mgmt.
• Reduces cable size
• Reduces cost.
Evaluation
• Benchmark suite of communication mappings
to evaluate the performance of the 4-port fat-
tree using the TwoLevelTable switches, the
FlowClassifier and the FlowScheduler, and
compare to a hierarchical tree with a 3.6:1
oversubscription ratio
Results: Network Utilization
Results: Heat & Power Consumption
Conclusion
Bandwidth is the scalability bottleneck in large
scale clusters
Existing solutions are expensive and limit cluster
size
Fat-tree topology with scalable routing and
backward compatibility with TCP/IP and Ethernet
Large numbers of commodity switches have the
potential to displace high-end switches in the DC
the same way clusters of commodity PCs have
displaced supercomputers for high-end
computing environments
Critical Analysis of Fat-Tree Topology
9/22/2013 COMP6611A In-class Presentation 23
Drawbacks
• Scalability
• Unable to support different traffic types
efficiently
• Simple load balancing
• Specialized hardware
• No inherent support for VLAN traffic
• Ignored connectivity to the Internet
• Waste of address space
Scalability
• The size of the data center is fixed.
– For commonly available 24-, 32-, 48- and 64-port
commodity switches, such a fat-tree structure is
limited to sizes of 3456, 8192, 27648 and 65536 hosts,
respectively.
• The cost of incremental deployment is
significant.
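Each quoted size is just k³/4 hosts for a k-port switch; a one-liner confirms the figures:

```python
# Hosts supported by a k-ary fat-tree built from k-port switches.
sizes = {k: k ** 3 // 4 for k in (24, 32, 48, 64)}
print(sizes)  # {24: 3456, 32: 8192, 48: 27648, 64: 65536}
```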
Unable to support different traffic
types efficiently
• The proposed structure is switch-
oriented, which might not be sufficient to
support 1-to-x traffic.
• A flow can be set up between any pair of
servers with the help of the switching
fabric, but it is difficult to set up 1-to-many and
1-to-all flows efficiently using the proposed
algorithms.
Specialized hardware
• This architecture still relies on a customized
routing primitive that does not exist in
commodity switches.
• To implement the 2-table lookup, the lookup
routine in a router has to be modified.
– The authors achieve this by modifying a NetFPGA
router.
– This modification is yet to be incorporated into
current commodity switches.
No inherent support for VLAN traffic
• Agility
– The capacity to assign any server to any service
• Performance Isolation
– Services should not affect each other
Waste of address space
• Semantic information is embedded in the IP addresses of
servers and switches.
– e.g. 10.pod.switch.0/24, port
– A large portion of the address space is wasted.
• NAT is required at the border.
Possible Future Research
• Flexible incremental expandability in data
center networks.
• Support Valiant Load Balancing or other load
balancing techniques in the network.
• Reduce cabling, or increase tolerance to mis-wiring
References:
• A Scalable, Commodity Data Center Network Architecture, Dept. of CSE, University of California
• pages.cs.wisc.edu/~akella/CS740/F08/DataCenters.ppt
• http://networks.cs.northwestern.edu/EECS495-s11/prez/DataCenters-defence.ppt
• Hong Kong University of Science and Technology: http://www.cse.ust.hk/~kaichen/courses/spring2013/comp6611/