DWDM-RAM
Data@LIGHTspeed
# 1
A Platform for Large-Scale Grid Data Service on Dynamic High-Performance Networks
T. Lavian, D. B. Hoang, J. Mambretti, S. Figueira, S. Naiksatam, N. Kaushik,
I. Monga, R. Durairaj, D. Cutrell, S. Merrill, H. Cohen, P. Daspit, F. Travostino
Presented by Tal Lavian
NTONC - National Transparent Optical Network Consortium
Defense Advanced Research Projects Agency
BUSINESS WITHOUT BOUNDARIES
# 2 
Topics 
• Limitations of Current IP Networks 
• Why Dynamic High-Performance Networks 
and DWDM-RAM? 
• DWDM-RAM Architecture 
• An Application Scenario 
• Testbed and DWDM-RAM Implementation 
• Experimental Results 
• Simulation Results 
• Conclusion
Limitations of Current Network Infrastructures 
# 3 
Packet-Switched Limitation 
• Packet switching is NOT appropriate for data-intensive applications => substantial overhead, delays, CapEx, OpEx
• Limited control and isolation of network bandwidth
Grid Infrastructure Limitation
• Difficulty in encapsulating network resources
• No notion of network resources as scheduled Grid services
# 4 
Why Dynamic High-Performance 
Networks? 
• Supports data-intensive Grid applications
• Gives adequate and uncontested bandwidth to an application's bursts
• Employs circuit switching for large data flows, avoiding the overhead of breaking flows into small packets and the associated routing delays
• Is capable of automatic end-to-end path provisioning
• Is capable of automatic wavelength switching
• Provides a set of protocols for managing dynamically provisioned wavelengths
# 5 
Why DWDM-RAM ? 
• New platform for data intensive (Grid) applications 
– Encapsulates “optical network resources” into a service 
framework to support dynamically provisioned and 
advanced data-intensive transport services 
– Offers network resources as Grid services for Grid 
computing 
– Allows cooperation of distributed resources 
– Provides a generalized framework for high-performance applications over next-generation networks, not necessarily optical end-to-end
– Yields good overall utilization of network resources
# 6 
DWDM-RAM 
• The generic middleware architecture consists of two 
planes over an underlying dynamic optical network 
– Data Grid Plane 
– Network Grid Plane 
• The middleware architecture modularizes components 
into services with well-defined interfaces 
• DWDM-RAM separates services into 2 principal service 
layers 
– Application Middleware Layer: Data Transfer Service, 
Workflow Service, etc. 
– Network Resource Middleware Layer: Network Resource 
Service, Data Handler Service, etc. 
• Beneath both layers, a Dynamic Lambda Grid Service operates over a Dynamic Optical Network
# 7 
DWDM-RAM Architecture
[Architecture diagram]
• Data-Intensive Applications invoke the Data Transfer Service through the DTS API (Application Middleware Layer).
• The Network Resource Service (Basic Network Resource Service, Network Resource Scheduler, Data Handler Service, and Information Service) is exposed through the NRS Grid Service API (Network Resource Middleware Layer).
• A λ "OGSI-ification" API binds these Grid services to the Connectivity and Fabric Layers: dynamic lambdas, optical bursts, etc., with optical path control.
• Data Centers are interconnected over wavelengths λ1 … λn.
DWDM-RAM vs. Layered Grid Architecture
# 8
[Side-by-side layer diagram]
Layered Grid:
• Application
• Collective ("Coordinating multiple resources"): ubiquitous infrastructure services, app-specific distributed services
• Resource ("Sharing single resources"): negotiating access, controlling use
• Connectivity ("Talking to things"): communication (Internet protocols) & security
• Fabric ("Controlling things locally"): access to, & control of, resources
Layered DWDM-RAM:
• Application
• Data Transfer Service (Application Middleware Layer, DTS API)
• Network Resource Service (Network Resource Middleware Layer, NRS Grid Service API)
• Data Lambda Grid Service (λ OGSI-ification API)
• Optical Control Plane and λ's (Connectivity & Fabric Layer)
# 9 
Data Transfer Service Layer 
• Presents an OGSI interface 
between an application and a 
system – receives high-level 
requests, policy-and-access 
filtered, to transfer named 
blocks of data 
• Reserves and coordinates 
necessary resources: network, 
processing, and storage 
• Provides Data Transfer 
Scheduler Service (DTS) 
• Uses OGSI calls to request 
network resources 
[Diagram: a Client App sends requests to the DTS, which calls the NRS; an FTP client at the Data Receiver and an FTP server at the Data Source carry the data over the provisioned λ.]
# 10 
Network Resource Service Layer 
• Provides an OGSI-based interface to network 
resources 
• Provides an abstraction of “communication 
channels” as a network service 
• Provides an explicit representation of the network resource scheduling model
• Enables capabilities for dynamic on-demand 
provisioning and advance scheduling 
• Maintains schedules and provisions resources in 
accordance with the schedule
# 11 
The Network Resource Service
• On Demand
– Constrained window
– Under-constrained window
• Advance Reservation
– Constrained window: a tight window that fits the transfer time closely
– Under-constrained window: a large window that fits the transfer time loosely, allowing flexibility in scheduling (see the sketch below)
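As an illustration only (not DWDM-RAM code), the constrained vs. under-constrained distinction can be expressed by comparing the requested transfer time with the length of the reservation window; the class name, fields, and slack threshold below are assumptions made for this sketch.

import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of a reservation window as described on this slide.
// An under-constrained window leaves slack, giving the scheduler freedom
// to slide the transfer (and later reschedule it) inside the window.
public class ReservationWindow {
    final Instant startAfter;    // earliest acceptable start
    final Instant endBefore;     // latest acceptable completion
    final Duration transferTime; // estimated time the transfer needs

    public ReservationWindow(Instant startAfter, Instant endBefore, Duration transferTime) {
        this.startAfter = startAfter;
        this.endBefore = endBefore;
        this.transferTime = transferTime;
    }

    /** Slack between the window length and the transfer time. */
    public Duration slack() {
        return Duration.between(startAfter, endBefore).minus(transferTime);
    }

    /** Under-constrained if the window leaves substantially more room than the
        transfer needs; the 25% threshold is an arbitrary illustrative choice. */
    public boolean isUnderConstrained() {
        return slack().toMillis() > transferTime.toMillis() / 4;
    }
}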
# 12 
Dynamic Lambda Grid Service 
• Presents an OGSI interface between the network 
resource service and the network resources of the 
underlying network 
• Establishes, controls, and deallocates complete 
paths across both optical and electronic domains 
• Operates over a dynamic optical network
# 13 
An Application Scenario 
A High Energy Physics group may wish to move a 100-terabyte
data block from a particular run or set of events at an
accelerator facility to its local or remote computational
machine farm for extensive analysis
• Client requests: "Copy data X to the local store on
machine Y after 1:00 and before 3:00."
• Client receives a "ticket" which describes the resulting
schedule and provides a means of modifying and
monitoring the scheduled job (see the sketch below)
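A minimal sketch of what such a client request could look like; the types and method names below are hypothetical stand-ins for the OGSI-style DTS interface described on these slides, not the actual DWDM-RAM API, and the 1:00-3:00 window is assumed to be in the afternoon.

import java.time.LocalTime;

// Hypothetical sketch of the scenario above: ask the Data Transfer Service
// to copy a named data block within a time window and get back a ticket.
interface TransferTicket {
    LocalTime scheduledStart();   // when the transfer is currently planned
    String status();              // e.g. "SCHEDULED", "RUNNING", "DONE"
}

interface DataTransferService {
    // "Copy data X to the local store on machine Y after 1:00 and before 3:00."
    TransferTicket requestTransfer(String dataset, String destination,
                                   LocalTime startAfter, LocalTime endBefore);
}

public class CopyDataExample {
    static void run(DataTransferService dts) {
        TransferTicket ticket = dts.requestTransfer(
                "datasetX",              // named block of data to move
                "machineY:/local/store", // destination store
                LocalTime.of(13, 0),     // after 1:00 (pm assumed)
                LocalTime.of(15, 0));    // before 3:00 (pm assumed)
        System.out.println("Scheduled for " + ticket.scheduledStart()
                + ", current status: " + ticket.status());
    }

    public static void main(String[] args) {
        // Trivial stub so the sketch runs end to end; a real deployment would
        // bind to the remote DTS Grid service instead.
        DataTransferService dts = (dataset, destination, startAfter, endBefore) -> new TransferTicket() {
            public LocalTime scheduledStart() { return startAfter; }
            public String status() { return "SCHEDULED"; }
        };
        run(dts);
    }
}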
# 14 
An Application Scenario (cont’d) 
• At application level: Data Transfer Scheduler Service creates a 
tentative plan for data transfers that satisfies multiple requests 
over multiple network resources distributed at various sites 
• At middleware level: A network resource schedule is formed
based on an understanding of the dynamic lightpath
provisioning capability of the underlying network and its
topology and connectivity
• At resource provisioning level: Actual physical optical network 
resources are provisioned and allocated at the appropriate time 
for a transfer operation 
• Data Handler Service on the receiving node is contacted to 
initiate the transfer 
• At the end of the data transfer process, the network resources 
are de-allocated and returned to the pool
# 15 
NRS Interface and Functionality 
// Bind to an NRS service:
NRS = lookupNRS(address);

// Request cost function evaluation:
request = {pathEndpointOneAddress,
           pathEndpointTwoAddress,
           duration,
           startAfterDate,
           endBeforeDate};
ticket = NRS.requestReservation(request);

// Inspect the ticket to determine success, and to find
// the currently scheduled time:
ticket.display();

// The ticket may now be persisted and used from another location:
NRS.updateTicket(ticket);

// Inspect the ticket to see if the reservation's scheduled time has changed,
// or verify that the job completed, with any relevant status information:
ticket.display();
# 16 
Testbed and Experiments 
• Experiments have been performed on the OMNInet 
– End-to-end FTP transfer over a 1Gbps link 
[Service-control diagram, repeated on slide 25: Grid service requests from the Data Grid Service Plane flow to the Network Service Plane; its service control issues network service requests to ODIN on the OMNInet control plane, which drives UNI-N connection control over L3 routers and L2 switches in the data transmission plane, setting up data paths (λ1 … λn) between data centers and the data storage switch.]
# 17 
OMNInet Testbed
[Topology diagram]
• Four photonic nodes (W Taylor, Sheridan, Lake Shore, S. Federal), each with an Optera Metro 5200 OFA, a 10 Gb/s TSPR photonic switch carrying λ1–λ4, and 10 GE trunk interfaces.
• 8x8x8λ scalable photonic switch; trunk side 10G DWDM; OFA on all trunks; ASTN control plane.
• Dedicated fiber spans between nodes: #2 – 10.3 km, #4 – 7.2 km, #5 – 24 km, #6 – 24 km, #8 – 6.7 km, #9 – 5.3 km.
• Passport 8600 switches provide 10/100/1000 Ethernet user ports and 10 GE uplinks (WAN PHY interfaces, 1310 nm 10 GbE; 10GE LAN PHY added Oct 04) to Grid clusters and Grid storage (2 x gigE).
• OM5200 sites at EVL/UIC, LAC/UIC, and TECH/NU connect through StarLight for interconnection with other research networks.
The Network Resource Scheduler Service
# 18
Under-constrained window
[Three Gantt-style timelines over 3:30-5:30 showing User W's and User X's reservations on Segment D before and after rescheduling]
• Request for 1/2 hour between 4:00 and 5:30 on Segment D granted to User W at 4:00
• New request from User X for the same segment for 1 hour between 3:30 and 5:00
• Reschedule User W to 4:30 and grant User X 3:30. Everyone is happy.
A route is allocated for a time slot; when a new request comes in, the first route can be rescheduled to a later slot within its window to accommodate the new request (see the sketch below).
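To make the rescheduling step above concrete, here is a small self-contained sketch of the kind of check a scheduler could perform: an existing reservation that still fits later inside its own under-constrained window is slid forward to free the slot a new request needs. The class, field names, and greedy strategy are illustrative assumptions, not the actual NRS scheduler.

// Hypothetical sketch of the slide's example: User W holds 4:00-4:30 within
// a 4:00-5:30 window; User X asks for 3:30-4:30. Sliding W to 4:30-5:00
// (still inside W's window) lets both requests be granted.
public class RescheduleSketch {
    static class Reservation {
        int windowStart, windowEnd; // acceptable window, minutes since midnight
        int durationMin;            // required duration
        int slotStart;              // currently granted start time

        Reservation(int windowStart, int windowEnd, int durationMin, int slotStart) {
            this.windowStart = windowStart;
            this.windowEnd = windowEnd;
            this.durationMin = durationMin;
            this.slotStart = slotStart;
        }

        int slotEnd() { return slotStart + durationMin; }
    }

    /** Try to slide `existing` later within its window so newStart..newEnd becomes free. */
    static boolean slideToAccommodate(Reservation existing, int newStart, int newEnd) {
        boolean overlaps = newStart < existing.slotEnd() && existing.slotStart < newEnd;
        if (!overlaps) return true;                       // nothing to do
        int candidate = newEnd;                           // earliest start after the new request
        if (candidate + existing.durationMin <= existing.windowEnd) {
            existing.slotStart = candidate;               // reschedule within the window
            return true;
        }
        return false;                                     // window too tight; request would block
    }

    public static void main(String[] args) {
        Reservation userW = new Reservation(16 * 60, 17 * 60 + 30, 30, 16 * 60); // 4:00-5:30 window, 30 min at 4:00
        boolean ok = slideToAccommodate(userW, 15 * 60 + 30, 16 * 60 + 30);      // User X wants 3:30-4:30
        System.out.println(ok + ", W now starts at minute " + userW.slotStart);  // true, 990 (= 4:30)
    }
}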
# 19 
20GB File Transfer
# 20
Initial Performance Measure: End-to-End Transfer Time
[Timeline of a 20 GB transfer; phases and durations, left to right:]
• Path allocation request arrives: 0.5 s
• ODIN server processing: 3.6 s
• Path ID returned: 0.5 s
• Network reconfiguration: 25 s
• FTP setup: 0.14 s
• Data transfer (file transfer): 174 s
• File transfer done, path released; path deallocation request: 0.3 s
• ODIN server processing (teardown): 11 s
Transaction Demonstration Time Line
# 21
[Timeline, time in seconds from -30 to 660: Customer #1 and Customer #2 alternately accumulate transactions and then transfer them; each transfer is preceded by "allocate path" and followed by "de-allocate path"; the demonstration runs on a 6-minute cycle time.]
# 22 
Conclusion 
• The DWDM-RAM platform forges close cooperation between
data-intensive Grid applications and network resources
• The DWDM-RAM architecture yields Data Intensive 
Services that best exploit Dynamic Optical Networks 
• Network resources become actively managed, scheduled 
services 
• This approach maximizes the satisfaction of high-capacity 
users while yielding good overall utilization of resources 
• The service-centric approach is a foundation for new types of 
services
# 23 
Backup slides
# 24 
DWDM-RAM Prototype Implementation (October 2003)
[Layered prototype diagram]
• Applications on top, using transports such as ftp, GridFTP, SABUL, FAST, etc.
• Middleware services: DHS, DTS, NRS, plus replication, disk, accounting, and authentication services.
• ODIN controls the OMNInet λ's; other DWDM networks can be driven in the same way.
DWDM-RAM Service Control Architecture
# 25
[Same service-control diagram as on slide 16: the Data Grid Service Plane issues Grid service requests to the Network Service Plane; its service control sends network service requests to ODIN on the OMNInet control plane, which drives UNI-N connection control over L3 routers and L2 switches in the data transmission plane to set up data paths (λ1 … λn) between data centers and the data storage switch.]
# 26 
Application Level Measurements 
File size: 20 GB 
Path allocation: 29.7 secs 
Data transfer setup time: 0.141 secs 
FTP transfer time: 174 secs 
Maximum transfer rate: 935 Mbits/sec 
Path tear down time: 11.3 secs 
Effective transfer rate: 762 Mbits/sec
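These figures are mutually consistent: the effective rate is simply the 20 GB payload divided by the entire allocate-transfer-teardown sequence (a quick check, assuming 20 GB = 20 x 2^30 bytes):

Total time ≈ 29.7 + 0.141 + 174 + 11.3 ≈ 215 s
Payload ≈ 20 × 8 × 1024 ≈ 163,840 Mbit
Effective rate ≈ 163,840 / 215 ≈ 762 Mbit/s, versus the 935 Mbit/s peak observed during the FTP phase alone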
The Network Resource Service (NRS) 
• Provides an OGSI-based interface to network 
resources 
• Request parameters 
– Network addresses of the hosts to be connected 
– Window of time for the allocation 
– Duration of the allocation 
– Minimum and maximum acceptable bandwidth (future)
# 27
# 28 
The Network Resource Service 
• Provides the network resource 
– On demand 
– By advance reservation 
• Network is requested within a window 
– Constrained 
– Under-constrained
# 29 
OMNInet Testbed 
• Four-node multi-site optical metro testbed network in Chicago 
-- the first 10GigE service trial when installed in 2001 
• Nodes are interconnected as a partial mesh with lightpaths 
provisioned with DWDM on dedicated fiber. 
• Each node includes a MEMS-based WDM photonic switch, 
Optical Fiber Amplifier (OFA), optical transponders, and high-performance 
Ethernet switch. 
• The switches are configured with four ports capable of 
supporting 10GigE. 
• Application cluster and compute node access is provided by 
Passport 8600 L2/L3 switches, which are provisioned with 
10/100/1000 Ethernet user ports, and a 10GigE LAN port. 
• Partners: SBC, Nortel Networks, iCAIR/Northwestern 
University
Optical Dynamic Intelligent Network Services 
# 30 
(ODIN) 
• Software suite that controls the OMNInet through 
lower-level API calls 
• Designed for high-performance, long-term flows with
flexible and fine-grained control
• Stateless server, which includes an API to provide 
path provisioning and monitoring to the higher 
layers
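As a rough illustration of how a higher layer, such as the Dynamic Lambda Grid Service, might drive a stateless provisioning server of this kind, consider the hypothetical sketch below; the interface and method names are invented for illustration and are not ODIN's actual API.

// Hypothetical sketch only: an illustrative stand-in for a stateless
// path-provisioning interface of the kind described on this slide.
interface PathProvisioner {
    String setupPath(String srcNode, String dstNode);   // returns a path ID
    String pathStatus(String pathId);                    // monitoring hook
    void tearDownPath(String pathId);
}

class LambdaGridServiceSketch {
    private final PathProvisioner odin;

    LambdaGridServiceSketch(PathProvisioner odin) { this.odin = odin; }

    // Establish a lightpath for the scheduled transfer, hand off to the data
    // mover, then release the wavelength when the transfer completes.
    void runTransfer(String src, String dst, Runnable dataMover) {
        String pathId = odin.setupPath(src, dst);
        try {
            dataMover.run();
        } finally {
            odin.tearDownPath(pathId);   // return the resource to the pool
        }
    }
}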
# 31 
Blocking Probability
Under-constrained requests
[Chart: blocking probability (0 to 0.8) versus experiment number (1 to 6), with curves labelled 0%, 50%, 100%, and lower-bound.]
# 32 
Overheads - Amortization
When dealing with data-intensive applications, overhead is insignificant!
Setup time = 48 sec, Bandwidth = 920 Mbps
[Chart: setup time as a fraction of total transfer time (0-100%) versus file size (100 MB to 10 TB, log scale), with the 500 GB point marked.]
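For example, at the stated 48 s setup time and 920 Mbps bandwidth, the 500 GB point marked on the chart works out to roughly a 1% overhead (assuming 500 GB = 500 x 2^30 bytes):

Transfer time ≈ 500 × 8 × 1024 / 920 ≈ 4,450 s
Setup fraction ≈ 48 / (48 + 4,450) ≈ 1.1%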
Grids urged us to think End-to-End Solutions
Look past boxes, feeds, and speeds
Apps such as Grids call for a complex mix of:
• Bit-blasting
• Finesse (granularity of control)
• Virtualization (access to diverse knobs)
• Resource bundling (network AND …)
• Multi-Domain Security (AAA to start)
• Freedom from GUIs, human intervention
# 33
Our recipe is a software-rich symbiosis of Packet and Optical products
SOFTWARE!
# 34 
Optical Abundant Bandwidth Meets Grid 
The Data-Intensive App Challenge:
Emerging data-intensive applications in HEP, astrophysics,
astronomy, bioinformatics, computational chemistry, etc.,
require extremely high-performance, long-term data flows,
scalability to huge data volumes, global reach, adaptability
to unpredictable traffic behavior, and integration with
multiple Grid resources.
Response: DWDM-RAM
An architecture for data-intensive Grids enabled by
next-generation dynamic optical networks, incorporating
new methods for lightpath provisioning. DWDM-RAM is designed
to meet the networking challenges of extremely large-scale
Grid applications; traditional network infrastructure cannot
meet these demands, especially the requirements of intensive
data flows.
[Diagram: Data-Intensive Applications (PBs of storage) sit on DWDM-RAM, which sits on abundant optical bandwidth (Tb/s on a single fiber strand).]


Editor's Notes

  • #9: We define Grid architecture in terms of a layered collection of protocols. The Fabric layer includes the protocols and interfaces that provide access to the resources being shared: computers, storage systems, datasets, programs, and networks. This layer is a logical view rather than a physical view; for example, the view of a cluster with a local resource manager is defined by the local resource manager, not by the cluster hardware. Likewise, the fabric provided by a storage system is defined by the file system available on that system, not by the raw disks or tapes. The Connectivity layer defines the core protocols required for Grid-specific network transactions, including the IP protocol stack (system-level application protocols such as DNS, RSVP, and routing, plus the transport and internet layers) as well as core Grid security protocols for authentication and authorization. The Resource layer defines protocols to initiate and control sharing of (local) resources; services defined at this level include the gatekeeper and GRIS, along with some user-oriented application protocols from the Internet protocol suite, such as file transfer. The Collective layer defines protocols that provide system-oriented capabilities expected to be wide-scale in deployment and generic in function, including GIIS, bandwidth brokers, and resource brokers. The Application layer defines protocols and services that are parochial in nature, targeted toward a specific application domain or class of applications.