The Missing Link: Dedicated End-to-End 10Gbps Optical Lightpaths for Clusters, Grids, and Clouds

Invited Keynote Presentation
11th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing
Newport Beach, CA, May 24, 2011

Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technology
Harry E. Gruber Professor, Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD

Follow me on Twitter: lsmarr
Abstract
Today we are living in a data-dominated world where distributed scientific
instruments, as well as clusters, generate terabytes to petabytes of data which are
stored increasingly in specialized campus facilities or in the Cloud. It was in response to
this challenge that the NSF funded the OptIPuter project to research how user-controlled
10Gbps dedicated lightpaths (or "lambdas") could transform the Grid into a LambdaGrid.
This provides direct access to global data repositories, scientific instruments, and
computational resources from "OptIPortals," PC clusters which provide scalable
visualization, computing, and storage in the user's campus laboratory. The use of
dedicated lightpaths over fiber optic cables enables individual researchers to experience
"clear channel" 10,000 megabits/sec, 100-1000 times faster than over today's shared
Internet-a critical capability for data-intensive science. The seven-year OptIPuter
computer science research project is now over, but it stimulated a national and global
build-out of dedicated fiber optic networks. U.S. universities now have access to high
bandwidth lambdas through the National LambdaRail, Internet2's WaveCo, and the
Global Lambda Integrated Facility. A few pioneering campuses are now building
on-campus lightpaths to connect the data-intensive researchers, data generators, and vast
storage systems to each other on campus, as well as to the national network campus
gateways. I will give examples of the application use of this emerging high performance
cyberinfrastructure in genomics, ocean observatories, radio astronomy, and cosmology.
Large Data Challenge: Average Throughput to End User
          on Shared Internet is 10-100 Mbps


Tested January 2011

Transferring 1 TB:
-- 50 Mbps = 2 Days
-- 10 Gbps = 15 Minutes

           http://ensight.eos.nasa.gov/Missions/terra/index.shtml
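The two transfer times quoted above follow from simple division; a quick sketch (assuming a decimal terabyte, 8x10^12 bits, and that the link is the only bottleneck):

```python
# Transfer time for 1 TB at the two rates quoted on the slide.
TB_BITS = 1e12 * 8  # 1 TB as bits (decimal terabyte)

for label, rate_bps in [("50 Mbps shared Internet", 50e6),
                        ("10 Gbps lightpath", 10e9)]:
    hours = TB_BITS / rate_bps / 3600
    print(f"{label}: {hours:.1f} hours")

# 50 Mbps -> ~44 hours (about 2 days); 10 Gbps -> ~0.2 hours (about 13 minutes)
```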
OptIPuter Solution:
Give Dedicated Optical Channels to Data-Intensive Users

Wavelength Division Multiplexing (WDM) gives 10 Gbps per user, >100x shared Internet throughput: each user gets a dedicated wavelength, or "lambda" (c = λf).

Source: Steve Wallach, Chiaro Networks

Parallel Lambdas are Driving Optical Networking
The Way Parallel Processors Drove 1990s Computing
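The "lambda" terminology is literal: each channel is a distinct wavelength of light. Using the wavelength-frequency relation c = λf (my reconstruction of the slide's garbled formula), a worked example for a typical 1550 nm C-band carrier:

$$
f = \frac{c}{\lambda} = \frac{3\times10^{8}\ \mathrm{m/s}}{1550\times10^{-9}\ \mathrm{m}} \approx 193\ \mathrm{THz}
$$

WDM packs many such carriers, spaced tens of GHz apart, onto one fiber, which is why a single strand can carry dozens of independent 10 Gbps channels.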
Dedicated 10Gbps Lightpaths Tie Together
 State and Regional Fiber Infrastructure
Interconnects Two Dozen State and Regional Optical Networks

Internet2 WaveCo Circuit Network Is Now Available
The Global Lambda Integrated Facility--
Creating a Planetary-Scale High Bandwidth Collaboratory
Research Innovation Labs Linked by 10G Dedicated Lambdas




www.glif.is
Created in Reykjavik, Iceland, 2003

Visualization courtesy of Bob Patterson, NCSA.
High Resolution Uncompressed HD Streams
              Require Multi-Gigabit/s Lambdas
U. Washington Telepresence Using Uncompressed 1.5 Gbps HDTV Streaming Over IP on Fiber Optics: 75x Home Cable "HDTV" Bandwidth!

JGN II Workshop, Osaka, Japan, Jan 2005
[Photos: Prof. Smarr; Prof. Aoyama; Prof. (name garbled), Osaka]

"I can see every hair on your head!" —Prof. Aoyama

Source: U Washington Research Channel
Borderless Collaboration
Between Global University Research Centers at 10Gbps


iGrid 2005: The Global Lambda Integrated Facility
Maxine Brown, Tom DeFanti, Co-Chairs
www.igrid2005.org
September 26-30, 2005
Calit2 @ University of California, San Diego
California Institute for Telecommunications and Information Technology

100Gb of Bandwidth into the Calit2@UCSD Building
More than 150Gb GLIF Transoceanic Bandwidth!
450 Attendees, 130 Participating Organizations
20 Countries Driving 49 Demonstrations
1 or 10 Gbps Per Demo
Telepresence Meeting
Using Digital Cinema 4k Streams

4k = 4000x2000 Pixels = 4x HD; 100 Times the Resolution of YouTube!
Streaming 4k with JPEG 2000 Compression at ½ Gbit/sec
Lays Technical Basis for Global Digital Cinema

Keio University President Anzai and UCSD Chancellor Fox
Partners: Sony, NTT, SGI

Calit2@UCSD Auditorium
iGrid Lambda High Performance Computing Services:
       Distributing AMR Cosmology Simulations
• Uses ENZO Computational
  Cosmology Code
   – Grid-Based Adaptive Mesh
     Refinement Simulation Code
   – Developed by Mike Norman, UCSD
• Can One Distribute the Computing?
   – iGrid2005 to Chicago to Amsterdam
• Distributing Code Using Layer 3
  Routers Fails
• Instead Using Layer 2, Essentially
  Same Performance as Running on
  Single Supercomputer
   – Using Dynamic Lightpath
     Provisioning


                    Source: Joe Mambretti, Northwestern U
iGrid Lambda Control Services: Transform Batch to
 Real-Time Global e-Very Long Baseline Interferometry




• Goal: Real-Time VLBI Radio Telescope Data Correlation
• Achieved 512 Mbps Transfers from USA and Sweden to MIT
• Results Streamed to iGrid2005 in San Diego
         Optical Connections Dynamically Managed Using the
         DRAGON Control Plane and Internet2 HOPI Network
                  Source: Jerry Sobieski, DRAGON
The OptIPuter Project: Creating High Resolution Portals
Over Dedicated Optical Channels to Global Science Data
OptIPortal, running the Scalable Adaptive Graphics Environment (SAGE)

Picture Source: Mark Ellisman, David Lee, Jason Leigh

Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI
Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
What is the OptIPuter?

• Applications Drivers → Interactive Analysis of Large Data Sets
• OptIPuter Nodes → Scalable PC Clusters with Graphics Cards
• IP over Lambda Connectivity → Predictable Backplane
• Open Source LambdaGrid Middleware → Network is Reservable
• Data Retrieval and Mining → Lambda Attached Data Servers
• High Defn. Vis., Collab. SW → High Performance Collaboratory

www.optiputer.net
See Nov 2003 Communications of the ACM for Articles on OptIPuter Technologies
OptIPuter Software Architecture--a Service-Oriented
  Architecture Integrating Lambdas Into the Grid
[Layered architecture, top to bottom:]
• Distributed Applications / Web Services: Telescience; Data Services; Visualization (SAGE, JuxtaView, LambdaRAM, Vol-a-Tile)
• Distributed Virtual Computer (DVC) API: DVC Configuration, DVC Runtime Library
• DVC Services: DVC Job Scheduling, DVC Communication
• DVC Core Services: Resource Identify/Acquire, Namespace Management, Security Management, High Speed Communication, Storage Services
• Globus layer: PIN/PDC, GRAM, GSI, XIO, RobuStore, plus Discovery and Control
• Transport over Lambdas and IP: GTP, XCP, UDT, CEP, LambdaStream, RBUDP
OptIPortals Scale to 1/3 Billion Pixels Enabling Viewing
 of Very Large Images or Many Simultaneous Images




Spitzer Space Telescope (Infrared)

NASA Earth Satellite Images: San Diego Bushfires, October 2007

Source: Falko Kuester, Calit2@UCSD
The Latest OptIPuter Innovation:
 Quickly Deployable Nearly Seamless OptIPortables




[Photo: OptIPortable with its shipping case]

45-minute setup, 15-minute tear-down with two people (possible with one)
Calit2 3D Immersive StarCAVE OptIPortal

Connected at 50 Gb/s to Quartzite
30 HD Projectors!
15 Meyer Sound Speakers + Subwoofer

Passive Polarization: Optimized the Polarization Separation and Minimized Attenuation

Source: Tom DeFanti, Greg Dawe, Calit2

Cluster with 30 Nvidia 5600 cards: 60 GB Texture Memory
3D Stereo Head Tracked OptIPortal:
            NexCAVE




  Array of JVC HDTV 3D LCD Screens
    KAUST NexCAVE = 22.5MPixels
 www.calit2.net/newsroom/article.php?id=1584
       Source: Tom DeFanti, Calit2@UCSD
High Definition Video Connected OptIPortals:
Virtual Working Spaces for Data Intensive Research

2010: NASA Supports Two Virtual Institutes

LifeSize HD: Calit2@UCSD 10Gbps Link to NASA Ames Lunar Science Institute, Mountain View, CA

Source: Falko Kuester, Kai Doerr, Calit2; Michael Sims, Larry Edwards, Estelle Dodson, NASA
EVL’s SAGE OptIPortal VisualCasting
Multi-Site OptIPuter Collaboratory

SC08 Bandwidth Challenge Entry at Supercomputing 2008, Austin, Texas, November 2008: Streaming 4k, with Total Aggregate VisualCasting Bandwidth for Nov. 18, 2008 Sustained at 10,000-20,000 Mbps! (Graph shown at CENIC CalREN-XD Workshop, Sept. 15, 2008)

On site: EVL-UI Chicago, SARA (Amsterdam), U Michigan, GIST / KISTI (Korea), Osaka Univ. (Japan)
Remote: U of Michigan, UIC/EVL, U of Queensland, Russian Academy of Science, Masaryk Univ. (CZ)

Requires 10 Gbps Lightpath to Each Site

Source: Jason Leigh, Luc Renambot, EVL, UI Chicago
Using Supernetworks to Couple End User’s OptIPortal
 to Remote Supercomputers and Visualization Servers
Source: Mike Norman,
Rick Wagner, SDSC                                Argonne NL
                                                               DOE Eureka
                                          100 Dual Quad Core Xeon Servers
                                          200 NVIDIA Quadro FX GPUs in 50
                                              Quadro Plex S4 1U enclosures
                                                               3.2 TB RAM        rendering


                                                   ESnet
 SDSC                                              10 Gb/s fiber optic network
                                                                                     NICS
               visualization
                                                                                     ORNL
 Calit2/SDSC OptIPortal1
 20 30‖ (2560 x 1600 pixel) LCD panels
                                           NSF TeraGrid Kraken   simulation
                                                      Cray XT5
 10 NVIDIA Quadro FX 4600 graphics
                                          8,256 Compute Nodes
 cards > 80 megapixels
                                         99,072 Compute Cores
 10 Gb/s network throughout
                                                   129 TB RAM




                       *ANL * Calit2 * LBNL * NICS * ORNL * SDSC
National-Scale Interactive Remote Rendering of Large Datasets

SDSC to ALCF over ESnet's Science Data Network (SDN): >10 Gb/s Fiber Optic Network, Dynamic VLANs Configured Using OSCARS

Visualization (SDSC): OptIPortal (40M Pixels of LCDs), 10 NVIDIA FX 4600 Cards, 10 Gb/s Network Throughout
Rendering (ALCF, Eureka): 100 Dual Quad Core Xeon Servers, 200 NVIDIA FX GPUs, 3.2 TB RAM

Interactive Remote Rendering: Real-Time Volume Rendering Streamed from ANL to SDSC

Last Year: High-Resolution (4K+, 15+ FPS), But:
• Command-Line Driven
• Fixed Color Maps, Transfer Functions
• Slow Exploration of Data

Now: Driven by a Simple Web GUI:
• Rotate, Pan, Zoom
• GUI Works from Most Browsers
• Manipulate Colors and Opacity
• Fast Renderer Response Time

Source: Rick Wagner, SDSC
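The slides describe the web GUI only at a high level; as a purely hypothetical sketch (the endpoint URL, message fields, and JSON schema are invented for illustration, not the actual ANL/SDSC interface), a browser-to-renderer control channel could look like this:

```python
# Hypothetical rotate/pan/zoom control message for a remote volume renderer.
# Endpoint and schema are illustrative assumptions, not the SDSC implementation.
import json
import urllib.request

command = {
    "camera": {"rotate_deg": [15.0, 0.0, 0.0], "zoom": 1.2},
    "transfer_function": {"colormap": "hot", "opacity": 0.4},
}
req = urllib.request.Request(
    "http://renderer.example.org:8080/control",  # placeholder renderer endpoint
    data=json.dumps(command).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # renderer applies the update and streams new frames
```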
NSF OOI is a $400M Program;
OOI CI is a $34M Part of This




30-40 Software Engineers
Housed at Calit2@UCSD




           Source: Matthew Arrott, Calit2 Program Manager for OOI CI
OOI CI is Built on NLR/I2 Optical Infrastructure:
Physical Network Implementation

Source: John Orcutt, Matthew Arrott, SIO/Calit2
Cisco CWave for CineGrid: A New Cyberinfrastructure
for High Resolution Media Streaming (May 2007)

Source: John (JJ) Jamison, Cisco

CWave core PoPs:
• PacificWave, 1000 Denny Way (Westin Bldg.), Seattle
• StarLight, Northwestern Univ, Chicago
• Level3, 1360 Kifer Rd., Sunnyvale
• Equinix, 818 W. 7th St., Los Angeles
• McLean
• Calit2, San Diego (10GE waves on NLR and CENIC, LA to SD)

Cisco Has Built 10 GigE Waves on CENIC, PW, & NLR and Installed Large 6506 Switches for Access Points in San Diego, Los Angeles, Sunnyvale, Seattle, Chicago, and McLean for CineGrid Members. Some of These Points Are Also GLIF GOLEs.
CineGrid 4K Digital Cinema Projects:
"Learning by Doing"

CineGrid @ iGrid 2005; CineGrid @ AES 2006
CineGrid @ Holland Festival 2007; CineGrid @ GLIF 2007

Laurin Herr, Pacific Interface; Tom DeFanti, Calit2
First Tri-Continental Premiere of a Streamed 4K Feature Film With Global HD Discussion

July 30, 2009: Keio Univ., Japan; Calit2@UCSD; São Paulo, Brazil Auditorium
4K Film Director: Beto Souza

Source: Sheldon Brown, CRCA, Calit2

4K Transmission Over 10Gbps: 4 HD Projections from One 4K Projector
CineGrid 4K Remote Microscopy Collaboratory:
                     USC to Calit2
Photo: Alan Decker, December 8, 2009




                     Richard Weinberg, USC
Open Cloud OptIPuter Testbed--Manage and Compute
Large Datasets Over 10Gbps Lambdas

Interconnected by CENIC, NLR C-Wave, Dragon, and MREN

• 9 Racks
• 500 Nodes
• 1000+ Cores
• 10+ Gb/s Now
• Upgrading Portions to 100 Gb/s in 2010/2011

Open Source SW: Hadoop, Sector/Sphere, Nebula, Thrift, GPB, Eucalyptus, Benchmarks

Source: Robert Grossman, UChicago
Terasort on Open Cloud Testbed
Sustains >5 Gbps--Only 5% Distance Penalty!

Sorting 10 Billion Records (1.2 TB) at 4 Sites (120 Nodes)




        Source: Robert Grossman, UChicago
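The slide's numbers imply a concrete runtime; a back-of-the-envelope check (assuming the 1.2 TB payload moves at the sustained 5 Gbps and the sort is network-bound):

```python
# Back-of-the-envelope check of the Terasort numbers on this slide.
payload_bits = 1.2e12 * 8  # 10 billion records totalling 1.2 TB, in bits
rate_bps = 5e9             # sustained rate across the 4 sites
minutes = payload_bits / rate_bps / 60
print(f"~{minutes:.0f} minutes to move 1.2 TB at 5 Gbps")  # ~32 minutes

# The "5% distance penalty" means the wide-area run took only ~1.05x as long
# as the same sort confined to a single rack-local site.
```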
"Blueprint for the Digital University"--Report of the
UCSD Research Cyberinfrastructure Design Team (April 2009)

• Focus on Data-Intensive Cyberinfrastructure
• No Data Bottlenecks: Design for Gigabit/s Data Flows

research.ucsd.edu/documents/rcidt/RCIDTReportFinal2009.pdf
Campus Preparations Needed
to Accept CENIC CalREN Handoff to Campus




           Source: Jim Dolgonas, CENIC
Current UCSD Prototype Optical Core:
Bridging End-Users to CENIC L1, L2, L3 Services

Quartzite Communications Core, Year 3 endpoints:
• >= 60 endpoints at 10 GigE
• >= 32 packet switched
• >= 32 switched wavelengths
• >= 300 connected endpoints

Approximately 0.5 Tbit/s (32 x 10GigE) arrive at the "Optical" center of campus. Switching is a hybrid of packet, lambda, and circuit: a Glimmerglass production OOO switch, a Lucent wavelength selective switch, a Force10 packet switch, and a Juniper T320 connecting to the CalREN-HPR Research Cloud and the Campus Research Cloud. GigE switches with dual 10GigE uplinks fan out to cluster nodes over 4-pair fiber.

Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI)
Quartzite Network MRI #CNS-0421555; OptIPuter #ANI-0225642
Calit2 Sunlight Optical Exchange Contains Quartzite

[Photo: Maxine Brown, EVL, UIC, OptIPuter Project Manager]
UCSD Campus Investment in Fiber Enables
Consolidation of Energy Efficient Computing & Storage

[Diagram: N x 10Gb/s campus fiber links the WAN 10Gb connections (CENIC, NLR, I2) with Gordon (HPD System), the Cluster Condo, Triton (Petascale Data Analysis), DataOasis (Central) Storage, Scientific Instruments, the GreenLight Data Center, Digital Data Collections, a Campus Lab Cluster, and an OptIPortal Tiled Display Wall]

Source: Philip Papadopoulos, SDSC, UCSD
National Center for Microscopy and Imaging Research:
Integrated Infrastructure of Shared Resources

[Diagram: shared infrastructure connects the scientific instruments, local SOM infrastructure, and end user workstations]

Source: Steve Peltier, NCMIR
Community Cyberinfrastructure for Advanced
 Microbial Ecology Research and Analysis


       http://camera.calit2.net/
Calit2 Microbial Metagenomics Cluster:
Lambda Direct Connect Science Data Server

Source: Phil Papadopoulos, SDSC, Calit2

• 512 Processors, ~5 Teraflops
• ~200 Terabytes of Sun X4500 Storage
• 1GbE and 10GbE Switched / Routed Core, 10GbE to Storage
• 4000 Users From 90 Countries
Creating CAMERA 2.0 -
Advanced Cyberinfrastructure Service Oriented Architecture




Source: CAMERA CTO Mark Ellisman
OptIPuter Persistent Infrastructure Enables
        Calit2 and U Washington CAMERA Collaboratory
Photo Credit: Alan Decker, Feb. 29, 2008

Ginger Armbrust’s Diatoms: Micrographs, Chromosomes, Genetic Assembly

iHDTV: 1500 Mbits/sec Calit2 to UW Research Channel Over NLR
NSF Funds a Data-Intensive Track 2 Supercomputer:
SDSC’s Gordon, Coming Summer 2011
• Data-Intensive Supercomputer Based on
  SSD Flash Memory and Virtual Shared Memory SW
  – Emphasizes MEM and IOPS over FLOPS
  – Supernode has Virtual Shared Memory:
     – 2 TB RAM Aggregate
     – 8 TB SSD Aggregate
     – Total Machine = 32 Supernodes
     – 4 PB Disk Parallel File System >100 GB/s I/O
• System Designed to Accelerate Access
  to Massive Databases Being Generated in
  Many Fields of Science, Engineering, Medicine,
  and Social Science

            Source: Mike Norman, Allan Snavely SDSC
Rapid Evolution of 10GbE Port Prices
Makes Campus-Scale 10Gbps CI Affordable

• Port Pricing is Falling
• Density is Rising Dramatically
• Cost of 10GbE Approaching Cluster HPC Interconnects

Per-port price timeline:
• 2005: Chiaro, $80K/port (60 max)
• 2007: Force 10, $5K/port (40 max)
• 2009: Arista, $500/port (48 ports)
• 2010: Arista, $400/port (48 ports); ~$1000/port for high-density chassis (300+ max)

Source: Philip Papadopoulos, SDSC/Calit2
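The decline is steep enough to quantify; a one-liner computes the annual price multiplier implied by the 2005 Chiaro and 2010 Arista endpoints quoted above:

```python
# Annualized 10GbE per-port price decline implied by the slide's timeline.
start_price, end_price = 80_000, 400      # $/port: Chiaro (2005) -> Arista (2010)
years = 2010 - 2005
annual_factor = (end_price / start_price) ** (1 / years)
print(f"price multiplier per year: {annual_factor:.2f}")  # ~0.35, i.e. ~65%/yr drop
```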
Arista Enables SDSC’s Massive Parallel
10G Switched Data Analysis Resource

Radical Change Enabled by Arista 7508 10G Switch: 384 10G Capable Ports

[Diagram: 10Gbps links interconnect the OptIPuter, UCSD RCI, Co-Lo, CENIC/NLR, Triton, Trestles (100 TF), Dash, Gordon, existing commodity storage (1/3 PB), and the 2000 TB Data Oasis]

Oasis Procurement (RFP):
• Phase 0: >8 GB/s Sustained Today
• Phase I: >50 GB/sec for Lustre (May 2011)
• Phase II: >100 GB/s (Feb 2012)

Source: Philip Papadopoulos, SDSC/Calit2
Data Oasis – 3 Different Types of Storage
Calit2 CAMERA Automatic Overflows into SDSC Triton

@ Calit2: CAMERA transparently sends jobs to the submit portal on Triton
@ SDSC: the Triton Resource runs a CAMERA-managed job submit portal (VM)
10Gbps link with direct mount of the CAMERA data: no data staging
California and Washington Universities Are Testing
 a 10Gbps Lambda-Connected Commercial Data Cloud
• Amazon Experiment for Big Data
  – Only Available Through CENIC & Pacific NW
    GigaPOP
     – Private 10Gbps Peering Paths
  – Includes Amazon EC2 Computing & S3 Storage
    Services
• Early Experiments Underway
  – Phil Papadopoulos, Calit2/SDSC Rocks
  – Robert Grossman, Open Cloud Consortium
Using Condor and Amazon EC2 on the
Adaptive Poisson-Boltzmann Solver (APBS)

• APBS Rocks Roll (NBCR) + EC2 Roll + Condor Roll = Amazon VM
• Cluster Extension into Amazon Using Condor

[Diagram: the local cluster, running in the Amazon cloud, extends into three NBCR VMs in EC2; APBS + EC2 + Condor]

Source: Phil Papadopoulos, SDSC/Calit2
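The slides name the actual tooling (Rocks rolls plus Condor) without showing it; as a loose, hypothetical illustration of the "cluster extension into Amazon" step (the AMI ID, region, and instance type are placeholders, and direct boto calls are my substitution, not the NBCR stack), worker VMs could be launched like this:

```python
# Hypothetical cloud-burst sketch: start NBCR-style worker VMs in EC2 with boto.
# AMI ID, region, and instance type are placeholders; the talk's real setup
# used Rocks rolls + Condor rather than direct API calls.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")  # credentials come from the env
reservation = conn.run_instances(
    "ami-00000000",            # placeholder image baked with APBS + Condor
    min_count=3, max_count=3,  # three worker VMs, matching the slide's diagram
    instance_type="m1.large",
)
for instance in reservation.instances:
    print(instance.id, instance.state)  # Condor then absorbs them into the pool
```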
Hybrid Cloud Computing
                   with modENCODE Data
• Computations in Bionimbus Can Span the Community Cloud
  & the Amazon Public Cloud to Form a Hybrid Cloud
• Sector was used to Support the Data Transfer between
  Two Virtual Machines
    – One VM was at UIC and One VM was an Amazon EC2 Instance
• Graph Illustrates How the Throughput between Two Virtual
  Machines in a Wide Area Cloud Depends upon the File Size




Biological data (Bionimbus). Source: Robert Grossman, UChicago
OptIPlanet Collaboratory:
Enabled by 10Gbps "End-to-End" Lightpaths

[Diagram: an end user OptIPortal connects through a campus optical switch and 10G lightpaths across the National LambdaRail to HPC, local or remote instruments, data repositories & clusters, HD/4k video repositories, and HD/4k live video]
You Can Download This Presentation
        at lsmarr.calit2.net

More Related Content

PPT
End-to-end Optical Fiber Cyberinfrastructure for Data-Intensive Research: Imp...
PPT
An End-to-End Campus-Scale High Performance Cyberinfrastructure for Data-Inte...
PPT
New Applications of SuperNetworks and the Implications for Campus Networks
PPTX
The Future of R&E networks and cyber-infrastructure
PDF
The Future Applications of Australia’s National Broadband Network
PPT
High Performance Cyberinfrastructure Enables Data-Driven Science in the Globa...
PPT
High Performance Collaboration – The Jump to Light Speed
PDF
Grid is Dead ? Nimrod on the Cloud
End-to-end Optical Fiber Cyberinfrastructure for Data-Intensive Research: Imp...
An End-to-End Campus-Scale High Performance Cyberinfrastructure for Data-Inte...
New Applications of SuperNetworks and the Implications for Campus Networks
The Future of R&E networks and cyber-infrastructure
The Future Applications of Australia’s National Broadband Network
High Performance Cyberinfrastructure Enables Data-Driven Science in the Globa...
High Performance Collaboration – The Jump to Light Speed
Grid is Dead ? Nimrod on the Cloud

What's hot (19)

PPT
How Global-Scale Personal Lightwaves are Transforming Scientific Research
PPT
Metacomputer Architecture of the Global LambdaGrid: How Personal Light Paths ...
PPT
Metacomputer Architecture of the Global LambdaGrid
PDF
Supercomputer End Users: the OptIPuter Killer Application
PPT
How Global-Scale Personal Lighwaves are Transforming Scientific Research
PPT
High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biom...
PDF
Practical Fibre Optics for Engineers and Technicians
PPT
Towards Telepresence
PDF
Practical Fibre Optics & Interfacing Techniques to Industrial Ethernet and Wi...
PPT
A Gigabit in Every Home—The Emergence of True Broadband
PDF
Practical Fibre Optics and Interfacing Techniques to Industrial Ethernet and ...
PPT
Calit2 Projects in Cyberinfrastructure
PDF
Beyond Telepresence
PDF
Virtual classroom overview
PPT
Impact on Society – the Light at the end of the Tunnel
PPT
COMMON IBM Technology leadership and IT futures
PPT
OptIPuter Planetary-Scale Applications Overview
PPTX
Network Based Kernel Density Estimator for Urban Dynamics Analysis
PDF
Eastern Europe Partnership Event - 002 zoran jovanovic
How Global-Scale Personal Lightwaves are Transforming Scientific Research
Metacomputer Architecture of the Global LambdaGrid: How Personal Light Paths ...
Metacomputer Architecture of the Global LambdaGrid
Supercomputer End Users: the OptIPuter Killer Application
How Global-Scale Personal Lighwaves are Transforming Scientific Research
High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biom...
Practical Fibre Optics for Engineers and Technicians
Towards Telepresence
Practical Fibre Optics & Interfacing Techniques to Industrial Ethernet and Wi...
A Gigabit in Every Home—The Emergence of True Broadband
Practical Fibre Optics and Interfacing Techniques to Industrial Ethernet and ...
Calit2 Projects in Cyberinfrastructure
Beyond Telepresence
Virtual classroom overview
Impact on Society – the Light at the end of the Tunnel
COMMON IBM Technology leadership and IT futures
OptIPuter Planetary-Scale Applications Overview
Network Based Kernel Density Estimator for Urban Dynamics Analysis
Eastern Europe Partnership Event - 002 zoran jovanovic
Ad

Viewers also liked (20)

PPT
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Int...
PPT
Toward a Global Coral Reef Observatory
PPT
Genomic Research: The Jump to Light Speed
PPT
Sequencing Genomics: The New Big Data Driver
PPT
The Role of University Energy Efficient Cyberinfrastructure in Slowing Climat...
PPT
Calit2--Helping the University of California Drive Innovation in California
PPT
Predictive & Preventive Personalized Medicine
PPT
Engaging the Private Sector: UCSD Division of Calit2
PPT
Be Your Own Health Detective
PPT
Collaborations Between Calit2, SIO, and the Venter Institute-a Beginning
PPT
Toward Greener Cyberinfrastructure
PPT
Coupling Australia’s Researchers to the Global Innovation Economy
PDF
Project GreenLight: Optimizing Cyberinfrastructure for a Carbon Constrained W...
PPT
Attacking the Driver of Increased Stroke, Heart Disease, and Diabetes
PPT
Introduction to Calit2
PPT
From Digitally Enabled Genomic Medicine to Personalized Healthcare
PPT
Set My Data Free: High-Performance CI for Data-Intensive Research
PPT
Living in the Future
PDF
The OptIPuter and Its Applications
PPT
Building a Global Collaboration System for Data-Intensive Discovery
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Int...
Toward a Global Coral Reef Observatory
Genomic Research: The Jump to Light Speed
Sequencing Genomics: The New Big Data Driver
The Role of University Energy Efficient Cyberinfrastructure in Slowing Climat...
Calit2--Helping the University of California Drive Innovation in California
Predictive & Preventive Personalized Medicine
Engaging the Private Sector: UCSD Division of Calit2
Be Your Own Health Detective
Collaborations Between Calit2, SIO, and the Venter Institute-a Beginning
Toward Greener Cyberinfrastructure
Coupling Australia’s Researchers to the Global Innovation Economy
Project GreenLight: Optimizing Cyberinfrastructure for a Carbon Constrained W...
Attacking the Driver of Increased Stroke, Heart Disease, and Diabetes
Introduction to Calit2
From Digitally Enabled Genomic Medicine to Personalized Healthcare
Set My Data Free: High-Performance CI for Data-Intensive Research
Living in the Future
The OptIPuter and Its Applications
Building a Global Collaboration System for Data-Intensive Discovery
Ad

Similar to The Missing Link: Dedicated End-to-End 10Gbps Optical Lightpaths for Clusters, Grids, and Clouds (20)

PPTX
High Performance Cyberinfrastructure Enables Data-Driven Science in the Glob...
PDF
Preparing Your Campus for Data Intensive Researchers
PPT
Riding the Light: How Dedicated Optical Circuits are Enabling New Science
PPT
OptIPuter Overview
PPT
How to Terminate the GLIF by Building a Campus Big Data Freeway System
PPT
High Performance Cyberinfrastructure Required for Data Intensive Scientific R...
PPT
The OptIPuter and Its Applications
PDF
A New Global Research Platform – Dedicated 10Gbps Lightpaths
PPT
Calit2
PPT
Bringing 3D, Ultra-Resolution, and Virtual Reality into the Global LambaGrid ...
PPT
From the Shared Internet to Personal Lightwaves: How the OptIPuter is Transfo...
PPTX
21st Century e-Knowledge Requires a High Performance e-Infrastructure
PPT
OptIPuter-A High Performance SOA LambdaGrid Enabling Scientific Applications
PPT
Why Researchers are Using Advanced Networks
PDF
AARNet services including specific Applications & Services
PDF
Coupling Australia’s Researchers to the Global Innovation Economy
PDF
Coupling Australia’s Researchers to the Global Innovation Economy
PPT
Positioning University of California Information Technology for the Future: S...
PDF
Shrinking the Planet: A New Global Research Platform –Dedicated 10Gbps Lightp...
PDF
Coupling Australia’s Researchers to the Global Innovation Economy
High Performance Cyberinfrastructure Enables Data-Driven Science in the Glob...
Preparing Your Campus for Data Intensive Researchers
Riding the Light: How Dedicated Optical Circuits are Enabling New Science
OptIPuter Overview
How to Terminate the GLIF by Building a Campus Big Data Freeway System
High Performance Cyberinfrastructure Required for Data Intensive Scientific R...
The OptIPuter and Its Applications
A New Global Research Platform – Dedicated 10Gbps Lightpaths
Calit2
Bringing 3D, Ultra-Resolution, and Virtual Reality into the Global LambaGrid ...
From the Shared Internet to Personal Lightwaves: How the OptIPuter is Transfo...
21st Century e-Knowledge Requires a High Performance e-Infrastructure
OptIPuter-A High Performance SOA LambdaGrid Enabling Scientific Applications
Why Researchers are Using Advanced Networks
AARNet services including specific Applications & Services
Coupling Australia’s Researchers to the Global Innovation Economy
Coupling Australia’s Researchers to the Global Innovation Economy
Positioning University of California Information Technology for the Future: S...
Shrinking the Planet: A New Global Research Platform –Dedicated 10Gbps Lightp...
Coupling Australia’s Researchers to the Global Innovation Economy

More from Larry Smarr (20)

PPTX
Smart Patients, Big Data, NextGen Primary Care
PPTX
Internet2 and QUILT Initiatives with Regional Networks -6NRP Larry Smarr and ...
PPTX
Internet2 and QUILT Initiatives with Regional Networks -6NRP Larry Smarr and ...
PPTX
National Research Platform: Application Drivers
PPT
From Supercomputing to the Grid - Larry Smarr
PPTX
The CENIC-AI Resource - Los Angeles Community College District (LACCD)
PPT
Redefining Collaboration through Groupware - From Groupware to Societyware
PPT
The Coming of the Grid - September 8-10,1997
PPT
Supercomputers: Directions in Technology, Architecture, and Applications
PPT
High Performance Geographic Information Systems
PPT
Data Intensive Applications at UCSD: Driving a Campus Research Cyberinfrastru...
PPT
Enhanced Telepresence and Green IT — The Next Evolution in the Internet
PPTX
The CENIC AI Resource CENIC AIR - CENIC Retreat 2024
PPTX
The CENIC-AI Resource: The Right Connection
PPTX
The Pacific Research Platform: The First Six Years
PPTX
The NSF Grants Leading Up to CHASE-CI ENS
PPTX
Integrated Optical Fiber/Wireless Systems for Environmental Monitoring
PPTX
Toward a National Research Platform to Enable Data-Intensive Open-Source Sci...
PPTX
Toward a National Research Platform to Enable Data-Intensive Computing
PPTX
Digital Twins of Physical Reality - Future in Review
Smart Patients, Big Data, NextGen Primary Care
Internet2 and QUILT Initiatives with Regional Networks -6NRP Larry Smarr and ...
Internet2 and QUILT Initiatives with Regional Networks -6NRP Larry Smarr and ...
National Research Platform: Application Drivers
From Supercomputing to the Grid - Larry Smarr
The CENIC-AI Resource - Los Angeles Community College District (LACCD)
Redefining Collaboration through Groupware - From Groupware to Societyware
The Coming of the Grid - September 8-10,1997
Supercomputers: Directions in Technology, Architecture, and Applications
High Performance Geographic Information Systems
Data Intensive Applications at UCSD: Driving a Campus Research Cyberinfrastru...
Enhanced Telepresence and Green IT — The Next Evolution in the Internet
The CENIC AI Resource CENIC AIR - CENIC Retreat 2024
The CENIC-AI Resource: The Right Connection
The Pacific Research Platform: The First Six Years
The NSF Grants Leading Up to CHASE-CI ENS
Integrated Optical Fiber/Wireless Systems for Environmental Monitoring
Toward a National Research Platform to Enable Data-Intensive Open-Source Sci...
Toward a National Research Platform to Enable Data-Intensive Computing
Digital Twins of Physical Reality - Future in Review

Recently uploaded (20)

PDF
Basic Mud Logging Guide for educational purpose
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PDF
Business Ethics Teaching Materials for college
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
Origin of periodic table-Mendeleev’s Periodic-Modern Periodic table
PDF
TR - Agricultural Crops Production NC III.pdf
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PDF
Pre independence Education in Inndia.pdf
PDF
Anesthesia in Laparoscopic Surgery in India
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
master seminar digital applications in india
PPTX
PPH.pptx obstetrics and gynecology in nursing
Basic Mud Logging Guide for educational purpose
Supply Chain Operations Speaking Notes -ICLT Program
102 student loan defaulters named and shamed – Is someone you know on the list?
O5-L3 Freight Transport Ops (International) V1.pdf
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Renaissance Architecture: A Journey from Faith to Humanism
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
STATICS OF THE RIGID BODIES Hibbelers.pdf
Business Ethics Teaching Materials for college
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Origin of periodic table-Mendeleev’s Periodic-Modern Periodic table
TR - Agricultural Crops Production NC III.pdf
Microbial diseases, their pathogenesis and prophylaxis
2.FourierTransform-ShortQuestionswithAnswers.pdf
Pre independence Education in Inndia.pdf
Anesthesia in Laparoscopic Surgery in India
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
master seminar digital applications in india
PPH.pptx obstetrics and gynecology in nursing

The Missing Link: Dedicated End-to-End 10Gbps Optical Lightpaths for Clusters, Grids, and Clouds

  • 1. The Missing Link: Dedicated End-to-End 10Gbps Optical Lightpaths for Clusters, Grids, and Clouds Invited Keynote Presentation 11th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing Newport Beach, CA May 24, 2011 Dr. Larry Smarr Director, California Institute for Telecommunications and Information Technology Harry E. Gruber Professor, Dept. of Computer Science and Engineering Jacobs School of Engineering, UCSD 1 Follow me on Twitter: lsmarr
  • 2. Abstract Today we are living in a data-dominated world where distributed scientific instruments, as well as clusters, generate terabytes to petabytes of data which are stored increasingly in specialized campus facilities or in the Cloud. It was in response to this challenge that the NSF funded the OptIPuter project to research how user-controlled 10Gbps dedicated lightpaths (or "lambdas") could transform the Grid into a LambdaGrid. This provides direct access to global data repositories, scientific instruments, and computational resources from "OptIPortals," PC clusters which provide scalable visualization, computing, and storage in the user's campus laboratory. The use of dedicated lightpaths over fiber optic cables enables individual researchers to experience "clear channel" 10,000 megabits/sec, 100-1000 times faster than over today's shared Internet-a critical capability for data-intensive science. The seven-year OptIPuter computer science research project is now over, but it stimulated a national and global build-out of dedicated fiber optic networks. U.S. universities now have access to high bandwidth lambdas through the National LambdaRail, Internet2's WaveCo, and the Global Lambda Integrated Facility. A few pioneering campuses are now building on- campus lightpaths to connect the data-intensive researchers, data generators, and vast storage systems to each other on campus, as well as to the national network campus gateways. I will give examples of the application use of this emerging high performance cyberinfrastructure in genomics, ocean observatories, radio astronomy, and cosmology.
  • 3. Large Data Challenge: Average Throughput to End User on Shared Internet is 10-100 Mbps Tested January 2011 Transferring 1 TB: --50 Mbps = 2 Days --10 Gbps = 15 Minutes http://ensight.eos.nasa.gov/Missions/terra/index.shtml
  • 4. OptIPuter Solution: Give Dedicated Optical Channels to Data-Intensive Users (WDM) 10 Gbps per User >100x Shared Internet Throughput c* f Source: Steve Wallach, Chiaro Networks ―Lambdas‖ Parallel Lambdas are Driving Optical Networking The Way Parallel Processors Drove 1990s Computing
  • 5. Dedicated 10Gbps Lightpaths Tie Together State and Regional Fiber Infrastructure Interconnects Two Dozen State and Regional Optical Networks Internet2 WaveCo Circuit Network Is Now Available
  • 6. The Global Lambda Integrated Facility-- Creating a Planetary-Scale High Bandwidth Collaboratory Research Innovation Labs Linked by 10G Dedicated Lambdas www.glif.is Created in Reykjavik, Iceland 2003 Visualization courtesy of Bob Patterson, NCSA.
  • 7. High Resolution Uncompressed HD Streams Require Multi-Gigabit/s Lambdas U. Washington Telepresence Using Uncompressed 1.5 Gbps HDTV Streaming Over IP on Fiber Optics-- 75x Home Cable ―HDTV‖ Bandwidth! JGN II Workshop Osaka, Japan Jan 2005 Prof. Smarr Prof. Osaka Prof. Aoyama ―I can see every hair on your head!‖—Prof. Aoyama Source: U Washington Research Channel
  • 8. Borderless Collaboration Between Global University Research Centers at 10Gbps iGrid Maxine Brown, Tom DeFanti, Co-Chairs 2005 THE GLOBAL LAMBDA INTEGRATED FACILITY www.igrid2005.org September 26-30, 2005 Calit2 @ University of California, San Diego California Institute for Telecommunications and Information Technology 100Gb of Bandwidth into the Calit2@UCSD Building More than 150Gb GLIF Transoceanic Bandwidth! 450 Attendees, 130 Participating Organizations 20 Countries Driving 49 Demonstrations 1- or 10- Gbps Per Demo
  • 9. Telepresence Meeting Using Digital Cinema 4k Streams 4k = 4000x2000 Pixels = 4xHD Streaming 4k 100 Times with JPEG the Resolution 2000 of YouTube! Compression ½ Gbit/sec Lays Technical Basis for Global Keio University Digital President Anzai Cinema Sony UCSD NTT Chancellor Fox SGI Calit2@UCSD Auditorium
  • 10. iGrid Lambda High Performance Computing Services: Distributing AMR Cosmology Simulations • Uses ENZO Computational Cosmology Code – Grid-Based Adaptive Mesh Refinement Simulation Code – Developed by Mike Norman, UCSD • Can One Distribute the Computing? – iGrid2005 to Chicago to Amsterdam • Distributing Code Using Layer 3 Routers Fails • Instead Using Layer 2, Essentially Same Performance as Running on Single Supercomputer – Using Dynamic Lightpath Provisioning Source: Joe Mambretti, Northwestern U
  • 11. iGrid Lambda Control Services: Transform Batch to Real-Time Global e-Very Long Baseline Interferometry • Goal: Real-Time VLBI Radio Telescope Data Correlation • Achieved 512Mb Transfers from USA and Sweden to MIT • Results Streamed to iGrid2005 in San Diego Optical Connections Dynamically Managed Using the DRAGON Control Plane and Internet2 HOPI Network Source: Jerry Sobieski, DRAGON
  • 12. The OptIPuter Project: Creating High Resolution Portals Over Dedicated Optical Channels to Global Science Data Scalable OptIPortal Adaptive Graphics Environment (SAGE) Picture Source: Mark Ellisman, David Lee, Jason Leigh Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
  • 13. What is the OptIPuter? • Applications Drivers  Interactive Analysis of Large Data Sets • OptIPuter Nodes  Scalable PC Clusters with Graphics Cards • IP over Lambda Connectivity Predictable Backplane • Open Source LambdaGrid Middleware Network is Reservable • Data Retrieval and Mining  Lambda Attached Data Servers • High Defn. Vis., Collab. SW  High Performance Collaboratory www.optiputer.net See Nov 2003 Communications of the ACM for Articles on OptIPuter Technologies
  • 14. OptIPuter Software Architecture--a Service-Oriented Architecture Integrating Lambdas Into the Grid Distributed Applications/ Web Services Visualization Telescience SAGE JuxtaView Data Services LambdaRAM Vol-a-Tile Distributed Virtual Computer (DVC) API DVC Configuration DVC Runtime Library DVC DVC Services DVC Job Scheduling Communication DVC Core Services Resource Namespace Security High Speed Storage Identify/Acquire Management Management Communication Services Globus PIN/PDC GRAM GSI XIO RobuStore Discovery and Control GTP XCP UDT Lambdas IP CEP LambdaStream RBUDP
  • 15. OptIPortals Scale to 1/3 Billion Pixels Enabling Viewing of Very Large Images or Many Simultaneous Images Spitzer Space Telescope (Infrared) NASA Earth Satellite Images Bushfires October 2007 San Diego Source: Falko Kuester, Calit2@UCSD
  • 16. The Latest OptIPuter Innovation: Quickly Deployable Nearly Seamless OptIPortables Shipping Case 45 minute setup, 15 minute tear-down with two people (possible with one)
  • 17. Calit2 3D Immersive StarCAVE OptIPortal Connected at 50 Gb/s to Quartzite 15 Meyer Sound Speakers + Subwoofer 30 HD Projectors! Passive Polarization-- Optimized the Polarization Separation and Minimized Attenuation Source: Tom DeFanti, Greg Dawe, Calit2 Cluster with 30 Nvidia 5600 cards-60 GB Texture Memory
  • 18. 3D Stereo Head Tracked OptIPortal: NexCAVE Array of JVC HDTV 3D LCD Screens KAUST NexCAVE = 22.5MPixels www.calit2.net/newsroom/article.php?id=1584 Source: Tom DeFanti, Calit2@UCSD
  • 19. High Definition Video Connected OptIPortals: Virtual Working Spaces for Data Intensive Research 2010 NASA Supports Two Virtual Institutes LifeSize HD Calit2@UCSD 10Gbps Link to NASA Ames Lunar Science Institute, Mountain View, CA Source: Falko Kuester, Kai Doerr Calit2; Michael Sims, Larry Edwards, Estelle Dodson NASA
  • 20. EVL’s SAGE OptIPortal VisualCasting Multi-Site OptIPuter Collaboratory CENIC CalREN-XD Workshop Sept. 15, 2008 Total Aggregate VisualCasting Bandwidth for Nov. 18, 2008 EVL-UI Chicago Sustained 10,000-20,000 Mbps! At Supercomputing 2008 Austin, Texas November, 2008 Streaming 4k SC08 Bandwidth Challenge Entry Remote: On site: U of Michigan SARA (Amsterdam) UIC/EVL U Michigan GIST / KISTI (Korea) U of Queensland Osaka Univ. (Japan) Russian Academy of Science Masaryk Univ. (CZ) Requires 10 Gbps Lightpath to Each Site Source: Jason Leigh, Luc Renambot, EVL, UI Chicago
  • 21. Using Supernetworks to Couple End User’s OptIPortal to Remote Supercomputers and Visualization Servers Source: Mike Norman, Rick Wagner, SDSC Argonne NL DOE Eureka 100 Dual Quad Core Xeon Servers 200 NVIDIA Quadro FX GPUs in 50 Quadro Plex S4 1U enclosures 3.2 TB RAM rendering ESnet SDSC 10 Gb/s fiber optic network NICS visualization ORNL Calit2/SDSC OptIPortal1 20 30‖ (2560 x 1600 pixel) LCD panels NSF TeraGrid Kraken simulation Cray XT5 10 NVIDIA Quadro FX 4600 graphics 8,256 Compute Nodes cards > 80 megapixels 99,072 Compute Cores 10 Gb/s network throughout 129 TB RAM *ANL * Calit2 * LBNL * NICS * ORNL * SDSC
  • 22. National-Scale Interactive Remote Rendering of Large Datasets SDSC ESnet ALCF Science Data Network (SDN) > 10 Gb/s Fiber Optic Network Dynamic VLANs Configured Using OSCARS Rendering Visualization Eureka OptIPortal (40M pixels LCDs) 100 Dual Quad Core Xeon Servers 10 NVIDIA FX 4600 Cards 200 NVIDIA FX GPUs 10 Gb/s Network Throughout 3.2 TB RAM Interactive Remote Rendering Real-Time Volume Rendering Streamed from ANL to SDSC Last Year Now High-Resolution (4K+, 15+ FPS)—But: Driven by a Simple Web GUI: • Command-Line Driven •Rotate, Pan, Zoom • Fixed Color Maps, Transfer Functions •GUI Works from Most Browsers • Slow Exploration of Data • Manipulate Colors and Opacity • Fast Renderer Response Time Source: Rick Wagner, SDSC
  • 23. NSF OOI is a $400M Program -OOI CI is $34M Part of This 30-40 Software Engineers Housed at Calit2@UCSD Source: Matthew Arrott, Calit2 Program Manager for OOI CI
  • 24. OOI CI is Built on NLR/I2 Optical Infrastructure Physical Network Implementation Source: John Orcutt, Matthew Arrott, SIO/Calit2
  • 25. Cisco CWave for CineGrid: A New Cyberinfrastructure for High Resolution Media Streaming* Source: John (JJ) Jamison, Cisco PacificWave 1000 Denny Way (Westin Bldg.) Seattle StarLight Northwestern Univ Level3 Chicago 1360 Kifer Rd. McLean Sunnyvale 2007 Equinix 818 W. 7th St. Los Angeles CENIC Wave Cisco Has Built 10 GigE Waves on CENIC, PW, & NLR and Installed Large 6506 Switches for Calit2 Access Points in San Diego, Los San Diego Angeles, Sunnyvale, Seattle, Chicago and CWave core PoP McLean for CineGrid Members 10GE waves on NLR and CENIC (LA to SD) These Points are also GLIF GOLEs Some of * May 2007
• 26. CineGrid 4K Digital Cinema Projects: "Learning by Doing". CineGrid @ iGrid 2005; CineGrid @ AES 2006; CineGrid @ Holland Festival 2007; CineGrid @ GLIF 2007. Source: Laurin Herr, Pacific Interface; Tom DeFanti, Calit2
• 27. First Tri-Continental Premiere of a Streamed 4K Feature Film with Global HD Discussion (July 30, 2009), linking Keio Univ. (Japan), the Calit2@UCSD auditorium, and São Paulo (Brazil), with 4K film director Beto Souza. 4K transmission over 10 Gbps; 4 HD projections from one 4K projector. Source: Sheldon Brown, CRCA, Calit2
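The sketch below works out why a single 10 Gbps lightpath comfortably carries a 4K stream. It assumes DCI 4K (4096 x 2160) at 24 fps with 10-bit 4:4:4 color, a standard digital-cinema configuration that is not stated on the slide; the stream at this event may well have been compressed, so treat the result as an upper bound, not a measurement.

```python
# Uncompressed bit rate of a digital-cinema 4K stream (assumed format:
# DCI 4K, 24 fps, 10 bits per channel, 4:4:4 -- not taken from the slide).
width, height = 4096, 2160
fps = 24
bits_per_pixel = 3 * 10          # three color channels at 10 bits each

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"Uncompressed 4K: {gbps:.1f} Gbps")   # ~6.4 Gbps
```

At roughly 6.4 Gbps uncompressed, one 4K stream fits a dedicated 10 Gbps lambda with headroom, consistent with the slide's "4K transmission over 10 Gbps".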
• 28. CineGrid 4K Remote Microscopy Collaboratory: USC to Calit2 (December 8, 2009). Photo: Alan Decker. Source: Richard Weinberg, USC
• 29. Open Cloud OptIPuter Testbed: Manage and Compute Large Datasets over 10 Gbps Lambdas. Network: CENIC, NLR C-Wave, Dragon, and MREN. Scale: 9 racks, 500 nodes, 1,000+ cores, 10+ Gb/s now, upgrading portions to 100 Gb/s in 2010/2011. Open source SW: Hadoop, Sector/Sphere, Nebula, Thrift, GPB, Eucalyptus, and benchmarks. Source: Robert Grossman, UChicago
• 30. Terasort on the Open Cloud Testbed Sustains >5 Gbps, with Only a 5% Distance Penalty: Sorting 10 Billion Records (1.2 TB) at 4 Sites (120 Nodes). Source: Robert Grossman, UChicago
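To make the headline numbers concrete, here is a back-of-the-envelope computation using only the figures on the slide; everything derived (record size, transfer time, per-node share) is arithmetic, not additional measured data.

```python
# Back-of-envelope arithmetic for the wide-area Terasort described above.
records = 10_000_000_000      # 10 billion records (from the slide)
data_tb = 1.2                 # total data sorted, TB (from the slide)
wan_gbps = 5.0                # sustained wide-area rate (from the slide)
nodes = 120                   # nodes across 4 sites (from the slide)

record_bytes = data_tb * 1e12 / records
move_minutes = data_tb * 1e12 * 8 / (wan_gbps * 1e9) / 60
per_node_mbps = wan_gbps * 1e3 / nodes

print(f"Implied record size: {record_bytes:.0f} bytes")          # ~120 B
print(f"Time to move 1.2 TB at 5 Gbps: {move_minutes:.0f} min")  # ~32 min
print(f"Per-node share of WAN rate: {per_node_mbps:.1f} Mbps")   # ~42 Mbps
```

The 5% distance penalty means the geographically distributed sort ran only about 5% slower than a comparable single-site run, which is the point of the dedicated lightpaths.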
• 31. "Blueprint for the Digital University": Report of the UCSD Research Cyberinfrastructure Design Team (April 2009). Focus on data-intensive cyberinfrastructure: no data bottlenecks; design for gigabit/s data flows. research.ucsd.edu/documents/rcidt/RCIDTReportFinal2009.pdf
• 32. Campus Preparations Needed to Accept CENIC CalREN Handoff to Campus. Source: Jim Dolgonas, CENIC
• 33. Current UCSD Prototype Optical Core: Bridging End-Users to CENIC L1, L2, L3 Services. Quartzite Communications Core, Year 3 endpoints: >= 60 endpoints at 10 GigE, >= 32 packet switched, >= 32 switched wavelengths, >= 300 connected endpoints. Approximately 0.5 Tbit/s arrives at the "optical" center of campus; switching is a hybrid of packet, lambda, and circuit (OOO and packet switches): a Lucent wavelength-selective switch and a Glimmerglass OOO switch feed 10GigE cluster node interfaces and other switches, a Force10 packet switch and production GigE switches with dual 10GigE uplinks connect cluster nodes, and a Juniper T320 links the CalREN-HPR research cloud and the campus research cloud. Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI). Quartzite Network MRI #CNS-0421555; OptIPuter #ANI-0225642
• 34. Calit2 Sunlight Optical Exchange Contains Quartzite. Source: Maxine Brown, EVL, UIC, OptIPuter Project Manager
• 35. UCSD Campus Investment in Fiber Enables Consolidation of Energy-Efficient Computing & Storage. WAN 10Gb: N x 10 Gb/s to CENIC, NLR, I2. On-campus resources tied together: Gordon (HPD system), Triton (petascale data analysis), cluster condo, DataOasis (central storage), scientific instruments, the GreenLight data center, digital data collections, campus lab clusters, and OptIPortal tiled display walls. Source: Philip Papadopoulos, SDSC, UCSD
• 36. National Center for Microscopy and Imaging Research: Integrated Infrastructure of Shared Resources. Shared infrastructure links scientific instruments, local SOM infrastructure, and end-user workstations. Source: Steve Peltier, NCMIR
• 37. Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA): http://camera.calit2.net/
• 38. Calit2 Microbial Metagenomics Cluster: Lambda Direct-Connect Science Data Server. 512 processors, ~5 teraflops; ~200 TB of Sun X4500 storage; 1 GbE and 10 GbE switched/routed core. 4,000 users from 90 countries. Source: Phil Papadopoulos, SDSC, Calit2
• 39. Creating CAMERA 2.0: Advanced Cyberinfrastructure Service-Oriented Architecture. Source: CAMERA CTO Mark Ellisman
• 40. OptIPuter Persistent Infrastructure Enables Calit2 and U Washington CAMERA Collaboratory (Feb. 29, 2008). Ginger Armbrust’s diatoms: micrographs, chromosomes, genetic assembly. iHDTV: 1500 Mbit/s from Calit2 to the UW Research Channel over NLR. Photo credit: Alan Decker
• 41. NSF Funds a Data-Intensive Track 2 Supercomputer: SDSC’s Gordon, Coming Summer 2011. A data-intensive supercomputer based on SSD flash memory and virtual shared memory software; emphasizes memory and IOPS over FLOPS. Each supernode has virtual shared memory: 2 TB RAM aggregate and 8 TB SSD aggregate; the total machine = 32 supernodes, plus a 4 PB disk parallel file system with >100 GB/s I/O. Designed to accelerate access to massive databases being generated in many fields of science, engineering, medicine, and social science. Source: Mike Norman, Allan Snavely, SDSC
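As a quick sanity check, here are the machine-wide totals implied by the per-supernode figures above; this is pure arithmetic from the slide's numbers, not an independent spec sheet.

```python
# Machine-wide totals implied by Gordon's per-supernode figures above.
supernodes = 32
ram_tb_per_supernode = 2      # virtual shared RAM per supernode (slide)
ssd_tb_per_supernode = 8      # flash per supernode (slide)

print(f"Aggregate RAM:   {supernodes * ram_tb_per_supernode} TB")   # 64 TB
print(f"Aggregate flash: {supernodes * ssd_tb_per_supernode} TB")   # 256 TB
```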
• 42. Rapid Evolution of 10GbE Port Prices Makes Campus-Scale 10 Gbps CI Affordable. Port pricing is falling and density is rising dramatically; the cost of 10GbE is approaching cluster HPC interconnects. Price points from the slide's chart (2005-2010): $80K/port Chiaro (60 ports max, 2005); $5K/port Force 10 (40 ports max, 2007); ~$1000/port (300+ ports max); $500/port Arista, 48 ports (2009); $400/port Arista, 48 ports (2010). Source: Philip Papadopoulos, SDSC/Calit2
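The magnitude of that price drop is worth quantifying. The arithmetic below uses only the chart's 2005 and 2010 endpoints; since the comparison mixes vendors, it tracks the market price point, not one product line.

```python
# Rate of 10GbE port-price decline implied by the slide's endpoints.
p2005, p2010 = 80_000, 400    # $/port: Chiaro (2005) vs. Arista (2010)
years = 2010 - 2005

annual_decline = 1 - (p2010 / p2005) ** (1 / years)
print(f"Price fell {p2005 // p2010}x in {years} years "
      f"(~{annual_decline:.0%} per year)")   # 200x, ~65%/year
```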
• 43. Arista Enables SDSC’s Massively Parallel 10G Switched Data Analysis Resource: a radical change enabled by co-location. An Arista 7508 10G switch (384 10G-capable ports) interconnects the 10 Gbps OptIPuter, UCSD RCI, CENIC/NLR, Triton, existing commodity storage (1/3 PB), Trestles (100 TF), Dash, Gordon, and the 2,000 TB Data Oasis (procurement RFP > 50 GB/s). Phase 0: >8 GB/s sustained today; Phase I: >50 GB/s for Lustre (May 2011); Phase II: >100 GB/s (Feb 2012). Source: Philip Papadopoulos, SDSC/Calit2
  • 44. Data Oasis – 3 Different Types of Storage
• 45. Calit2 CAMERA Automatic Overflows into SDSC Triton. The CAMERA-managed submit portal (a VM at Calit2) transparently sends jobs to a job-submit portal on the Triton resource at SDSC; Triton direct-mounts the CAMERA storage over 10 Gbps (CAMERA == DATA: no data staging).
  • 46. California and Washington Universities Are Testing a 10Gbps Lambda-Connected Commercial Data Cloud • Amazon Experiment for Big Data – Only Available Through CENIC & Pacific NW GigaPOP – Private 10Gbps Peering Paths – Includes Amazon EC2 Computing & S3 Storage Services • Early Experiments Underway – Phil Papadopoulos, Calit2/SDSC Rocks – Robert Grossman, Open Cloud Consortium
• 47. Using Condor and Amazon EC2 on the Adaptive Poisson-Boltzmann Solver (APBS). APBS Rocks roll (NBCR) + EC2 roll + Condor roll = Amazon VM; the local cluster is extended into Amazon using Condor, scheduling onto NBCR VMs running in the EC2 cloud. Source: Phil Papadopoulos, SDSC/Calit2
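The slide gives no code, but the "cluster extension" pattern it names is easy to sketch. The hedged example below uses the boto library (the common Python EC2 binding of that era) to boot a worker from a machine image that joins the local Condor pool at startup; the AMI ID, key pair, security group, and pool hostname are hypothetical placeholders, not values from the NBCR rolls.

```python
# Minimal sketch of the "cluster extension" pattern: boot an EC2 node
# that joins a local Condor pool, so queued APBS jobs can overflow to it.
# AMI ID, key pair, security group, and pool address are hypothetical.
import boto.ec2

POOL_HOST = "condor.example.edu"   # central manager of the local pool (placeholder)
AMI_ID = "ami-00000000"            # hypothetical APBS + Condor worker image

conn = boto.ec2.connect_to_region("us-east-1")  # credentials from env/boto config

# User-data script run at boot: point the Condor worker at our pool.
user_data = """#!/bin/bash
echo "CONDOR_HOST = {0}" >> /etc/condor/condor_config.local
service condor restart
""".format(POOL_HOST)

reservation = conn.run_instances(
    AMI_ID,
    instance_type="m1.large",
    key_name="my-keypair",               # placeholder
    security_groups=["condor-workers"],  # must allow Condor ports from campus
    user_data=user_data,
)
print("Started overflow node:", reservation.instances[0].id)
```

Once the instance's Condor daemons report to the campus central manager, queued jobs can match against it exactly as they would against a local node; that is the essence of extending the cluster into EC2.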
• 48. Hybrid Cloud Computing with modENCODE Data. Computations in Bionimbus (a community cloud for biological data) can span the community cloud and the Amazon public cloud to form a hybrid cloud. Sector was used to support the data transfer between two virtual machines, one at UIC and one an Amazon EC2 instance. The slide's graph illustrates how the throughput between two virtual machines in a wide-area cloud depends upon the file size. Source: Robert Grossman, UChicago
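The slide's graph is not reproduced here, but the qualitative dependence of throughput on file size has a standard first-order explanation: a fixed per-transfer startup cost (connection setup, TCP ramp-up) is amortized over more bytes as files grow. The sketch below models that with assumed numbers; the 10 Gbps link rate and 3-second overhead are illustrative, not values from the measurement.

```python
# First-order model: effective throughput vs. file size when each transfer
# pays a fixed startup cost. Link rate and overhead are assumptions.
LINK_GBPS = 10.0    # dedicated lightpath rate (assumed)
STARTUP_S = 3.0     # per-transfer startup overhead (assumed)

def effective_gbps(file_gb: float) -> float:
    """Achieved rate once the startup cost is amortized over the file."""
    wire_seconds = file_gb * 8 / LINK_GBPS
    return file_gb * 8 / (STARTUP_S + wire_seconds)

for size_gb in (0.1, 1, 10, 100):
    print(f"{size_gb:>6} GB file -> {effective_gbps(size_gb):5.2f} Gbps effective")
```

Small files see a small fraction of the line rate, while transfers of tens of gigabytes approach it, matching the slide's point that wide-area cloud throughput depends on file size.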
• 49. OptIPlanet Collaboratory: Enabled by 10 Gbps "End-to-End" Lightpaths. The end user's OptIPortal connects through a campus optical switch and National LambdaRail 10G lightpaths to HD/4K live video, HPC, local or remote instruments, data repositories & clusters, and HD/4K video repositories.
  • 50. You Can Download This Presentation at lsmarr.calit2.net

Editor's Notes

• #39: This is a production cluster with its own Force10 E1200 switch. It is connected to Quartzite and is labeled as the “CAMERA Force10 E1200”. We built CAMERA this way because of technology deployed successfully in Quartzite.