Comparing Server I/O Consolidation
Solutions: iSCSI, InfiniBand and FCoE
Dr. Walter Dey, Cisco Systems
Comparing Server I/O Consolidation: iSCSI, Infiniband and FCoE
© 2008 Storage Networking Industry Association. All Rights Reserved.
SNIA Legal Notice
The material contained in this tutorial is copyrighted
by the SNIA.
Member companies and individuals may use this
material in presentations and literature under the
following conditions:
Any slide or slides used must be reproduced without
modification
The SNIA must be acknowledged as source of any material
used in the body of any document containing material from
these presentations.
This presentation is a project of the SNIA Education
Committee.
Abstract
Comparing Server I/O Consolidation: iSCSI, InfiniBand and FCoE
This tutorial gives an introduction to server I/O consolidation: using a single network interface technology (standard Ethernet, Data Center Ethernet, or InfiniBand) to support both IP applications and block-level storage (iSCSI, FCoE, and SRP/iSER). The benefits for the end user are discussed: less cabling, power, and cooling. For these three solutions, iSCSI, InfiniBand, and FCoE, we compare features such as infrastructure and cabling, protocol stack, performance, operating system drivers and support, management tools, security, and best design practices.
Agenda
Definition of Server I/O Consolidation
Why Server I/O Consolidation
Introducing the 3 solutions
iSCSI
InfiniBand
FCoE
Differentiators
Conclusion
Definition of Server I/O Consolidation
What is Server I/O Consolidation?
IT organizations operate multiple parallel networks:
IP applications (including NFS, NAS, …) over an Ethernet network *)
SAN over a Fibre Channel network
HPC/IPC over an InfiniBand network **)
Server I/O consolidation combines the various traffic types
onto a single interface and single cable
Server I/O consolidation is the first phase for a Unified Fabric
(single network)
*) In this presentation we cover only block-level storage solutions, not file-level (NAS, NFS, …)
**) In the remainder we do not cover HPC; for the lowest latency requirements, InfiniBand is the best and most appropriate technology.
I/O Consolidation Benefits
Adapter: NIC for Ethernet/IP, HCA for InfiniBand, Converged Network Adapter (CNA) for FCoE
Customer benefit: fewer NICs, HBAs, and cables; lower CapEx and OpEx
(Diagram: separate FC HBAs and NICs carrying FC and Ethernet traffic are replaced by a pair of consolidated adapters running iSCSI, InfiniBand, or FCoE over a single link.)
Why Server I/O Consolidation?
The drive for I/O Consolidation
Multicore – Multisocket CPUs
Server Virtualization software (Hypervisor)
High demand for I/O bandwidth
Reductions in cables, power and cooling, therefore
reducing OpEx/CapEx
Limited number of interfaces for Blade Servers
Consolidated Input into Unified Fabric
New Network Requirements with Server
Virtualization
Virtual networks are growing faster and larger than physical ones
Network admins are getting involved in virtual interface deployments
The network access layer needs to evolve to support consolidation and mobility
Multi-core computing is driving virtualization and new networking needs
Driving SAN attach rates higher (growing from roughly 10% toward 40%)
Driving users to plan now for 10GE server interfaces
Virtualization enables the promise of blades
10GE and FC are the highest-growth technologies within blades
Virtualization and consolidated I/O remove blade limitations
Network virtualization enables CPU- and I/O-intensive workloads to be virtualized
Enables broader adoption of x86-class servers
10GbE Drivers in the Datacenter
Multi-Core CPU architectures allowing bigger and multiple
workloads on the same machine
Server virtualization driving the need for more bandwidth per
server due to server consolidation
Growing need for network storage driving the demand for
higher network bandwidth to the server
Multi-Core CPUs and Server Virtualization driving the
demand for higher bandwidth network connections
Evolution of Ethernet Physical Media
Introducing the three solutions
Server I/O Consolidation Solutions
iSCSI
LAN: Based on Ethernet and TCP/IP
SAN: Encapsulates SCSI in TCP/IP
InfiniBand
LAN: Transports IP over InfiniBand (IPoIB); Sockets Direct Protocol (SDP) between IB-attached servers
SAN: Transports SCSI via the SCSI RDMA Protocol (SRP) or iSCSI Extensions for RDMA (iSER)
HPC/IPC: Message Passing Interface (MPI) over InfiniBand network
FCoE
LAN: Based on Ethernet (Data Center Ethernet) and TCP/IP
SAN: Maps and transports Fibre Channel over Data Center Ethernet (lossless
Ethernet) *)
*) Data Center Ethernet is an architectural collection of Ethernet extensions designed to improve
Ethernet networking and management in the Data Center
Encapsulation technologies
All options present the same SCSI layer to the operating system and applications:
iSCSI: SCSI layer → iSCSI → TCP → IP → Ethernet (1, 10 Gbps)
FCIP: SCSI layer → FCP* → FCIP → TCP → IP → Ethernet (1, 10 Gbps)
iFCP: SCSI layer → FCP* → iFCP → TCP → IP → Ethernet (1, 10 Gbps)
Fibre Channel: SCSI layer → FCP* → Fibre Channel (1, 2, 4, (8), 10 Gbps)
FCoE: SCSI layer → FCP* → FCoE → Data Center Ethernet ((1), 10 Gbps)
InfiniBand: SCSI layer → SRP → InfiniBand, or SCSI layer → iSCSI → iSER → InfiniBand (10, 20 Gbps)
* Includes FC layer
iSCSI
(Protocol stack: OS / Applications → SCSI layer → iSCSI → TCP → IP → Ethernet, 1/10 … Gbps)
A SCSI transport protocol that operates over TCP
Encapsulates SCSI CDBs (operational commands, e.g. read or write) and data into TCP/IP byte streams (as defined by SAM-2, the SCSI Architecture Model 2)
Allows iSCSI initiators to access IP-based iSCSI targets (either natively or via an iSCSI-to-FC gateway)
Standards status
RFC 3720 on iSCSI
Collection of RFCs describing iSCSI
RFC 3347—iSCSI Requirements
RFC 3721—iSCSI Naming and Discovery
RFC 3723—iSCSI Security
Broad industry support
Operating System vendors support their iSCSI
drivers
Gateway (Routers, Bridges) and Native iSCSI
storage arrays
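As a concrete illustration of this encapsulation, the sketch below packs the 48-byte Basic Header Segment (BHS) of an iSCSI SCSI Command PDU as laid out in RFC 3720. It is a minimal fragment for illustration only (the function name is ours; login, sessions, digests, and error handling are omitted), not a working initiator.

```python
import struct

def scsi_command_bhs(cdb: bytes, data_len: int, lun: int = 0,
                     task_tag: int = 1, cmd_sn: int = 1,
                     exp_stat_sn: int = 1) -> bytes:
    """Pack the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU
    (layout per RFC 3720; simplified: no AHS, no immediate data, READ flags)."""
    assert len(cdb) <= 16
    opcode = 0x01                          # SCSI Command
    flags = 0x80 | 0x40 | 0x01             # Final + Read + ATTR=Simple
    bhs = bytes([opcode, flags, 0, 0])     # opcode, flags, 2 reserved bytes
    bhs += bytes([0]) + b"\x00\x00\x00"    # TotalAHSLength + DataSegmentLength (0: no immediate data)
    bhs += struct.pack(">Q", lun)          # 8-byte LUN field
    bhs += struct.pack(">IIII", task_tag,  # Initiator Task Tag
                       data_len,           # Expected Data Transfer Length
                       cmd_sn, exp_stat_sn)
    bhs += cdb.ljust(16, b"\x00")          # 16-byte CDB area
    assert len(bhs) == 48
    return bhs

# READ(10) CDB for 8 blocks at LBA 0, ready to be sent over a TCP socket
cdb = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
pdu = scsi_command_bhs(cdb, data_len=8 * 512)
print(len(pdu))   # 48
```

In a real initiator this header would be written to a TCP connection to the target (conventionally port 3260), followed by any data segment.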
iSCSI Messages
An iSCSI PDU (iSCSI header + iSCSI data) rides inside a TCP segment, an IP packet, and an Ethernet frame:
Ethernet header (22 bytes) | IP header (20 bytes) | TCP header (20 bytes) | iSCSI header (48 bytes) | iSCSI data | Ethernet trailer (4 bytes)
The IP header contains routing information so that the message can find its way through the network; the TCP header provides the information necessary to guarantee delivery; the iSCSI header explains how to extract SCSI commands and data.
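Using the header sizes above, each frame carries 114 bytes of headers and trailer around the SCSI data; a quick calculation (frame sizes chosen for illustration) shows the resulting wire efficiency:

```python
# Per-frame overhead for an iSCSI PDU, using the header sizes from the
# slide: Ethernet 22 B + IP 20 B + TCP 20 B + iSCSI 48 B + trailer 4 B.
HEADERS = {"ethernet": 22, "ip": 20, "tcp": 20, "iscsi": 48, "trailer": 4}

def wire_efficiency(data_bytes: int) -> float:
    """Fraction of on-the-wire bytes that are SCSI data."""
    overhead = sum(HEADERS.values())          # 114 bytes total
    return data_bytes / (data_bytes + overhead)

print(round(wire_efficiency(1386), 3))   # 1500-byte frame -> 0.924
```

The overhead per frame is fixed, so larger frames (e.g. jumbo frames) improve efficiency; this is one reason TCP offload and larger MTUs matter for iSCSI performance.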
iSCSI Topology
Allows I/O consolidation; iSCSI is proposed today as an I/O consolidation option
Native (iSCSI storage array) and gateway solutions:
Gateway: iSCSI initiator ↔ stateful iSCSI gateway (iSCSI session over IP/Ethernet), then gateway ↔ FC target (FCP session over the FC fabric)
Native: iSCSI initiator ↔ iSCSI storage array (iSCSI session over IP/Ethernet)
View from Operating System
Operating System sees:
1 Gigabit Ethernet adapter
iSCSI Initiator
iSCSI based I/O Consolidation
Overhead of TCP/IP Protocol
It’s SCSI not FC
LAN/Metro/WAN (Routable)
Security of IP protocols (IPsec)
Stateful gateway (iSCSI <-> FCP)
Mainly 1G Initiator (Server)
10G for iSCSI Target recommended
Can use existing Ethernet switching
infrastructure
TCP Offload Engine (TOE) suggested (virtualized environment support?)
QoS or separate VLAN for storage
traffic suggested
New Management Tools
Might require different multipath software
(Protocol stack: OS / Applications → SCSI layer → iSCSI → TCP → IP → Ethernet, 1/10 … Gbps)
InfiniBand
Standards-based interconnect (http://www.infinibandta.org)
Channelized, connection-based
interconnect optimized for high
performance computing
Supports server and storage
attachments
Bandwidth Capabilities (SDR/DDR)
4x—10/20 Gbps: 8/16 Gbps
actual data rate
12x—30/60 Gbps: 24/48 Gbps
actual data rate
Built-in RDMA as core capability for
inter-CPU communication
(Protocol stacks: OS / Applications → SCSI layer → SRP → IB, or → iSCSI → iSER → IB; 10/20 Gbps, 4X SDR/DDR)
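The signaling-vs-actual figures above follow from the 8b/10b line coding used by SDR/DDR links (8 data bits per 10 wire bits); a one-line helper reproduces the slide's numbers:

```python
# InfiniBand signaling vs. actual data rate: SDR/DDR links use 8b/10b
# encoding, so only 8 of every 10 bits on the wire carry data.
def data_rate_gbps(lanes: int, lane_signal_gbps: float) -> float:
    return lanes * lane_signal_gbps * 8 / 10

print(data_rate_gbps(4, 2.5))    # 4x SDR: 10 Gbps signaling -> 8.0 Gbps data
print(data_rate_gbps(4, 5.0))    # 4x DDR: 20 Gbps signaling -> 16.0 Gbps data
print(data_rate_gbps(12, 5.0))   # 12x DDR: 60 Gbps signaling -> 48.0 Gbps data
```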
InfiniBand:
SCSI RDMA Protocol (SRP)
SCSI Semantics over RDMA fabric
Provides High Performance block-level storage access
Not IB-specific: standard specified by T10 (http://www.t10.org)
Host drivers tie into standard SCSI I/F in kernel
Storage appears as normal SCSI/FC disks to local host
Can be used for end-to-end IB storage (No FC)
Can be used for SAN Boot over IB
InfiniBand:
iSCSI Extensions for RDMA (iSER)
IETF Standard
Enables iSCSI to take advantage
of RDMA.
Mainly offloads the data path
Leverages iSCSI management and
discovery architecture
Simplifies iSCSI protocol details
such as data integrity
management and error recovery
Not IB Specific
Needs an iSER target to work end to end
(Linux driver stack: user-space application → VFS layer → file systems → block drivers → iSCSI → iSER → InfiniBand HCA, with a reliable connection over InfiniBand to the storage target)
InfiniBand Gateway Topology:
Gateways for Network and Storage
IB-connected servers use a single InfiniBand link for both storage and network traffic. An InfiniBand switch with stateful gateways bridges to both worlds: a Fibre Channel-to-InfiniBand gateway provides storage access to the Fibre Channel fabric, and an Ethernet-to-InfiniBand gateway provides LAN access to the Ethernet LAN, where Ethernet-connected servers also attach.
(Legend: Ethernet carries IP application traffic; Fibre Channel and InfiniBand carry block-level storage.)
Physical vs. Logical View
Physical view: servers connected via IB; SAN attached via public AL; Ethernet attached via Gigabit EtherChannel
Logical view: hosts present a WWNN on the SAN and an IP address on the VLAN
View from Operating System
InfiniBand based I/O Consolidation
Requires a new ecosystem (HCAs, cabling, switches)
Mostly copper cabling, limited
distance
Datacenter protocol
New driver (SRP)
Stateful Gateway from SRP to
FCP (unless native IB attached
disk array)
RDMA capability of HCA used
Low CPU overhead
Payload is SCSI not FC
Concept of Virtual links and QoS
in InfiniBand
(Protocol stacks: OS / Applications → SCSI layer → SRP → IB, or → iSCSI → iSER → IB; 10/20 Gbps, 4X SDR/DDR)
FCoE
(Protocol stack: OS / Applications → SCSI layer → FCP* → FCoE → Data Center Ethernet, (1)/10 … Gbps)
From a Fibre Channel standpoint, it's Fibre Channel encapsulated in Ethernet; from an Ethernet standpoint, it's just another ULP (Upper Layer Protocol)
FCoE is an extension of Fibre
Channel onto a Lossless (Data
Center) Ethernet fabric
FCoE is managed like FC at initiator,
target, and switch level, completely
based on the FC model
Same host-to-switch and switch-to-switch
behavior of FC
In-order frame delivery, FSPF load balancing
WWNs, FC-IDs, hard/soft zoning, DNS, RSCN
Standards Work in T11, IEEE and IETF
not yet final
* Includes FC Layer
FCoE Enablers
Frame layout: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS
The FC header, payload, and CRC are the same as in a physical FC frame
The FCoE header carries control information: version, ordered sets (SOF, EOF)
Ethernet V2 frame, Ethertype = FCoE
10 Gbps Ethernet
Lossless Ethernet (Data Center Ethernet): matches the B2B (buffer-to-buffer) credits used in Fibre Channel to provide a lossless service
Ethernet jumbo frames (2180 bytes); max FC frame payload = 2112 bytes
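A quick byte accounting shows why roughly 2180-byte baby jumbo frames are needed: a maximum-size FC frame must fit inside a single Ethernet frame. The field sizes below are a plausible breakdown consistent with the FC-BB-5 work, not a normative layout:

```python
# Byte accounting showing why FCoE needs ~2180-byte "baby jumbo" frames:
# a maximum FC frame must fit inside one Ethernet frame.
FC_PAYLOAD_MAX = 2112
fc_frame = 24 + FC_PAYLOAD_MAX + 4            # FC header + payload + FC CRC
fcoe_frame = (14                              # Ethernet header
              + 4                             # 802.1Q VLAN tag
              + 14                            # FCoE header (version, SOF, padding)
              + fc_frame
              + 4                             # EOF + padding
              + 4)                            # Ethernet FCS
print(fcoe_frame)   # 2180
```

Standard 1500-byte Ethernet frames would force FC frame fragmentation, which the FCoE mapping deliberately avoids.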
Data Center Ethernet
Enhanced Ethernet for Data Center Applications
Priority Flow Control (Priority Pause) *)
Link Scheduling
Congestion Management
Layer 2 Multipathing
Configuration Management
Transport of FCoE
Enabling Technology for I/O Consolidation and
Unified Fabric
*) The T11 FC-BB-5 group requires only that Ethernet switches support standard PAUSE and baby jumbo frames; standard PAUSE alone does not support I/O consolidation.
FCoE Enabler: Priority Flow Control
Enables lossless fabrics for each class of service
PAUSE is sent per virtual lane when a buffer limit is exceeded
Network resources are partitioned between VLs (e.g. input buffer and output queue)
The switch behavior is negotiable per VL
InfiniBand uses a similar mechanism for multiplexing multiple data streams over a single physical link
(Example: several traffic classes (default LAN, users 1-4, management, FC/FCoE) share one link; when User 2's buffer fills, a per-priority PAUSE stops only User 2's traffic while all other lanes keep flowing.)
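The mechanism can be sketched as a per-priority buffer check: only the class whose buffer crosses a threshold gets paused. Class and method names, buffer sizes, and thresholds below are hypothetical:

```python
# Minimal sketch of per-priority flow control: each traffic class has its
# own buffer, and only the class whose buffer crosses the threshold is
# paused -- other classes keep flowing (sizes are illustrative).
class PfcPort:
    def __init__(self, classes=8, threshold=100):
        self.buffers = [0] * classes
        self.threshold = threshold
        self.paused = [False] * classes

    def receive(self, prio: int, frames: int):
        self.buffers[prio] += frames
        if self.buffers[prio] >= self.threshold and not self.paused[prio]:
            self.paused[prio] = True
            print(f"PAUSE sent for priority {prio} only")

    def drain(self, prio: int, frames: int):
        self.buffers[prio] = max(0, self.buffers[prio] - frames)
        if self.buffers[prio] < self.threshold // 2 and self.paused[prio]:
            self.paused[prio] = False   # resume (PAUSE with quanta 0)

port = PfcPort()
port.receive(3, 120)          # storage class fills up -> only class 3 paused
port.receive(0, 10)           # default LAN class unaffected
```

Contrast this with standard 802.3x PAUSE, which would stop all traffic classes on the link at once.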
FCoE Enabler: Converged Network Adapter (CNA)
Today a server uses two adapters: a 4G FC HBA (Fibre Channel drivers over PCIe, Fibre Channel link) and a 10GbE NIC (Ethernet drivers over PCIe, Ethernet link). With a CNA, the operating system still sees separate Fibre Channel and Ethernet drivers over PCIe, but both protocols share a single 10GbEE (Enhanced Ethernet) link.
View from Operating System
Standard drivers
Same management
Operating System sees:
Dual-port 10 Gigabit Ethernet adapter
Dual-port 4 Gbps Fibre Channel HBAs
FCoE I/O Consolidation Topology
Servers attach over Data Center Ethernet with FCoE; the fabric connects to the classical Ethernet LAN and to the classical FC SANs (SAN A and SAN B), with FCoE targets attaching natively.
Dramatic reduction in adapters, switch ports and cabling: 4 cables to 2 cables per server
Seamless connection to the installed base of existing SANs and LANs
High-performance frame mappers vs. gateway bottlenecks
Effective sharing of high-bandwidth links
Consolidated network infrastructure
Faster infrastructure provisioning
Lower TCO
FCoE based I/O Consolidation
(Protocol stack: OS / Applications → SCSI layer → FCP* → FCoE → Data Center Ethernet, (1)/10 … Gbps)
FCP layer untouched
Requires baby jumbo frames (2180 bytes)
Non-routable data center protocol
Data-center-wide VLANs
Same management tools as for Fibre Channel
Same drivers as for Fibre Channel HBAs
Same multipathing software
Simplified certifications with storage subsystem vendors
Requires a lossless (10G) Ethernet switching fabric
May require new host adapters (unless an FCoE software stack is used)
* Includes FC layer
Differentiators
Storage Part of I/O Consolidation

                          iSCSI           FCoE                   IB-SRP
Payload                   SCSI            Fibre Channel          SCSI
Transport                 TCP/IP          Data Center Ethernet   InfiniBand
Scope                     LAN/MAN/WAN     Datacenter             Datacenter
Bandwidth/Performance     Low/Medium      High                   High
CPU Overhead              High            Low                    Low
Gateway Overhead          High            Low                    High
FC Security Model         No              Yes                    No
FC Software on Host       No              Yes                    No
FC Management Model       No              Yes                    No
Initiator Implementation  Yes             Yes                    Yes
Target Implementation     Yes             Yes/Future             Yes
IP Routable               Yes             No                     N/A
Storage Part of I/O Consolidation (cont.)

                          iSCSI           FCoE                   IB-SRP
Virtual Lanes             No              Yes                    Yes
Congestion Control        TCP             Priority Flow Control  Credit based
Gateway Functionality     Stateful        Stateless              Stateful
Connection Oriented       Yes             No                     Yes
Access Control            IP/VLAN         VLAN/VSAN              Partitions
RDMA Primitives           Defined         Defined                Defined
Latency                   100s of µs      10s of µs              µs
Adapter                   NIC             CNA                    HCA
Conclusion
Encapsulation Technologies
All options present the same SCSI layer to the operating system and applications:
iSCSI: SCSI layer → iSCSI → TCP → IP → Ethernet (1, 10 Gbps)
FCIP: SCSI layer → FCP* → FCIP → TCP → IP → Ethernet (1, 10 Gbps)
iFCP: SCSI layer → FCP* → iFCP → TCP → IP → Ethernet (1, 10 Gbps)
Fibre Channel: SCSI layer → FCP* → Fibre Channel (1, 2, 4, (8), 10 Gbps)
FCoE: SCSI layer → FCP* → FCoE → Data Center Ethernet ((1), 10 Gbps)
InfiniBand: SCSI layer → SRP → InfiniBand, or SCSI layer → iSCSI → iSER → InfiniBand (10, 20 Gbps)
* Includes FC layer
Conclusion
Server I/O consolidation is driven by high I/O bandwidth demand
I/O bandwidth demand is driven by multicore/multisocket servers and virtualization
TCP/IP (iSCSI), Data Center Ethernet (FCoE) and InfiniBand (SRP, iSER) are generic transport protocols that enable server I/O consolidation
Server I/O consolidation is the first phase, consolidating input into a Unified Fabric
Thank You !
Other SNIA Tutorials
Check out these SNIA Tutorials:
Fibre Channel Technologies: Current and Future
IP Storage Protocols - iSCSI
InfiniBand Technology Overview
FCoE: Fibre Channel over Ethernet
Q&A / Feedback
Please send any questions or comments on this
presentation to SNIA: tracknetworking@snia.org
Many thanks to the following individuals
for their contributions to this tutorial.
- SNIA Education Committee
Gilles Chekroun Howard Goldstein Marco Di Benedetto
Errol Roberts Bill Lulofs Carlos Pereira
Walter Dey Dror Goldenberg James Long
References
http://www.fcoe.com/
http://www.t11.org/fcoe
http://www.fibrechannel.org/OVERVIEW/FCIA_SNW_FCoE_WP_Final.pdf
http://www.fibrechannel.org/OVERVIEW/FCIA_SNW_FCoE_flyer_Final.pdf
http://www.fibrechannel.org/FCoE.html
More Related Content

PPT
Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center
PPTX
Answers to Your IT Nightmares - SAS, iSCSI, or Fibre Channel?
PPTX
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ...
PPT
Converged Networks: FCoE, iSCSI and the Future of Storage Networking
PDF
#IBMEdge: "Not all Networks are Equal"
PDF
Fiber Channel over Ethernet (FCoE) – Design, operations and management best p...
PPTX
VMware EMC Service Talk
PDF
Converged Data Center: FCoE, iSCSI, & the Future of Storage Networking ( EMC ...
 
Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center
Answers to Your IT Nightmares - SAS, iSCSI, or Fibre Channel?
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ...
Converged Networks: FCoE, iSCSI and the Future of Storage Networking
#IBMEdge: "Not all Networks are Equal"
Fiber Channel over Ethernet (FCoE) – Design, operations and management best p...
VMware EMC Service Talk
Converged Data Center: FCoE, iSCSI, & the Future of Storage Networking ( EMC ...
 

What's hot (19)

PDF
I/O virtualization with InfiniBand and 40 Gigabit Ethernet
PDF
Mellanox Storage Solutions
PPTX
Mellanox Approach to NFV & SDN
PDF
Storage Area Networking: SAN Technology Update & Best Practice Deep Dive for ...
 
PDF
FC/FCoE - Topologies, Protocols, and Limitations ( EMC World 2012 )
 
PDF
Advancing Applications Performance With InfiniBand
PDF
Cisco MDS Main Session EMC World 2015
PPTX
Mellanox VXLAN Acceleration
PDF
Converged Data Center: FCoE, iSCSI and the Future of Storage Networking
 
PDF
iSCSI Protocol and Functionality
PDF
IBTA Releases Updated Specification for RoCEv2
PDF
Analyst Perspective - Next Generation Storage Networking for Next Generation ...
PPTX
Cisco data center training for ibm
PDF
Конференция Brocade. 2
PPTX
InfiniBand Growth Trends - TOP500 (July 2015)
PDF
Конференция Brocade. 1. Новые тренды в сетях ЦОД: Программно-определяемые сет...
PPTX
iSCSI (Internet Small Computer System Interface)
PPTX
PLNOG16: Obsługa 100M pps na platformie PC , Przemysław Frasunek, Paweł Mała...
PPTX
Designing and deploying converged storage area networks final
I/O virtualization with InfiniBand and 40 Gigabit Ethernet
Mellanox Storage Solutions
Mellanox Approach to NFV & SDN
Storage Area Networking: SAN Technology Update & Best Practice Deep Dive for ...
 
FC/FCoE - Topologies, Protocols, and Limitations ( EMC World 2012 )
 
Advancing Applications Performance With InfiniBand
Cisco MDS Main Session EMC World 2015
Mellanox VXLAN Acceleration
Converged Data Center: FCoE, iSCSI and the Future of Storage Networking
 
iSCSI Protocol and Functionality
IBTA Releases Updated Specification for RoCEv2
Analyst Perspective - Next Generation Storage Networking for Next Generation ...
Cisco data center training for ibm
Конференция Brocade. 2
InfiniBand Growth Trends - TOP500 (July 2015)
Конференция Brocade. 1. Новые тренды в сетях ЦОД: Программно-определяемые сет...
iSCSI (Internet Small Computer System Interface)
PLNOG16: Obsługa 100M pps na platformie PC , Przemysław Frasunek, Paweł Mała...
Designing and deploying converged storage area networks final
Ad

Viewers also liked (20)

PPT
Vm Management For Green It Data Centers
PDF
Presentation Robayet Nasim (IEEE CloudCom 2016)
PPT
Object storage
PPTX
Memory
PDF
Bright talk Elastic Block Storage Service on prem
PDF
Docker 1.9 Workshop
PDF
Ibm cloud object storage industry workloads
PPTX
iSCSI: Internet Small Computer System Interface
PPT
Turning OpenStack Swift into a VM storage platform
PPTX
Personal storage to enterprise storage system journey
PPT
C cloud organizational_impacts_big_data_on-prem_vs_off-premise_john_sing
PPTX
SCSI Protocol
PPT
Chapter 9: SCSI Drives and File Systems
PPT
File structures
PPTX
File Organization
PPTX
VMware VSAN Technical Deep Dive - March 2014
PDF
Detailed iSCSI presentation
PPT
11. Storage and File Structure in DBMS
PDF
Virtualization presentation
Vm Management For Green It Data Centers
Presentation Robayet Nasim (IEEE CloudCom 2016)
Object storage
Memory
Bright talk Elastic Block Storage Service on prem
Docker 1.9 Workshop
Ibm cloud object storage industry workloads
iSCSI: Internet Small Computer System Interface
Turning OpenStack Swift into a VM storage platform
Personal storage to enterprise storage system journey
C cloud organizational_impacts_big_data_on-prem_vs_off-premise_john_sing
SCSI Protocol
Chapter 9: SCSI Drives and File Systems
File structures
File Organization
VMware VSAN Technical Deep Dive - March 2014
Detailed iSCSI presentation
11. Storage and File Structure in DBMS
Virtualization presentation
Ad

Similar to Presentation comparing server io consolidation solution with i scsi, infiniband and f-coe (20)

PDF
I/O Consolidation in the Data Center -Excerpt
PPT
Introduction to storage
PPTX
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
PPT
Introduction to Storage types - basic.ppt
PPT
Storage-Presentation-ShazzadHossain.ppt
PPT
Introduction to Storage.ppt
PPT
I O Continuity Group July 23, 2008 Seminar
PDF
Converged data center_f_co_e_iscsi_future_storage_networking
 
PPTX
IP storage
PDF
Storage networking fcf_co_eiscsivsn_technology
 
PDF
Cisco --introduction-to-storage-area-networking-technologies
PPT
PPTX
EE 281-SAN DECODED PRESENTATION
PPTX
Ee 281 san decoded presentation(1)
PPTX
SAN_Module3_Part1_PPTs.pptx
PDF
Module 06 (1).pdf
PPTX
Cisco storage networking protect scale-simplify_dec_2016
PPTX
Higher Speed, Higher Density, More Flexible SAN Switching
PPT
FCoE Origins and Status for Ethernet Technology Summit
I/O Consolidation in the Data Center -Excerpt
Introduction to storage
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
Introduction to Storage types - basic.ppt
Storage-Presentation-ShazzadHossain.ppt
Introduction to Storage.ppt
I O Continuity Group July 23, 2008 Seminar
Converged data center_f_co_e_iscsi_future_storage_networking
 
IP storage
Storage networking fcf_co_eiscsivsn_technology
 
Cisco --introduction-to-storage-area-networking-technologies
EE 281-SAN DECODED PRESENTATION
Ee 281 san decoded presentation(1)
SAN_Module3_Part1_PPTs.pptx
Module 06 (1).pdf
Cisco storage networking protect scale-simplify_dec_2016
Higher Speed, Higher Density, More Flexible SAN Switching
FCoE Origins and Status for Ethernet Technology Summit

More from xKinAnx (20)

PPTX
Engage for success ibm spectrum accelerate 2
PPTX
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive
PDF
Software defined storage provisioning using ibm smart cloud
PDF
Ibm spectrum virtualize 101
PDF
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...
PDF
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
PPTX
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
PPT
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
PPTX
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
PDF
Presentation disaster recovery in virtualization and cloud
PDF
Presentation disaster recovery for oracle fusion middleware with the zfs st...
PDF
Presentation differentiated virtualization for enterprise clouds, large and...
PDF
Presentation desktops for the cloud the view rollout
Engage for success ibm spectrum accelerate 2
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive
Software defined storage provisioning using ibm smart cloud
Ibm spectrum virtualize 101
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
Presentation disaster recovery in virtualization and cloud
Presentation disaster recovery for oracle fusion middleware with the zfs st...
Presentation differentiated virtualization for enterprise clouds, large and...
Presentation desktops for the cloud the view rollout
  • 1. Comparing Server I/O Consolidation Solutions: iSCSI, InfiniBand and FCoE
       Dr. Walter Dey, Cisco Systems
  • 2. SNIA Legal Notice
       Comparing Server I/O Consolidation: iSCSI, Infiniband and FCoE. © 2008 Storage Networking Industry Association. All Rights Reserved.
       The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions:
       - Any slide or slides used must be reproduced without modification.
       - The SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations.
       This presentation is a project of the SNIA Education Committee.
  • 3. Abstract
       This tutorial gives an introduction to server I/O consolidation: using one network interface technology (standard Ethernet, Data Center Ethernet, or InfiniBand) to support both IP applications and block-level storage (iSCSI, FCoE, and SRP/iSER). The benefits for the end user are discussed: less cabling, power and cooling. For the three solutions, iSCSI, InfiniBand and FCoE, we compare features such as infrastructure/cabling, protocol stack, performance, operating system drivers and support, management tools, security, and best design practices.
  • 4. Agenda
       - Definition of Server I/O Consolidation
       - Why Server I/O Consolidation
       - Introducing the 3 solutions: iSCSI, InfiniBand, FCoE
       - Differentiators
       - Conclusion
  • 5. Definition of Server I/O Consolidation
  • 6. What is Server I/O Consolidation
       IT organizations operate multiple parallel networks:
       - IP applications (including NFS, NAS, ...) over an Ethernet network *)
       - SAN over a Fibre Channel network
       - HPC/IPC over an InfiniBand network **)
       Server I/O consolidation combines the various traffic types onto a single interface and single cable. It is the first phase toward a Unified Fabric (a single network).
       *) This presentation covers only block-level storage solutions, not file-level (NAS, NFS, ...).
       **) HPC is not covered further; for the lowest latency requirements, InfiniBand is the best and most appropriate technology.
  • 7. I/O Consolidation Benefits
       Adapter: NIC for Ethernet/IP, HCA for InfiniBand, Converged Network Adapter (CNA) for FCoE.
       Customer benefit: fewer NICs, HBAs and cables, lower CapEx and OpEx.
       [Diagram: separate FC HBAs (FC traffic) and NICs (Ethernet traffic) are replaced by a single pair of adapters carrying iSCSI, InfiniBand or FCoE.]
  • 8. Why Server I/O Consolidation?
  • 9. The Drive for I/O Consolidation
       - Multicore, multisocket CPUs
       - Server virtualization software (hypervisors)
       - High demand for I/O bandwidth
       - Reductions in cables, power and cooling, therefore reducing OpEx/CapEx
       - Limited number of interfaces on blade servers
       - Consolidated input into a Unified Fabric
  • 10. New Network Requirements with Server Virtualization
       - Virtual networks are growing faster and larger than physical ones; network admins are getting involved in virtual interface deployments; the network access layer needs to evolve to support consolidation and mobility.
       - Multi-core computing drives virtualization and new networking needs: SAN attach rates are rising (growing from roughly 10% toward 40%), and users are planning now for 10GE server interfaces.
       - Virtualization enables the promise of blades: 10GE and FC are the highest-growth technologies within blades, and virtualization with consolidated I/O removes blade interface limitations.
       - Network virtualization enables CPU- and I/O-intensive workloads to be virtualized, enabling broader adoption of x86-class servers.
  • 11. 10GbE Drivers in the Datacenter
       - Multi-core CPU architectures allow bigger and multiple workloads on the same machine.
       - Server virtualization drives the need for more bandwidth per server due to server consolidation.
       - The growing need for network storage drives demand for higher network bandwidth to the server.
       In short, multi-core CPUs and server virtualization drive the demand for higher-bandwidth network connections.
  • 12. Evolution of Ethernet Physical Media
  • 13. Introducing the three solutions
  • 14. Server I/O Consolidation Solutions
       iSCSI
       - LAN: based on Ethernet and TCP/IP
       - SAN: encapsulates SCSI in TCP/IP
       InfiniBand
       - LAN: transports IP over InfiniBand (IPoIB); Socket Direct Protocol (SDP) between IB-attached servers
       - SAN: transports SCSI over the SCSI RDMA Protocol (SRP) or iSCSI Extensions for RDMA (iSER)
       - HPC/IPC: Message Passing Interface (MPI) over the InfiniBand network
       FCoE
       - LAN: based on Ethernet (Data Center Ethernet) and TCP/IP
       - SAN: maps and transports Fibre Channel over Data Center Ethernet (lossless Ethernet) *)
       *) Data Center Ethernet is an architectural collection of Ethernet extensions designed to improve Ethernet networking and management in the data center.
  • 15. Encapsulation Technologies
       [Protocol stack diagram: beneath the SCSI layer and operating system/applications sit five transports: iSCSI over TCP/IP over Ethernet (1, 10 Gbps); FCIP and iFCP carrying FCP* over TCP/IP; native Fibre Channel carrying FCP* (1, 2, 4, (8), 10 Gbps); SRP and iSER/iSCSI over InfiniBand (10, 20 Gbps); and FCoE carrying FCP* over Data Center Ethernet ((1), 10 Gbps). * Includes FC layer.]
  • 16. iSCSI
       A SCSI transport protocol that operates over TCP. It encapsulates SCSI CDBs (operational commands, e.g. read or write) and data into TCP/IP byte streams (as defined by SAM-2, the SCSI Architecture Model 2), and allows iSCSI initiators to access IP-based iSCSI targets (either natively or via an iSCSI-to-FC gateway).
       Standards status: RFC 3720 defines iSCSI; a collection of related RFCs describes the rest: RFC 3347 (iSCSI requirements), RFC 3721 (iSCSI naming and discovery), RFC 3723 (iSCSI security).
       Broad industry support: operating system vendors support their own iSCSI drivers; gateways (routers, bridges) and native iSCSI storage arrays are available.
  • 17. iSCSI Messages
       An iSCSI PDU is carried as: Ethernet header (22 bytes) | IP header (20 bytes) | TCP header (20 bytes) | iSCSI header (48 bytes) | iSCSI data | Ethernet trailer (4 bytes).
       - The IP header contains routing information so that the message can find its way through the network.
       - The TCP header provides the information necessary to guarantee delivery.
       - The iSCSI header explains how to extract SCSI commands and data.
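As a rough sanity check on this framing, the header sizes above imply the following per-frame overhead, assuming one iSCSI PDU per standard Ethernet frame (a simplification: a PDU can span several TCP segments, and header sizes vary with options):

```python
# Back-of-envelope iSCSI framing overhead, one PDU per Ethernet frame.
# Header sizes are taken from the slide above; the one-PDU-per-frame
# assumption is illustrative, not how a real initiator always behaves.

ETH_HEADER = 22   # Ethernet header as shown on the slide
ETH_TRAILER = 4   # Ethernet trailer (FCS)
IP_HEADER = 20    # IPv4, no options
TCP_HEADER = 20   # no options
ISCSI_BHS = 48    # iSCSI Basic Header Segment (RFC 3720)

MTU = 1500        # standard Ethernet payload size

headers_in_mtu = IP_HEADER + TCP_HEADER + ISCSI_BHS
scsi_payload = MTU - headers_in_mtu     # SCSI data left per frame
wire_frame = ETH_HEADER + MTU + ETH_TRAILER

print(f"SCSI payload per frame: {scsi_payload} bytes")
print(f"Wire efficiency: {scsi_payload / wire_frame:.1%}")
```

The point of the exercise: even before TCP retransmission or processing cost, roughly 7% of every frame is protocol headers, which is part of the CPU and bandwidth overhead discussed on slide 20.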
  • 18. iSCSI Topology
       iSCSI allows I/O consolidation and is proposed today as an I/O consolidation option. Both native (iSCSI storage array) and gateway solutions exist.
       [Diagram: an iSCSI initiator reaches an iSCSI storage array directly over IP/Ethernet, or reaches an FC target through a stateful iSCSI gateway that terminates the iSCSI session and opens an FCP session into the FC fabric.]
  • 19. View from the Operating System
       The operating system sees a 1 Gigabit Ethernet adapter and an iSCSI initiator.
  • 20. iSCSI-based I/O Consolidation
       - Overhead of the TCP/IP protocol stack; it is SCSI, not FC
       - LAN/metro/WAN scope (routable); security via IP protocols (IPsec)
       - Stateful gateway (iSCSI <-> FCP)
       - Mainly 1G at the initiator (server); 10G recommended for the iSCSI target
       - Can use existing Ethernet switching infrastructure
       - TCP Offload Engine (TOE) suggested (virtualized environment support?)
       - QoS or a separate VLAN for storage traffic suggested
       - New management tools; might require different multipath software
  • 21. InfiniBand
       Standards-based interconnect (http://www.infinibandta.org): a channelized, connection-based interconnect optimized for high performance computing, supporting server and storage attachment.
       Bandwidth capabilities (SDR/DDR):
       - 4x: 10/20 Gbps signaling, 8/16 Gbps actual data rate
       - 12x: 30/60 Gbps signaling, 24/48 Gbps actual data rate
       Built-in RDMA as a core capability for inter-CPU communication.
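The gap between the quoted signaling rates and the actual data rates comes from InfiniBand's 8b/10b line encoding (every 8 data bits are sent as 10 bits on the wire). A small sketch, using the lane counts and per-lane rates listed above:

```python
# InfiniBand signaling vs. data rate: with 8b/10b encoding the usable
# data rate is 8/10 of the raw signaling rate quoted per link width.

LANE_GBPS = {"SDR": 2.5, "DDR": 5.0}   # per-lane signaling rate

def data_rate_gbps(lanes, speed):
    """Effective data rate for an InfiniBand link `lanes` wide."""
    return lanes * LANE_GBPS[speed] * 8 / 10

for lanes in (4, 12):
    for speed in ("SDR", "DDR"):
        signaling = lanes * LANE_GBPS[speed]
        print(f"{lanes}x {speed}: {signaling:g} Gbps signaling, "
              f"{data_rate_gbps(lanes, speed):g} Gbps data")
```

This reproduces the 8/16 Gbps (4x) and 24/48 Gbps (12x) figures from the slide.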
  • 22. InfiniBand: SCSI RDMA Protocol (SRP)
       - SCSI semantics over an RDMA fabric; provides high-performance block-level storage access
       - Not IB-specific: the standard is specified by T10 (http://www.t10.org)
       - Host drivers tie into the standard SCSI interface in the kernel; storage appears as normal SCSI/FC disks to the local host
       - Can be used for end-to-end IB storage (no FC) and for SAN boot over IB
  • 23. InfiniBand: iSCSI Extensions for RDMA (iSER)
       - IETF standard that enables iSCSI to take advantage of RDMA, mainly offloading the data path
       - Leverages the iSCSI management and discovery architecture
       - Simplifies iSCSI protocol details such as data integrity management and error recovery
       - Not IB-specific; needs an iSER target to work end to end
       [Diagram: Linux driver stack from a user-space application through the VFS layer, file systems, block drivers, iSCSI and iSER to the InfiniBand HCA, connected over a reliable connection to an InfiniBand storage target.]
  • 24. InfiniBand Gateway Topology: Gateways for Network and Storage
       [Diagram: IB-connected servers use a single InfiniBand link for both storage and network traffic. An InfiniBand switch with stateful gateways bridges outward: a Fibre Channel-to-InfiniBand gateway for storage access into the Fibre Channel fabric, and an Ethernet-to-InfiniBand gateway for LAN access, alongside Ethernet-connected servers.]
  • 25. Physical vs. Logical View
       Physical view: servers connected via IB; SAN attached via public arbitrated loop; Ethernet attached via Gigabit EtherChannel.
       Logical view: hosts present a WWNN on the SAN and an IP address on the VLAN.
  • 26. View from the Operating System
  • 27. InfiniBand-based I/O Consolidation
       - Requires a new ecosystem (HCAs, cabling, switches); mostly copper cabling with limited distance; a datacenter protocol
       - New driver (SRP); stateful gateway from SRP to FCP (unless a native IB-attached disk array is used)
       - RDMA capability of the HCA is used, giving low CPU overhead
       - Payload is SCSI, not FC
       - Concept of virtual lanes and QoS in InfiniBand
  • 28. FCoE
       From a Fibre Channel standpoint, it is Fibre Channel encapsulated in Ethernet; from an Ethernet standpoint, it is just another ULP (Upper Layer Protocol).
       FCoE is an extension of Fibre Channel onto a lossless (Data Center) Ethernet fabric. It is managed like FC at the initiator, target and switch level, completely based on the FC model: the same host-to-switch and switch-to-switch behavior as FC (in-order frame delivery, FSPF load balancing), and the same WWNs, FC-IDs, hard/soft zoning, DNS and RSCN.
       Standards work in T11, IEEE and IETF is not yet final.
  • 29. FCoE Enablers
       Frame format: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS. The encapsulated portion is the same as a physical FC frame; the FCoE header carries control information (version, ordered sets such as SOF and EOF). The whole is an Ethernet V2 frame with Ethertype = FCoE.
       - 10 Gbps Ethernet
       - Lossless Ethernet (Data Center Ethernet): matches the buffer-to-buffer credits used in Fibre Channel to provide a lossless service
       - Ethernet jumbo frames (2180 bytes); max FC frame payload = 2112 bytes
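The size budget of a maximum-length FCoE frame can be tallied from these fields. The 14-byte FCoE header and 4-byte EOF padding used below follow the FC-BB-5 draft framing; treat the exact per-field sizes as illustrative rather than normative:

```python
# Size budget for a maximum-length FCoE frame. Field sizes follow the
# frame format sketched above (FC-BB-5 draft); the exact header layout
# here is an assumption for illustration.

ETH_HEADER = 14        # dest MAC + src MAC + EtherType (FCoE)
FCOE_HEADER = 14       # version, reserved bits, SOF
FC_HEADER = 24         # standard Fibre Channel frame header
FC_MAX_PAYLOAD = 2112  # maximum FC data field
FC_CRC = 4
EOF_PADDING = 4        # EOF byte plus reserved padding
ETH_FCS = 4            # Ethernet frame check sequence

frame = (ETH_HEADER + FCOE_HEADER + FC_HEADER
         + FC_MAX_PAYLOAD + FC_CRC + EOF_PADDING + ETH_FCS)
print(f"Max FCoE frame: {frame} bytes")
assert frame <= 2180   # fits the baby jumbo frame size quoted above
```

This is why FCoE needs "baby jumbo" frames: a full 2112-byte FC payload plus headers does not fit in a standard 1518-byte Ethernet frame, but does fit under the 2180-byte limit.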
  • 30. Data Center Ethernet
       Enhanced Ethernet for data center applications:
       - Priority Flow Control (priority PAUSE) *)
       - Link scheduling
       - Congestion management
       - Layer 2 multipathing
       - Configuration management
       - Transport of FCoE
       An enabling technology for I/O consolidation and the Unified Fabric.
       *) The T11 BB-5 group has only required that Ethernet switches support standard PAUSE and baby jumbo frames, which means no I/O consolidation support.
  • 31. FCoE Enabler: Priority Flow Control
       - Enables lossless fabrics for each class of service: PAUSE is sent per virtual lane when the buffer limit is exceeded
       - Network resources (e.g. input buffers and output queues) are partitioned between virtual lanes, and switch behavior is negotiable per virtual lane
       - InfiniBand uses a similar mechanism for multiplexing multiple data streams over a single physical link
       [Diagram: several traffic classes (default LAN, users, management, FC/FCoE) share one link; when the FC/FCoE lane's buffer fills, a PAUSE stops only that lane while the other classes continue.]
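The per-lane PAUSE behavior can be sketched as a toy queue model. Buffer sizes, priority numbering and method names here are illustrative, not taken from the 802.1Qbb specification:

```python
# Toy model of Priority Flow Control: PAUSE is asserted per priority
# (virtual lane), so a full storage queue does not stop LAN traffic.

class PfcPort:
    def __init__(self, buffer_limit):
        self.buffer_limit = buffer_limit          # frames per priority queue
        self.queues = {p: [] for p in range(8)}   # 8 priorities, as in 802.1p
        self.paused = {p: False for p in range(8)}

    def enqueue(self, priority, frame):
        """Accept a frame; assert PAUSE for this priority when its queue fills."""
        if self.paused[priority]:
            return False                          # sender must hold this priority
        self.queues[priority].append(frame)
        if len(self.queues[priority]) >= self.buffer_limit:
            self.paused[priority] = True          # per-priority PAUSE, not link-wide
        return True

    def dequeue(self, priority):
        """Drain one frame; release PAUSE once the queue has room again."""
        frame = self.queues[priority].pop(0)
        if len(self.queues[priority]) < self.buffer_limit:
            self.paused[priority] = False
        return frame

port = PfcPort(buffer_limit=3)
for i in range(3):
    port.enqueue(3, f"fcoe-{i}")       # storage traffic on priority 3 fills its queue
print(port.paused[3], port.paused[0])  # priority 3 is paused, priority 0 is not
print(port.enqueue(0, "lan-0"))        # LAN traffic keeps flowing
```

The contrast with classic link-level PAUSE is the whole point: a standard PAUSE frame would have stopped priority 0 as well, which is why the T11 footnote on the previous slide amounts to "no I/O consolidation support".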
  • 32. FCoE Enabler: Converged Network Adapter (CNA)
       [Diagram: today a server uses a 4G FC HBA (PCIe, Fibre Channel drivers) plus a 10GbE NIC (PCIe, Ethernet drivers). A CNA replaces both with a single PCIe adapter that presents Fibre Channel and Ethernet drivers to the operating system over one 10Gb Enhanced Ethernet link.]
  • 33. View from the Operating System
       Standard drivers, same management. The operating system sees a dual-port 10 Gigabit Ethernet adapter and dual-port 4 Gbps Fibre Channel HBAs.
  • 34. FCoE I/O Consolidation Topology
       [Diagram: servers connect over Data Center Ethernet with FCoE at the access layer, which fans out into the classical Ethernet LAN and the classical FC SAN A / SAN B fabrics.]
       - Dramatic reduction in adapters, switch ports and cabling: 4 cables down to 2 per server
       - Seamless connection to the installed base of existing SANs and LANs
       - High-performance frame mappers instead of gateway bottlenecks
       - Effective sharing of high-bandwidth links
       - Consolidated network infrastructure, faster infrastructure provisioning, lower TCO
  • 35. FCoE-based I/O Consolidation
       - FCP layer untouched
       - Requires baby jumbo frames (2180 bytes)
       - Non-routable datacenter protocol; datacenter-wide VLANs
       - Same management tools, drivers and multipathing software as for Fibre Channel HBAs
       - Simplified certifications with storage subsystem vendors
       - Requires a lossless (10G) Ethernet switching fabric
       - May require new host adapters (unless an FCoE software stack is used)
  • 36. Differentiators
  • 37. Storage Part of I/O Consolidation
                                  iSCSI         FCoE                   IB-SRP
       Payload                    SCSI          Fibre Channel          SCSI
       Transport                  TCP/IP        Data Center Ethernet   InfiniBand
       Scope                      LAN/MAN/WAN   Datacenter             Datacenter
       Bandwidth/Performance      Low/Medium    High                   High
       CPU Overhead               High          Low                    Low
       Gateway Overhead           High          Low                    High
       FC Security Model          No            Yes                    No
       FC Software on Host        No            Yes                    No
       FC Management Model        No            Yes                    No
       Initiator Implementation   Yes           Yes                    Yes
       Target Implementation      Yes           Yes/Future             Yes
       IP Routable                Yes           No                     N/A
  • 38. Storage Part of I/O Consolidation (continued)
                                  iSCSI         FCoE                   IB-SRP
       Virtual Lanes              No            Yes                    Yes
       Congestion Control         TCP           Priority Flow Control  Credit-based
       Gateway Functionality      Stateful      Stateless              Stateful
       Connection Oriented        Yes           No                     Yes
       Access Control             IP/VLAN       VLAN/VSAN              Partitions
       RDMA Primitives            Defined       Defined                Defined
       Latency                    100s of us    10s of us              us
       Adapter                    NIC           CNA                    HCA
  • 39. Conclusion
  • 40. Encapsulation Technologies
       [Protocol stack diagram repeated from slide 15: iSCSI over TCP/IP over Ethernet; FCIP and iFCP carrying FCP* over TCP/IP; native Fibre Channel FCP*; SRP and iSER/iSCSI over InfiniBand; FCoE carrying FCP* over Data Center Ethernet. * Includes FC layer.]
  • 41. Conclusion
       - Server I/O consolidation is driven by high I/O bandwidth demand
       - I/O bandwidth demand is driven by multicore/multisocket servers and virtualization
       - TCP/IP (iSCSI), Data Center Ethernet (FCoE) and InfiniBand (SRP, iSER) are generic transport protocols allowing server I/O consolidation
       - Server I/O consolidation is the first phase, consolidating input into a Unified Fabric
  • 42. Thank You!
  • 43. Other SNIA Tutorials
       Check out these SNIA tutorials:
       - Fibre Channel Technologies: Current and Future
       - IP Storage Protocols - iSCSI
       - InfiniBand Technology Overview
       - FCoE: Fibre Channel over Ethernet
  • 44. Q&A / Feedback
       Please send any questions or comments on this presentation to SNIA: tracknetworking@snia.org
       Many thanks to the following individuals for their contributions to this tutorial (SNIA Education Committee): Gilles Chekroun, Howard Goldstein, Marco Di Benedetto, Errol Roberts, Bill Lulofs, Carlos Pereira, Walter Dey, Dror Goldenberg, James Long.
  • 45. References
       - http://www.fcoe.com/
       - http://www.t11.org/fcoe
       - http://www.fibrechannel.org/OVERVIEW/FCIA_SNW_FCoE_WP_Final.pdf
       - http://www.fibrechannel.org/OVERVIEW/FCIA_SNW_FCoE_flyer_Final.pdf
       - http://www.fibrechannel.org/FCoE.html