Yuma POC “phase 2” @ comForte
Thomas Burg, March 2016
Results
Product visions
Agenda
> What is InfiniBand and why should you care
> The comForte Yuma POC phase 2 results
> A business case and technical vision for "Hybrid NonStop"
> The comForte Yuma product vision
Typical speeds for TCP/IP networking (*)
Current speed of an NSX system over 1 Gbit Ethernet:
> "Full roundtrip time" = 0.4 milliseconds = 400 microseconds = 2,500 TPS
> 165 minutes to move a TeraByte
Moving to InfiniBand, compared to 1 Gbit TCP/IP over Ethernet…
The awesome speed of IB:
> "Full roundtrip time" = 11 microseconds = 90,000 TPS = 34x faster
> 3 minutes to move a TeraByte = 55 times faster
(*) How fast is fast enough? Comparing apples to ….
When do I need to move data really, really fast…
> Big Data (Duh)
> Stock Exchanges
> Telco
> NonStop Hybrid (discussion to follow)
The comForte Yuma (a.k.a. NSADI) POC Phase 2
> Context
> Goals
> Results
> Moving data
> Moving files
> Other observations
Phase 2 of comForte Yuma POC - Context
>Now all on comForte hardware
> comForte owned and operated NS3 X1
> HPE ProLiant, RHEL Linux, Mellanox IB card
> Only a single InfiniBand cable, no switch on the "Linux end" of the connection
>Still with plenty of help from HPE folks
> Direct contact with key developers
> Direct contact with HPE product management
Thank you very much, HPE!
Phase 2 of comForte Yuma POC - Context
>comForte resources for Phase 2
> comForte: Thomas Burg, various sys admin folks for NonStop and Linux
> Gemini: Richard Pope, Dave Cikra
>Gemini Communications, Inc.
> www.geminic.com
> No direct sales
> Several 'comm' products over the decades, some of them now sold by comForte
Phase 2 of comForte Yuma POC - Goals
>Compare InfiniBand with 1 Gbit TCP/IP
> Like all NS3 X1 systems, the comForte system does not have 10 Gbit Ethernet
> Hence 10 Gbit could not be measured
> Compare 1 Gbit Ethernet with InfiniBand
>Re-measure some key data points for 'moving of data':
> Latency and throughput for 'typical' packet sizes
> Maximum throughput using 'optimal' packet sizes
>Can we do 'FTP over InfiniBand', and if so, how fast?
Phase 2 of comForte Yuma POC - Disclaimer
>It has been a tight race to GTUG
> The comForte NS3 X1 system was delivered in October 2015
> The Linux system was set up in January 2016
> The missing InfiniBand cable was ordered in February 2016
> InfiniBand was up and running in March 2016
>Please treat all numbers as preliminary. Things should only get better, but all numbers are the result of a POC rather than benchmarks of a finished product
The comForte Yuma POC Phase 2
Moving Data
Moving data – model used
>For "POC Phase 1" (TBC Nov 2015) we used the 'echo' approach
> Send some bytes of data
> Send the same packet size back
>For "POC Phase 2" (GTUG April 2016) we used the 'one way' approach
> Send some bytes of data
> Send a small packet ("acknowledgement") back
>Both models occur in real life, but we felt 'one way' is more common; a sketch of the 'one way' loop follows
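To make the model concrete, here is a minimal sketch of the 'one way' timing loop over rsockets. It assumes an already-connected rsocket descriptor and a peer that answers every payload with a 20-byte acknowledgement; the helper and its constants are illustrative, not the actual POC test code. Build on Linux with -lrdmacm.

```c
/* Minimal 'one way' timing loop sketch (illustrative, not the POC code).
 * rs is an already-connected rsocket descriptor. Link with -lrdmacm. */
#include <rdma/rsocket.h>
#include <sys/types.h>
#include <time.h>

#define DATA_SIZE 16384   /* 16 KBytes payload, as in the measurement */
#define ACK_SIZE  20      /* small "acknowledgement" packet */

/* Returns the average "full roundtrip time" in microseconds, or -1 on error. */
static double one_way_roundtrip_us(int rs, int iterations)
{
    static char data[DATA_SIZE];
    char ack[ACK_SIZE];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iterations; i++) {
        /* send the payload one way (a real test would handle partial sends) */
        if (rsend(rs, data, sizeof(data), 0) != (ssize_t)sizeof(data))
            return -1.0;
        /* ... then wait for the short ack coming back */
        ssize_t got = 0;
        while (got < ACK_SIZE) {
            ssize_t n = rrecv(rs, ack + got, ACK_SIZE - got, 0);
            if (n <= 0)
                return -1.0;
            got += n;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6
              + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    return us / iterations;
}
```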
Moving data 16 KBytes – results
>'one way' approach (see prior slide)
> 16 KBytes = 16384 bytes data, 20 bytes "ack"
> Data moves from NonStop to Linux

Transport over             Latency (microseconds)   Throughput (MegaBytes/s)
TCP/IP, 1 Gbit Ethernet    374                      43
InfiniBand                 11                       1413
InfiniBand gain            x 34                     x 32
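As a rough sanity check, the columns agree with each other: 16,384 bytes every 11 microseconds is about 1.5 GigaBytes/s, in line with the measured 1,413 MegaBytes/s, and 374 / 11 ≈ 34 is exactly the latency gain.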
Moving data, optimum packet size – results
>'one way' approach
> 'Optimal' packet size chosen for InfiniBand and TCP/IP, "ack" still 20 bytes
> Data moves from NonStop to Linux

Transport over             Packet size (bytes)   Data moved (GigaBytes)   Real time elapsed (seconds)   Throughput (MegaBytes/s)
TCP/IP, 1 Gbit Ethernet    262144                10                       97                            102
InfiniBand                 2097152               1024 (one TeraByte)      176                           5734
InfiniBand gain            -                     -                        -                             x 55

> Time to move one TeraByte over TCP/IP 1 Gbit Ethernet extrapolates to 9,900 seconds
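Note that the x 55 gain follows directly from the extrapolation: 9,900 seconds over 1 Gbit Ethernet is the 165 minutes of the earlier slide, 176 seconds over InfiniBand is about 3 minutes, and 9,900 / 176 ≈ 56, quoted as x 55.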
The comForte Yuma POC Phase 2
Moving files from NonStop to Linux
'FTP' over InfiniBand - introduction
>During POC phase 1, comForte and Gemini managed to connect the NonStop FTPSERV with a Linux open-source FTP client
> No modifications to NonStop FTPSERV (!). Used the comForte "TCP/IP to InfiniBand intercept framework" (see next slide)
> Converted the Linux open-source FTP client to rsockets
>The FTP protocol is NOT 'InfiniBand friendly'
>During POC phase 2 we focused on speed measurements, hence we wrote test programs with direct file I/O on both ends
comForte FTPSERV over IB POC (done for TBC 2015)
>This worked, but it needed some 'tricks'
>Performance was good, but not faster than 10 Gbit Ethernet, about 300 MB/s
>Works for Telnet as well
[Diagram: on HP NonStop, Guardian FTPSERV runs unchanged on top of the comForte TCP/IP Intercept Library, which talks via IPC to the comForte IB Daemon (OSS 64-bit PUT); the daemon connects over InfiniBand (rsockets) to the open-source FTP client ported to rsockets on Linux (Red Hat). Each side reads/writes its local file system.]
'FTP' over InfiniBand – changes for Phase 2 of POC
>No longer use the FTP protocol at all
>Have comForte code on both ends
>Full control, no extra IPC between Guardian and OSS layer
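For flavor, here is a hypothetical minimal framing for pushing one file over rsockets: the idea of "own protocol, comForte code on both ends", NOT comForte's actual wire format. The header layout, names, and chunk size are assumptions for illustration; the receiver mirrors this and writes the bytes to the Linux file system.

```c
/* Hypothetical file-push framing over rsockets (sender side shown).
 * Illustrative sketch only -- not comForte's actual protocol. */
#include <rdma/rsocket.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE (2 * 1024 * 1024)  /* the 'optimal' 2 MB IB packet size above */

struct xfer_header {
    uint32_t name_len;   /* length of the file name that follows (network order) */
    uint32_t size_hi;    /* total payload bytes, split into two 32-bit  */
    uint32_t size_lo;    /* halves for portability (network order)     */
};

/* rsend() may accept fewer bytes than asked for, so loop until done. */
static int send_all(int rs, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = rsend(rs, p, len, 0);
        if (n <= 0) return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

static int push_file(int rs, const char *name, FILE *fp, uint64_t size)
{
    static char chunk[CHUNK_SIZE];
    struct xfer_header hdr = {
        .name_len = htonl((uint32_t)strlen(name)),
        .size_hi  = htonl((uint32_t)(size >> 32)),
        .size_lo  = htonl((uint32_t)(size & 0xffffffffu)),
    };

    /* header and file name first, then the raw file contents in big chunks */
    if (send_all(rs, &hdr, sizeof(hdr)) < 0) return -1;
    if (send_all(rs, name, strlen(name)) < 0) return -1;

    size_t n;
    while ((n = fread(chunk, 1, sizeof(chunk), fp)) > 0)
        if (send_all(rs, chunk, n) < 0) return -1;
    return ferror(fp) ? -1 : 0;
}
```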
comForte 'FTP' over InfiniBand, April 2016
[Diagram: the comForte InfiniBand file server (OSS, 64-bit PUT) on HP NonStop reads the NonStop file system and talks over InfiniBand (rsockets) directly to the comForte InfiniBand file client (C, native Linux), which writes to the Linux file system on Linux (Red Hat).]
FTP over TCP/IP, 1 Gbit Ethernet
> A single file read maxes out at about 150 MByte/s [used a test program for this]
> TCP/IP maxes out at about 128 MByte/s
> FTP file transfer rates were measured as a function of the number of parallel transfers, for a 1 GigaByte file moving from NonStop to Linux
'FTP' over InfiniBand – POC results
> InfiniBand has no real limit here [it is about 6 GByte/s]
> 'FTP' file transfer rates again measured as a function of the number of parallel transfers, same file, but now over InfiniBand:
> Already moved from 111 MByte/s to 410 MByte/s – nearly four times faster
> The limitation on file transfer speed is now how effectively we can "scale out" the file I/O read operation
> This was measured on a two-CPU NS3 X1
Moving data from NonStop to Linux – testing the limits on Linux and InfiniBand
> Use the 'FTP over InfiniBand' POC framework
> Do *not* do the file read on NonStop; use test data created in memory
> Send data to Linux, flush to disk
> This measures
> Disk write speed on Linux
> How well the current comForte POC 'FTP over InfiniBand' file server and client scale
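A sketch of what the Linux receiving end of this measurement might look like, under the same caveats as the earlier sketches (illustrative code, not the POC sources): drain the rsocket and write the bytes through to disk, with a final fsync() so the flush is included in the measured elapsed time.

```c
/* Hypothetical Linux receiver for the "flush to disk" measurement.
 * fd is an open file descriptor for the output file. */
#include <rdma/rsocket.h>
#include <sys/types.h>
#include <unistd.h>

static int receive_and_flush(int rs, int fd, long long total)
{
    static char buf[2 * 1024 * 1024];   /* reuse the 2 MB chunk size */
    long long left = total;

    while (left > 0) {
        ssize_t n = rrecv(rs, buf, sizeof(buf), 0);
        if (n <= 0) return -1;
        if (write(fd, buf, (size_t)n) != n) return -1;
        left -= n;
    }
    return fsync(fd);   /* ensure the data actually reached the Linux disk */
}
```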
Moving data from NonStop to Linux – testing the limits on Linux and InfiniBand
> Scales up nicely on a two-CPU system with a single InfiniBand cable
What to make of the 'FTP over IB' results
> comForte can move data really fast from NonStop to Linux
> 6 GigaBytes per second seems doable on a fully scaled-out NS7 X1
> This includes flushing the data to Linux disk
> Potential use cases (???):
> Fast replacement for FTP
> Data replication
> Big data
> Backup
The comForte Yuma POC Phase 2
Other observations
Other observations during POC
> Setting up InfiniBand hardware on NonStop and Linux is new to sysadmin folks (at both the hardware and the software level)
> The InfiniBand rsockets interface is straightforward to code, both on NonStop and Linux (see the sketch below)
> The InfiniBand low-level verbs interface is NOT straightforward to code
> Did not get beyond very early POC code, but making progress
> InfiniBand and rsockets are rock solid, both on NonStop and RHEL Linux
> rsockets is only available from OSS PUT64 (not available under Guardian!). That's why comForte built a plug-compatible sockets DLL for Guardian socket apps (like CSL, FTPSERV, anything using TCP/IP under Guardian)
> The HPE NonStop InfiniBand team is very competent and helpful
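To illustrate the "straightforward to code" point, a minimal rsockets client sketch: it is ordinary BSD sockets code with an r prefix on each call. The address, port, and payload are placeholders; link with -lrdmacm.

```c
/* Minimal rsockets client -- BSD sockets with an r prefix.
 * Placeholder address/port/payload; link with -lrdmacm. */
#include <rdma/rsocket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7471);                      /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr); /* placeholder peer */

    int rs = rsocket(AF_INET, SOCK_STREAM, 0);          /* socket()  -> rsocket()  */
    if (rs < 0) { perror("rsocket"); return 1; }

    if (rconnect(rs, (struct sockaddr *)&addr, sizeof(addr)) < 0) { /* connect() -> rconnect() */
        perror("rconnect");
        rclose(rs);
        return 1;
    }

    const char msg[] = "hello over InfiniBand";
    rsend(rs, msg, sizeof(msg), 0);                     /* send()  -> rsend()  */
    rclose(rs);                                         /* close() -> rclose() */
    return 0;
}
```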
A business case and technical vision for "Hybrid NonStop"
Cloud Business Case "Looking versus Booking": Many NonStop systems as of today
[Diagram: a NonStop System with server classes encapsulating business logic in front of the DATABASE; transactions come in from "somewhere".]
Looking and Booking traffic (a typical use case for multiple NonStop customers in the travel sector):
Looking is stateless, 95+% of the traffic
By the nature of the transaction, it can be hosted in the cloud or on a commodity platform
Booking is transactional
By the nature of the data, you don't want to lose it, and it also has "state" (ACID) – run it on NonStop
Similar two-types-of-transaction logic applies to stock exchanges and potentially other verticals (Base24!?)
Cloud Business Case "Looking versus Booking": The high-level requirement/vision
[Diagram: the same NonStop System – server classes encapsulating business logic, DATABASE, transactions coming from "somewhere".]
Looking does not hit NonStop at all
… and is handled in the cloud (public or private)
… but how to move the 'state' (database) to the cloud???
Cloud Business Case "Looking versus Booking" – the InfiniBand and NonStop Hybrid vision
[Diagram: Web traffic (looking and booking) enters the CLOUD, where cloud-tier business logic runs against a database replicated into the cloud in near real-time; the NonStop System keeps the master DATABASE behind its server classes encapsulating business logic.]
Looking is indeed handled in the cloud; those transactions do not hit NonStop (the business tier knows it is Looking and hence simply uses the local DB copy)
The cloud tier sends Booking transactions to NonStop via InfiniBand (again, the business logic sees this is Booking, hence switches to NonStop)
Fast replication via InfiniBand enables one-way, "read only", near real-time replication to multiple Linux boxes in parallel, with low latency and low CPU overhead
The comForte Yuma Product Vision
> CSL/InfiniBand
> Become *the* company for IB-enabling applications and middleware products
> Work with ISVs, end users
CSL/InfiniBand
>Covers the "left half" of the InfiniBand Hybrid vision
>Available very soon…
CSL/InfiniBand
> A very natural extension of the CSL product
> A new option: CSL/InfiniBand
> First release will provide a C/C++ API on Linux
> To be announced @ GTUG Berlin, again at TBC 2016
> EAP-ready October 2016
> Come to the comForte presentation or talk to us to find out more
The broader comForte Yuma framework
> Can InfiniBand-enable *any* existing application on HPE NonStop
> Without application changes (!)
> Just like SecurData and CSL – it is a *framework*
> Existing application/middleware on NonStop
> InfiniBand will boost performance
> With comForte's experience from the POC and the framework to be announced @ TBC 2016, middleware/application vendors can focus on their features; comForte takes care of the InfiniBand details
> Customers/partners need to do the work on Linux themselves
> Rather easy via the rsockets approach
> comForte can provide a proxy (speed to be confirmed)
> comForte can help
> comForte vision: Become THE player in "InfiniBand low-level coding"
The broader comForte Yuma framework (contd.)
>Whom does this help in moving from TCP/IP to InfiniBand?
> NonStop ISVs
> Software houses with their own applications
> NonStop users with their own applications
>Interested?
> Come talk to comForte
Summary, Q&A
>HPE NonStop is now InfiniBand-enabled to connect to HPE ProLiant servers running RHEL
>InfiniBand is extremely fast
>Now that HPE has created an environment that we can build on with InfiniBand, comForte has several products which can be used in the "Hybrid space"
> CSL/InfiniBand
> InfiniBand enabling framework
> maRunga/InfiniBand (?)
>Time to start moving to Hybrid!?
Recap:
> "Full roundtrip time" = 11 microseconds = 90,000 TPS = 34x faster
> 3 minutes to move a TeraByte = 55 times faster
THANK YOU!
Questions?