Hyperconverged OpenStack and Storage
OpenStack Day - Melbourne 2017
Andrew Hatfield
Practice Lead - Cloud Storage and Big Data
June 1st, 2017
Agenda
● What is Hyper-converged Infrastructure (HCI)?
● Where should you consider it?
● HCI Resource Isolation
● HCI and NFV
What is Hyper-converged Infrastructure (HCI)?
● A server which runs both compute and storage processes is a hyper-converged node
● Today’s focus is on running OpenStack Nova Compute and Ceph Object Storage
Daemon services on the same server
● HCI is not a step backwards to local storage because it does not introduce a single point
of failure
Local Compute/Storage: Single Point of Failure
Each server only uses its local disks
Separate Storage/Compute: No SPoF
Servers store their data on a separate storage cluster
HCI: Local Storage without SPoF
Each server adds to Compute and Storage resources without being a single point of failure
Two clusters, Compute and Storage, but both are using resources from the same servers
OpenStack/Ceph HCI Architecture
Why is there demand for HCI?
● HCI lets us deploy a smaller footprint
○ HCI: 6 nodes for HA (+ director)
■ 3 controllers/monitors + 3 computes/OSDs
○ Non-HCI: 9 nodes for HA (+ director)
■ 3 controllers/monitors + 3 computes + 3 OSDs
● Further standardization of hardware
○ Hardware vendors may offer a discount for more of the same type of server
○ Fewer server types simplify operations
○ May enable more efficient use of hardware resources
Small Footprint Benefits
● An implication of NFV is putting more compute power at the edge of the network
● Devices placed at the edge of the network should be small and dense
● A PoC requires a smaller initial investment
HCI Reference Architecture
https://access.redhat.com/articles/2861641
All Heat environment files and scripts available at https://github.com/RHsyseng/hci
Applying Resource Isolation with Director
Nova Reserved Memory for HCI
We can figure out the reserved memory with a formula
left_over_mem = mem - (GB_per_OSD * osds)
number_of_guests = int(left_over_mem /
(average_guest_size + GB_overhead_per_guest))
nova_reserved_mem_MB = MB_per_GB * (
(GB_per_OSD * osds) +
(number_of_guests * GB_overhead_per_guest) )
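As an illustration, here is a minimal runnable sketch of the formula above; the node size, OSD count, and guest sizes are assumed example values, and the published nova_mem_cpu_calc.py script remains the authoritative calculator.

# Assumed example values; substitute your own node specs
MB_per_GB = 1024
mem = 256                    # total host RAM in GB
osds = 10                    # Ceph OSDs colocated on this node
GB_per_OSD = 5               # RAM reserved per OSD, in GB
average_guest_size = 2       # average guest RAM in GB
GB_overhead_per_guest = 0.5  # per-guest hypervisor overhead in GB

# RAM left for guests once the OSDs take their share
left_over_mem = mem - (GB_per_OSD * osds)

# How many average-sized guests fit into the remainder
number_of_guests = int(left_over_mem /
                       (average_guest_size + GB_overhead_per_guest))

# Memory Nova must not hand to guests: the OSD share plus guest overhead
nova_reserved_mem_MB = MB_per_GB * (
    (GB_per_OSD * osds) +
    (number_of_guests * GB_overhead_per_guest))

print(nova_reserved_mem_MB)  # 93184.0 with the values above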
Nova CPU Allocation Ratio for HCI
We can figure out the CPU allocation ratio with a formula
cores_per_OSD = 1.0
average_guest_util = 0.1 # 10%
nonceph_cores = cores - (cores_per_OSD * osds)
guest_vCPUs = nonceph_cores / average_guest_util
cpu_allocation_ratio = guest_vCPUs / cores
● The above is for rotational hard drives.
● Increase the cores per OSD if you use NVMe SSDs, so Ceph can keep up with them
https://github.com/RHsyseng/hci/blob/master/scripts/nova_mem_cpu_calc.py
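Again as a hedged, runnable sketch with assumed example values (a 36-core node and 10 OSDs); the nova_mem_cpu_calc.py script linked above is the real calculator.

# Assumed example values; adjust for your hardware and workload
cores = 36                # total physical cores on the node
osds = 10                 # colocated Ceph OSDs
cores_per_OSD = 1.0       # 1 core per OSD for rotational disks; raise for NVMe
average_guest_util = 0.1  # guests assumed ~10% CPU-bound on average

# Cores left for guests once each OSD has a dedicated core
nonceph_cores = cores - (cores_per_OSD * osds)

# At 10% average utilisation each remaining core can back ~10 vCPUs
guest_vCPUs = nonceph_cores / average_guest_util

# Ratio Nova uses when scheduling vCPUs against physical cores
cpu_allocation_ratio = guest_vCPUs / cores

print(round(cpu_allocation_ratio, 2))  # 7.22 with the values above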
Nova Memory and CPU Calculator
[Charts: calculator output for a "Busy VMs" scenario vs. a "Many VMs" scenario]
Tuning Ceph OSDs for HCI
● We limited Nova’s memory and CPU resources so Ceph and the OS can use them
● We now want to use numactl to pin the Ceph process to a NUMA node (a rough sketch follows this list)
● The socket to which Ceph should be pinned is the one that handles the network IRQs
○ If a workload is network intensive but not storage intensive, this may not hold true
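The sketch below only illustrates the idea and is not the Director deployment mechanism; the interface name eth0 and the OSD id are assumptions. It reads the NUMA node that owns the NIC from sysfs and builds a numactl command line.

# Minimal sketch: find the NUMA node owning the Ceph network interface
# and pin an OSD to it; the interface name and OSD id are placeholders
from pathlib import Path

iface = "eth0"  # assumed interface carrying Ceph traffic
numa_node = Path(f"/sys/class/net/{iface}/device/numa_node").read_text().strip()

# Bind both the CPUs and the memory of the OSD process to that NUMA node
cmd = ["numactl", f"--cpunodebind={numa_node}", f"--membind={numa_node}",
       "ceph-osd", "-i", "0"]
print(" ".join(cmd))  # review, then run with the right OSD id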
HCI for NFV
Typical VNF deployment today
[Diagram: compute node hosting VNF0 and VNF1; each VNF runs its DPDK data plane on bonded SR-IOV virtual functions (VF0-VF3, backed by PF0-PF3) attached to the fabric 0 and fabric 1 provider networks, and a kernel management interface (mgt) through OVS on bonded regular NICs to the VNFs and OpenStack Management (tenant) network, which also carries DHCP+PXE]
Depending on the VNF, it is connected to one or more fabric networks, typically:
● One fabric for the control plane
● One fabric for the user plane (end-user data: mobile traffic, web, ...)
SR-IOV Host/VNF guest resource partitioning
Typical dual-socket compute node with 18 cores per NUMA node (E5-2699 v3)
[Diagram: per-NUMA-node CPU partitioning into host, Ceph OSD, and per-VNF cores on NUMA node0 and node1, with the SR-IOV and management NICs attached to their local NUMA node; one core = 2 hyperthreads]
This looks like RT but is not RT, just partitioning (a rough sketch of such a CPU split follows this slide)
SR-IOV interface bonding is handled by the VNF
All host IRQs are routed to host cores
All VNFx cores are dedicated to VNFx
● Isolation from other VNFs
● Isolation from the host
● Isolation from Ceph Mon
● 21 Mpps/core with zero frame loss over a 12-hour run (I/O bound due to the Niantic PCIe x4 switch): equal to bare-metal performance
● The numbers above show that the virtualization/SR-IOV cost is nil and that the VNF is not preempted or interrupted
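A rough sketch of that partitioning idea, assuming the common Linux CPU numbering where the hyperthread sibling of CPU n is n + total physical cores, and using illustrative pool sizes (2 host cores per socket, 4 Ceph OSD cores on NUMA node 0); a real deployment expresses this through Director templates rather than code like this.

# Sketch: carve a dual-socket, 18-cores-per-NUMA-node, hyperthreaded host
# into host, Ceph OSD, and VNF CPU pools; pool sizes are illustrative only
CORES_PER_SOCKET = 18
SOCKETS = 2
TOTAL_CORES = CORES_PER_SOCKET * SOCKETS

def threads(core):
    # Both hyperthreads of a physical core under the assumed numbering
    return [core, core + TOTAL_CORES]

def partition(host_cores_per_socket=2, ceph_cores=4, ceph_numa=0):
    pools = {"host": [], "ceph_osd": [], "vnf": []}
    for socket in range(SOCKETS):
        cores = [socket * CORES_PER_SOCKET + c for c in range(CORES_PER_SOCKET)]
        host, rest = cores[:host_cores_per_socket], cores[host_cores_per_socket:]
        pools["host"] += [t for c in host for t in threads(c)]
        if socket == ceph_numa:  # pin the OSDs to a single NUMA node
            osd, rest = rest[:ceph_cores], rest[ceph_cores:]
            pools["ceph_osd"] += [t for c in osd for t in threads(c)]
        pools["vnf"] += [t for c in rest for t in threads(c)]
    return pools

for name, cpus in partition().items():
    print(f"{name}: {len(cpus)} threads -> {sorted(cpus)}")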
OVS-DPDK NFV deployment
[Diagram: compute node where VNF0 and VNF1 attach their DPDK data-plane interfaces (eth0, eth1) to OVS-DPDK bridges running over bonded DPDK NICs towards the fabric 0 and fabric 1 provider networks; kernel management interfaces (mgt) use the Base (VNFs mgt) provider network, and OpenStack APIs plus DHCP+PXE run over bonded regular NICs]
RHOSP10 OVS-DPDK
Host/VNF guest resource partitioning
Typical dual-socket compute node with 18 cores per NUMA node (E5-2699 v3)
[Diagram: per-NUMA-node CPU partitioning into host management, OVS-DPDK, Ceph OSD, VNF0, and VNF1 cores, with the DPDK NICs and VNF management attached; one core = 2 hyperthreads]
Same as SR-IOV, except for a 4th partition for OVS-DPDK:
● A CPU list dedicated to OVS-DPDK
● Huge pages reserved for OVS-DPDK
Not mixing VNF management traffic and telco traffic requires additional NICs, as NICs cannot be shared between OVS-DPDK and the host
● 3.5 Mpps/core in a PVP configuration (OVS-DPDK 2.5) with zero packet loss
Performance Testing HCI with NFV: Results
[Chart: VNF throughput while Ceph disk I/O activity runs on the same nodes]
Performance Testing HCI with NFV: Results Summary
● When disk I/O activity started, VNFs experienced a momentary ~4% degradation in throughput, then experienced ~2% variation in throughput, but on average maintained 99% of the throughput measured without disk I/O
● Each VM was reading an average of ~200 MB/sec; lower I/O rates should have a much lower impact on the VNFs
● Investigations into the interactions between disk I/O and VNF performance are underway at Red Hat
● Our goal is to completely eliminate the effect of disk I/O on VNF performance
THANK YOU
plus.google.com/+RedHat
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHatNews