Extending OpenVIM R3 to support Unikernels (and Xen)
Paolo Lungaroni (1), Claudio Pisa(2), Stefano Salsano(2,3), Giuseppe Siracusano(3), Francesco Lombardo(2)
(1)Consortium GARR, Italy; (2)CNIT, Italy; (3)Univ. of Rome Tor Vergata, Italy
Stefano Salsano
Project coordinator – Superfluidity project
Univ. of Rome Tor Vergata, Italy / CNIT, Italy
ETSI OSM-Mid-Release#4 meeting, February 8th, Roma, Italy
A super-fluid, cloud-native, converged edge system
Outline
• Superfluidity project goals and approach
• Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers)
• Unikernels orchestration over OpenStack, OpenVIM and Nomad –
Performance evaluation
• Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support
Unikernels orchestration – Live demo
• Details of OpenVIM extensions for Unikernels support (proposal for a patch…)
2
Superfluidity project
Superfluidity Goals
• Instantiate network functions and services on-the-fly
• Run them anywhere in the network (core, aggregation, edge), across
heterogeneous infrastructure environments (computing and networking), taking
advantage of specific hardware features, such as high performance accelerators,
when available
Superfluidity Approach
• Decomposition of network components and services into elementary and
reusable primitives (“Reusable Functional Blocks – RFBs”)
• Platform-independent abstractions, permitting reuse of network functions
across heterogeneous hardware platforms
3
The Superfluidity vision
4
[Figure: granularity vs. time scale. Current NFV technology deploys big VMs over
days/hours down to minutes; superfluid NFV technology targets small components and
micro operations at the scale of seconds to milliseconds]
• From VNF (Virtual Network Functions) to RFB (Reusable Functional Blocks)
• Heterogeneous RFB execution
environments
- Hypervisors
- Modular routers
- Packet processors
…
Outline
• Superfluidity project goals and approach
• Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers)
• Unikernels orchestration over OpenStack, OpenVIM and Nomad –
Performance evaluation
• Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support
Unikernels orchestration – Live demo
• Details of OpenVIM extensions for Unikernels support (proposal for a patch…)
5
Extending the ETSI NFV models to support Unikernels
6
• In the NFV models, a Virtual Network Function (VNF) is decomposed into
Virtual Deployment Units (VDUs)
• We extended the VDU information elements in the model to support
Unikernel VDUs (based on the ClickOS Unikernel)
• “Regular” VDUs based on traditional VMs and Unikernel VDUs can
coexist in the same VNF Descriptor
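As an illustration, a VDU fragment with the extensions could look like the sketch below. This is a hypothetical rendering: the hypervisor and osImageType attributes come from our OpenVIM extension (described later in this deck), while the surrounding field names only approximate the IFA011 information elements.

vdu:
  - vduId: vdu-clickos-ping            # hypothetical identifier
    virtualComputeDesc: vcd-1vcpu-8mb  # 1 vCPU, 8 MB RAM, as used for ClickOS
    swImageDesc: clickos-ping          # ClickOS kernel image
    hypervisor: xen-unik               # extension: kvm | xen-unik | xenhvm
    osImageType: clickos               # extension: mandatory when hypervisor = xen-unik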
Working prototype (see the live demo!)
7
[Diagram: ETSI release 2 descriptors (NSDs, VNFDs) feed the orchestrator prototype
RDCL 3D, which drives the VIM (OpenVIM) on top of XEN]
• Our orchestrator prototype (RDCL 3D) uses the enhanced VDU descriptors and
interacts with OpenVIM
• OpenVIM has been enhanced to support XEN and Unikernels
• We configured XEN to support both regular VMs (HVM) and Click Unikernels
Working prototype (see the live demo!)
8
[Screenshot: a regular VM (XEN HVM) running alongside 3 Unikernel VMs (ClickOS)]
Unikernels Chaining Proof of Concept
9
[Diagram: RDCL 3D pushes extended ETSI NFV Release 2 models to OpenVIM; from the
extended OpenVIM YAML descriptors, OpenVIM instantiates 1 “regular” VM (Alpine Linux,
from the “Regular” Linux Alpine VM Descriptor) and 3 Unikernel VMs (ClickOS): a VLAN
Encapsulator/Decapsulator, a Firewall, and an ICMP responder, all attached to OpenVSwitch]
Some details of the working prototype
10
[Diagram: ETSI release 2 descriptors (NSDs, VNFDs) and Click Configurations feed the
RDCL 3D GUI and the RDCL 3D Agent; RDCL 3D drives the VIM (OpenVIM), which controls
XEN through libvirt and deploys the ClickOS images]
ClickOS images are prepared
“on the fly” by the RDCL 3D agent
using the Click Configuration files
Unikernel Chaining Proof of Concept
• Regular VM
– Pings the ICMP responder over a VLAN (see the sketch after this list)
• VLAN Encapsulator/Decapsulator
– Decapsulates the VLAN header (and re-encapsulates in the return path)
• Firewall
– Lets through only ARP and IP packets with TOS == 0xcc
• ICMP Responder
– Responds to ARP and ICMP echo requests
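For concreteness, the traffic from the regular VM can be generated as in the sketch below (hypothetical commands for the Alpine VM; the interface name, addresses and VLAN ID follow the chain scenario on the next slides, and iputils ping accepts a hex ToS value via -Q):

$ ip link add link eth0 name eth0.100 type vlan id 100   # VLAN 100 sub-interface
$ ip addr add 10.10.0.4/24 dev eth0.100
$ ip link set eth0.100 up
$ ping -Q 0xcc 10.10.0.3   # ToS 0xcc, so the ClickOS firewall lets the echo requests through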
09/02/2018 CNIT 11
ClickOS configurations
09/02/2018 CNIT 12
[Diagram: chain inside the Compute Node (eth3 IP: 10.10.0.2): VLAN Decap/Encap
(VLAN ID: 100) → Firewall (ALLOW: ToS=0xCC) → Ping Responder (IP: 10.10.0.3)]
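// Ping Responder (ClickOS) configuration: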
define($IP 10.10.0.3);
define($MAC 00:15:17:15:5d:75);
source :: FromDevice(0);
sink :: ToDevice(1);
// classifies packets
c :: Classifier(
12/0806 20/0001, // ARP Requests go to output 0
12/0806 20/0002, // ARP Replies to output 1
12/0800, // ICMP Requests to output 2
-); // without a match to output 3
arpq :: ARPQuerier($IP, $MAC);
arpr :: ARPResponder($IP $MAC);
source -> Print -> c;
c[0] -> ARPPrint -> arpr -> sink;
c[1] -> [1]arpq;
Idle -> [0]arpq;
arpq -> ARPPrint -> sink;
c[2] -> CheckIPHeader(14) -> ICMPPingResponder() ->
EtherMirror() -> sink;
c[3] -> Discard;
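// VLAN Encapsulator/Decapsulator (ClickOS) configuration: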
source0 :: FromDevice(0);
sink0 :: ToDevice(1);
source1 :: FromDevice(1);
sink1 :: ToDevice(0);
VLANDecapsulator :: VLANDecap();
VLANEncapsulator :: VLANEncap(100);
//source0 -> VLANDecapsulator -> EnsureEther() -> sink0;
source0 -> VLANDecapsulator -> sink0;
source1 -> VLANEncapsulator -> sink1;
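// Firewall (ClickOS) configuration: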
source0 :: FromDevice(0);
sink0 :: ToDevice(1);
source1 :: FromDevice(1);
sink1 :: ToDevice(0);
c :: Classifier(
12/0806, // ARP goes to output 0
12/0800 15/cc, // IP to output 1, only if QoS == 0xcc
-); // without a match to output 2
source0 -> c;
c[0] -> sink0;
// c[1] -> CheckIPHeader -> ipf -> sink0;
c[1] -> sink0;
c[2] -> Print -> Discard;
source1 -> Null -> sink1;
ClickOS chain scenario
09/02/2018 CNIT 13
[Diagram: Alpine Linux VM (eth0.100: 10.10.0.4) sends VLAN 100 traffic into the
Compute Node (eth3 IP: 10.10.0.2), through the VLAN Encap/Decap (VLAN ID: 100) and
the Firewall (ALLOW: ToS=0xCC), to the Ping Responder (IP: 10.10.0.3)]
Status checks after VM startup
09/02/2018 CNIT 14
After the VM startup completes, we can check the status via the Libvirt and Xen command line
tools on the target compute node
• On Libvirt CLI:
$ virsh -c xen:/// list
Id Name State
----------------------------------------------------
105 vm-clickos-ping2_56c0edb0-5b4c-11e7-ad8f-0cc47a7794be running
• On Xen console:
$ sudo xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 10238 8 r----- 96646.2
vm-clickos-ping2_56c0edb0-5b4c-11e7-ad8f-0cc47a7794be 105 8 1 r----- 227.6
Live Demo
Outline
• Superfluidity project goals and approach
• Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers)
• Unikernel orchestration over OpenStack, OpenVIM and Nomad –
Performance evaluation
• Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support
Unikernels orchestration – Live demo
• Details of OpenVIM extensions for Unikernels support (proposal for a patch…)
16
OpenVIM extensions
1. Extension to OpenVIM to support Xen and Unikernel VMs
2. Extension to OpenVIM for a different networking model (multiple
OvS bridges)
09/02/2018 CNIT 17
OpenVIM extension 1 (Xen/Unikernels)
Extension to OpenVIM to support Unikernel VMs
• Xen hypervisor support
– Unikernel support (in particular, ClickOS Unikernels)
– Full HVM machines support
– Coexistence on the same compute node of Unikernels and HVM VMs
In order to specify that we are using the Xen hypervisor and the Unikernels, the
configuration is extended by adding new tags in the object descriptor files.
• No changes to the openvim.cfg configuration file.
This extension works both in “development” mode and in “normal” mode.
09/02/2018 CNIT 18
OpenVIM extension 1 (Xen/Unikernels)
• The patch extends the behavior of OpenVIM, enabling support for Xen:
– Orchestrate a Unikernel machine such as ClickOS
– Orchestrate standard Virtual Machines with the Xen hypervisor
• Backward compatibility is guaranteed with the original OpenVIM modes
(“normal”, “test”, “host only”, “OF only”, “development”)
• NB: We execute our experiments in “development” mode, to run them with
hardware not meeting all the requirements for “normal” OpenVIM mode
09/02/2018 CNIT 19
Extension 1 : New descriptor tags
Server (VMs) descriptor new tags:
• hypervisor [kvm|xen-unik|xenhvm] defines which hypervisor is used. “kvm”
reflects the original mode, while "xen-unik" and "xenhvm" start Xen with support
for Unikernels and full VMs respectively.
• osImageType: [clickos] defines the type of Unikernel image to start. It is
mandatory if hypervisor = xen-unik. Currently, only ClickOS Unikernels are
supported, but this tag allows future support of different types of Unikernels.
Host (Compute Nodes) descriptor new flag:
• hypervisors (comma separated list of kvm,xen-unik,xenhvm) defines the
hypervisors supported by the compute node. NB: in a compute node kvm and
xen* are mutually exclusive, while xenhvm and xen-unik can coexist.
09/02/2018 CNIT 20
Extension 1 : Scheduling enhancements
• The Compute Node is now selected based on the available resources
AND the type of hypervisor.
• If a specific Compute Node is requested for a Server (using the
“hostId” tag), a consistency check between the requested hypervisor
type and the supported hypervisor type in the Compute Node is
performed. An error is returned if the hypervisor type is not
supported.
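For illustration, the sketch below combines the Host and Server descriptors shown later in this deck; pinning a kvm Server to this host would fail the consistency check (the hostId reuses the commented value from the Server example, and the server name is hypothetical):

# Host descriptor advertises: "hypervisors": "xen-unik,xenhvm"
server:
  name: vm-kvm-example                              # hypothetical Server
  hostId: '195d4fb2-54fe-11e7-ad8f-0cc47a7794be'    # pinned compute node
  hypervisor: "kvm"                                 # not supported by the host -> error returned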
09/02/2018 CNIT 21
Extension 2: OpenVIM Networking enhancements
NB: This extension is independent of the previous one; we used it to
support the VNF chaining in the proposed example
• Networking enhancements
– Additional networking model: a separate OVS datapath (within the same OVS
instance) is associated to each OpenVIM network
• It allows transparent L2 networking instead of VLAN based
• It could be extended to work across multiple compute nodes (with VXLAN
tunneling)
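At the OVS level, this model boils down to commands like the sketch below (illustrative ovs-vsctl calls, not the literal OpenVIM code; the remote IP is hypothetical, while the bridge name matches the libvirt XML shown later):

$ ovs-vsctl add-br ovim-firewall_ping      # one dedicated bridge per OpenVIM network
$ ovs-vsctl add-port ovim-firewall_ping vxlan0 \
    -- set interface vxlan0 type=vxlan options:remote_ip=10.0.12.2   # optional cross-node extension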
09/02/2018 CNIT 22
Extension 2: An additional Networking Model
09/02/2018 CNIT 23
[Diagram: two Compute Nodes linked by a VXLAN Tunnel. On the first, Open vSwitch
Bridges 1, 2, …, M connect VNF 1, VNF 2, VNF 3, …, VNF N and the External Network;
on the second, Open vSwitch Bridges α, β, …, ω connect VNF a, VNF b, …, VNF z.
One OVS bridge per OpenVIM network]
OpenVIM Instantiation sequence
09/02/2018 CNIT 24
[Diagram: VNF Descriptor files → OpenVIM CLI tool (./openvim vm-create clickos-ping.yaml)
→ POST create_server → OpenVIM daemon → Libvirt XML descriptor → Compute Node]
OpenVIM supports an OpenStack-like REST API on the
Northbound side.
A CLI tool called openvim sends commands over the
REST API to the OpenVIM daemon. This tool converts the
YAML descriptors to JSON format and sends them via REST.
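The same request can be issued directly against the REST API, as in the hypothetical sketch below: the port and URL path are assumptions for illustration, not the documented OpenVIM endpoint (the YAML-to-JSON step assumes PyYAML is installed):

# hypothetical endpoint and port, for illustration only
$ python -c 'import yaml, json; print(json.dumps(yaml.safe_load(open("clickos-ping.yaml"))))' > clickos-ping.json
$ curl -X POST http://localhost:9080/openvim/servers \
       -H "Content-Type: application/json" -d @clickos-ping.json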
OpenVIM Flavor and Image descriptors for a Unikernel
• Flavor
flavor:
  name: CloudVM_1C_8M
  description: clickos cloud image with 8M, 1core
  ram: 8
  vcpus: 1
$ openvim flavor-create flavor_1C_8M.yaml
5a258552-0a51-11e7-a086-0cc47a7794be CloudVM_1C_8M
• Image
image:
  name: clickos-ping
  description: click-os ping image
  path: /var/lib/libvirt/images/clickos_ping
  metadata:
    use_incremental: "no"
$ openvim image-create vmimage-clickos-ping.yaml
c418a8ec-10c1-11e7-ad8f-0cc47a7794be clickos-ping
25
An example of Unikernel «Server» descriptor (extension 1)
09/02/2018 CNIT 26
New tags
• Server
server:
  name: vm-clickos-ping2
  description: ClickOS ping vm with simple requisites.
  imageRef: 'c418a8ec-10c1-11e7-ad8f-0cc47a7794be'
  flavorRef: '5a258552-0a51-11e7-a086-0cc47a7794be'
  # hostId: '195d4fb2-54fe-11e7-ad8f-0cc47a7794be'
  start: "yes"
  hypervisor: "xen-unik"
  osImageType: "clickos"
  networks:
  - name: vif0
    uuid: f136bd32-3fd8-11e7-ad8f-0cc47a7794be
    mac_address: "00:15:17:15:5d:74"
$ openvim vm-create clickos-ping.yaml
56c0edb0-5b4c-11e7-ad8f-0cc47a7794be vm-clickos-ping2 Created
«Host» descriptor (extension 1)
• Host
{
  "host":{
    "name": "nec-test-408-eth3",
    "user": "compute408",
    "password": "*****",
    "ip_name": "10.0.11.2"
  },
  "host-data":
  {
    "name": "nec-test-408-eth3",
    "ranking": 300,
    "description": "compute host for openvim testing",
    "ip_name": "10.0.11.2",
    "features": "lps,dioc,hwsv,ht,64b,tlbps",
    "hypervisors": "xen-unik,xenhvm",
    "user": "compute408",
    "password": "*****",
    ...
}
09/02/2018 CNIT 27
New tag
OpenVIM Network descriptor (Extension 2)
• Net
network:
  name: firewall_ping
  type: bridge_data
  provider: ovsbr:firewall_ping
  enable_dhcp: false
  shared: false
$ openvim net-create net-firewall_ping.yaml
f136bd32-3fd8-11e7-ad8f-0cc47a7794be firewall_ping ACTIVE
09/02/2018 CNIT 28
New value
Current approach
Libvirt XML descriptor for ClickOS Unikernel
generated by OpenVIM
<domain type='xen'>
<name>vm-clickos-ping2_56c0edb0-5b4c-11e7-ad8f-0cc47a7794be</name>
<uuid>56c0edb0-5b4c-11e7-ad8f-0cc47a7794be</uuid>
<memory unit='KiB'>8192</memory>
<currentMemory unit='KiB'>8192</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch='x86_64' machine='xenpv'>xen</type>
<kernel>/var/lib/libvirt/images/clickos_ping</kernel>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-model'></cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<console type='pty'>
<target type='xen' port='0'/>
</console>
<interface type='bridge'>
<source bridge='ovim-firewall_ping'/>
<script path='vif-openvswitch'/>
<mac address='00:15:17:15:5d:74'/>
</interface>
</devices>
</domain>
09/02/2018 CNIT 29
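For reference, a generated descriptor like this one can also be launched by hand with the libvirt CLI (a sketch, assuming the XML is saved as clickos-ping.xml on the compute node; OpenVIM performs the equivalent operations through the libvirt API):

$ virsh -c xen:/// create clickos-ping.xml   # starts a transient ClickOS domain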
Installation of the extended OpenVIM (R2, R3)
• Download the extended version of OpenVIM in your system from our repository:
$ git clone https://github.com/superfluidity/openvim4unikernels.git
• Install OpenVIM via bash script:
openvim/scripts$ ./install-openvim.sh --noclone
• Our extensions are in the “unikernel” branch:
openvim/scripts$ git checkout unikernel
unikernel/scripts$ ./unikernels_patch_vim_db.sh -u vim -p vimpw install
After updating the database, you can start OpenVIM as usual.
09/02/2018 CNIT 30
Repository structure
• The Unikernel folder contains some tools and examples that
are useful to start working with our patch.
• Descriptors contains some preconfigured ClickOS images
and the descriptors to use as examples to start working
with the Unikernels.
• Docs contains documentation
• The Scripts folder contains a bash script that updates the
OpenVIM database to support the new fields for
unikernel operations, and a script for a quick example to
start working with ClickOS.
09/02/2018 CNIT 31
Conclusions – Feedback
• We have designed and implemented a solution for the combined
orchestration of regular VMs and Unikernels
• The OpenVIM implementation has been extended. We can propose two
patches:
– 1. Extension to support Xen and Unikernels
– 2. Extension for multi OvS bridges networking model
32
Thank you. Questions?
Contacts
Stefano Salsano
University of Rome Tor Vergata / CNIT
stefano.salsano@uniroma2.it
These tools are available on GitHub (Apache 2.0 license)
https://github.com/superfluidity/RDCL3D
https://github.com/superfluidity/openvim4unikernels
https://github.com/netgroup/vim-tuning-and-eval-tools
http://superfluidity.eu/
The work presented here only covers a subset of the work performed in the project
33
References
• SUPERFLUIDITY project Home Page http://superfluidity.eu/
• G. Bianchi, et al. “Superfluidity: a flexible functional architecture for 5G networks”, Transactions on
Emerging Telecommunications Technologies 27, no. 9, Sep 2016
• P. L. Ventre, C. Pisa, S. Salsano, G. Siracusano, F. Schmidt, P. Lungaroni,
N. Blefari-Melazzi, “Performance Evaluation and Tuning of Virtual Infrastructure Managers for
(Micro) Virtual Network Functions”,
IEEE NFV-SDN Conference, Palo Alto, USA, 7-9 November 2016
http://netgroup.uniroma2.it/Stefano_Salsano/papers/salsano-ieee-nfv-sdn-2016-vim-performance-for-unikernels.pdf
• S. Salsano, F. Lombardo, C. Pisa, P. Greto, N. Blefari-Melazzi,
“RDCL 3D, a Model Agnostic Web Framework for the Design and Composition of NFV Services”,
submitted paper, https://arxiv.org/abs/1702.08242
34
References – Speed up of Virtualization Platforms / Guests
• Light VM project http://cnp.neclab.eu/projects/lightvm/
• F. Manco, C. Lupu, F. Schmidt, J. Mendes, Simon Kuenzer, S. Sati, K. Yasukata, C. Raiciu, F. Huici,
“My VM is Lighter (and Safer) than your Container”, SOSP 2017
• J. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, R. Bifulco, F. Huici, “ClickOS and the art
of network function virtualization”, NSDI 2014, 11th USENIX Conference on Networked
Systems Design and Implementation, 2014.
• F. Manco, J. Martins, K. Yasukata, J. Mendes, S. Kuenzer, F. Huici,
“The Case for the Superfluid Cloud”, 7th USENIX Workshop on Hot Topics in Cloud Computing
(HotCloud 15), 2015
35
References – Unikraft project
• http://cnp.neclab.eu/projects/unikraft/
• https://www.xenproject.org/developers/teams/unikraft.html
The fundamental drawback of unikernels is that they require that applications be manually
ported to the underlying minimalistic OS (e.g. having to port nginx, snort, mysql or memcached to
MiniOS or OSv); this requires both expert work and often considerable amount of time. In
essence, we need to pick between either high performance with unikernels, or no porting effort
but decreased performance and decreased efficiency with standard OS/VM images. The goal of
this proposal is to change this status quo by providing a highly configurable unikernel code base;
we call this base Unikraft.
36
Background information
Outline
• Superfluidity project goals and approach
• Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers)
• Unikernel orchestration over OpenStack, OpenVIM and Nomad –
Performance evaluation
• Extending ETSI NFV Release 2 models (NFV-IFA 011&014) and OpenVIM to
support Unikernels orchestration – Live demo
• Details of OpenVIM extensions for Unikernels support
38
Unikernels: a tool for superfluid virtualization
Containers
e.g. Docker
• Lightweight (not enough?)
• Poor isolation
39
Hypervisors (traditional VMs)
e.g. XEN, KVM, VMware…
• Strong isolation
• Heavyweight
Unikernels
Specialized VMs (e.g. MiniOS, ClickOS…)
• Strong isolation
• Very Lightweight
• Very good security properties
They break the “myth” of VMs being heavyweight…
What is a Unikernel?
• Specialized VM: single application +
minimalistic OS
• Single address space,
co-operative scheduler, so low
overheads
• Unikernel virtualization platforms
extend existing hypervisors (e.g. XEN)
40
[Diagram: a general purpose OS (e.g., Linux, FreeBSD) stacks apps 1..N in user space
over drivers 1..N in kernel space; a Unikernel runs a single app over virtual drivers
and a minimalistic OS (e.g., MiniOS, OSv) in a single address space]
ClickOS Unikernel
• ClickOS Unikernel combines:
– Click modular router
• a software architecture to build flexible and configurable routers
– MiniOS
• a minimalistic Unikernel OS available with the Xen sources
• ClickOS VMs
– Are small: ~6MB
– Boot quickly: ~ few ms
– Add little delay: ~45µs
– Support ~10Gb/s throughput for almost all packet sizes
09/02/2018 CNIT 41
Unikernels (ClickOS) memory footprint and boot time
VM configuration: MiniOS, 1 VCPU, 8MB RAM, 1 VIF
42
Boot time:
• 87.77 ms (state of the art results)
• 4 ms (recent results from Superfluidity, by redesigning the XEN toolstack)
Memory footprint:
• Hello world guest VM: 296 KB
• Ponger (ping responder) guest VM: ~700 KB
VM instantiation and boot time
typical performance (no Unikernels)
44
[Timeline: Orchestrator request: 1-2 s → VIM operations: 5-10 s →
Virtualization Platform + Guest OS (VM) boot time: ~1 s]
VM instantiation and boot time
typical performance (no Unikernels)
45
[Timeline: Orchestrator request: 1-2 s → VIM operations →
Virtualization Platform: ~1 ms (XEN Hypervisor Enhancements) →
Guest OS (VM) boot time: ~1 ms (Unikernels)]
Unikernels and Hypervisor can provide
low instantiation times for “Micro-VNF”
VM instantiation and boot time
typical performance (no Unikernels)
46
[Timeline: Orchestrator request: 1-2 s → VIM operations: can we improve VIM
performance? → Virtualization Platform: ~1 ms (XEN Hypervisor Enhancements) →
Guest OS (VM) boot time: ~1 ms (Unikernels)]
Unikernels and Hypervisor can provide
low instantiation times for “Micro-VNF”
Outline
• Superfluidity project goals and approach
• Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers)
• Unikernels orchestration over OpenStack, OpenVIM and Nomad –
Performance evaluation
• Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support
Unikernels orchestration – Live demo
• Details of OpenVIM extensions for Unikernels support (proposal for a patch…)
48
Performance analysis and Tuning of
Virtual Infrastructure Managers (VIMs) for Unikernel VNFs
• We considered 3 VIMs (OpenStack, Nomad, OpenVIM)
49
- General model of the VNF instantiation process, mapping of the
operations of the 3 VIMs in the general model
- (Quick & dirty) modifications to VIMs to instantiate Micro-VNFs
based on ClickOS Unikernel
- Performance Evaluation
Virtual Infrastructure Managers (VIMs)
We considered three VIMs :
• OpenStack Nova
– OpenStack is composed of subprojects
– Nova: orchestration and management of computing resources ---> VIM
– 1 Nova node (scheduling) + several compute nodes (which interact with the hypervisor)
– Not tied to a specific virtualization technology
• Nomad by HashiCorp
– Minimalistic cluster manager and job scheduler
– Nomad server (scheduling) + Nomad clients (interact with the hypervisor)
– Not tied to a specific virtualization technology
• OpenVIM
– NFV specific VIM, originally developed by the OpenMANO open source project, now
maintained in the context of ETSI OSM 50
Results – ClickOS instantiation times
(OpenStack, Nomad, OpenVIM)
51
[Charts: distributions of ClickOS instantiation times, in seconds, for
OpenStack Nova, Nomad, and OpenVIM]
The SUPERFLUIDITY project has received funding from the European Union’s Horizon
2020 research and innovation programme under grant agreement No.671566
(Research and Innovation Action).
The information given is the author’s view and does not necessarily represent the view
of the European Commission (EC). No liability is accepted for any use that may be
made of the information contained.
53