Microservices in Unikernels
Rean Griffith, Madhuri Yechuri
1
Agenda
Introduction - bios
Unikernel Background
Developer/DevOps care about
Metric Set 1: Application lifecycle overhead
CIO cares about
Metric Set 2: Application datacenter footprint
2
What is a Unikernel?
A single purpose (virtual) appliance (Madhavapeddy et al.)
Specialized at compile-time into a standalone kernel
A single-process, single address-space runtime environment
No fork()
No shared memory
No IPC
Smaller attack surface (potentially) 3
(Diagram: a traditional VM stack, with the Application plus fork(), shared memory, IPC, networking, a scheduler, and OS services running atop a vmm, shown beside a unikernel stack that runs only the Application, networking, and a thread scheduler atop a vmm.)
Unikernel Background
4
            Unmodified Legacy App Support             Multi-threaded App Support
OSv         Partial (glibc subset; no fork/exec)      Yes* (pthread subset)
Rumprun     Yes* (no fork/execve/sigaction/mmap)      Yes (pthread)
MirageOS    No* (no fork/execve; pending non-OCaml    Green threads (event loop) only
            language bindings)
IncludeOS   No                                        Green threads (event loop) only
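The fork/exec gaps in the table suggest a quick screening step before attempting a port. As a hedged sketch (the talk does not describe such a tool), one could scan a binary's undefined dynamic symbols with binutils `nm` and flag calls the unikernels above cannot provide; the symbol set below is illustrative, not exhaustive:

```python
"""Heuristic portability probe: list the dynamic symbols a binary
imports that the unikernels in the table do not support. Assumes GNU
binutils `nm` is installed; matching symbol names is a rough signal,
not a proof of (in)compatibility."""
import subprocess

# Syscall wrappers the table flags as unavailable in one or more unikernels.
UNSUPPORTED = {"fork", "vfork", "execve", "sigaction"}

def parse_undefined(nm_output: str) -> set:
    """Extract symbol names from `nm -D --undefined-only` output lines
    such as '                 U fork@GLIBC_2.2.5'."""
    syms = set()
    for line in nm_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-2] == "U":
            syms.add(parts[-1].split("@")[0])  # strip glibc version suffix
    return syms

def blockers(binary: str) -> set:
    out = subprocess.run(["nm", "-D", "--undefined-only", binary],
                         capture_output=True, text=True, check=True).stdout
    return parse_undefined(out) & UNSUPPORTED
```

Running `blockers("/usr/sbin/nginx")` (path hypothetical) would surface nginx's use of fork(), which is exactly what forced the single-worker configuration on OSv.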
Developer/DevOps care about
Enterprise Application Lifecycle management
Developer: Time to build app from source code, preferably unmodified
DevOps: Time to configure runtime parameters (ex: TCP port, log file
location)
DevOps: Time to deploy application
DevOps: Qualitative ease of managing and debugging long-running (weeks /
months) applications 5
App Lifecycle Experiment Environment
Machine
CPU: Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz
Memory: 4GB RAM
OS: Ubuntu 16.04 LTS
Applications
Web tier: Nginx
Application tier: Tomcat
Deployment Options (local image)
6
Metric Set 1: Application Lifecycle
            Convert Code to Image (Hours)
VM          8 (Nginx; Issues: 1, 2, 3)  /  8 (Tomcat; issues with Alpine glibc availability)
Container   0 (Nginx)  /  0 (Tomcat)
Unikernel   40 (Nginx; Issues: 1, 2)  /  4 (Tomcat)
7
Unikernel conversion - ukonvrt
Demo
8
Metric Set 1: Application Lifecycle
            Code to Image (Hours)   Start Time (Seconds)
VM          8 / 8                   66.557 / 68.964
Container   0 / 0                   1.113 / 4.1
Unikernel   40 / 4                  0.483 / 10
(Values are Nginx / Tomcat.)
9
Metric Set 1: Application Lifecycle
            Code to Image (Hours)   Start Time (Seconds)   Stop Time (Seconds)
VM          8 / 8                   66.557 / 68.964        7.478 / 5.418
Container   0 / 0                   1.113 / 4.1            0.685 / 0.016
Unikernel   40 / 4                  0.483 / 10             0.019 / 0.006
(Values are Nginx / Tomcat.)
10
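The start times in the table can be measured in several ways; one plausible harness (an assumption, since the slides do not show the measurement code) launches the deployment command and polls the service's TCP port until it accepts a connection:

```python
"""Sketch of a start-time measurement: launch the service, then poll
its TCP port until it accepts a connection. The launch command, host,
and port are placeholders supplied by the caller."""
import socket
import subprocess
import time

def time_to_ready(cmd, host, port, timeout=120.0):
    """Return (seconds from launch until (host, port) accepts, process).
    The service is left running so stop time can be measured next."""
    t0 = time.monotonic()
    proc = subprocess.Popen(cmd)
    while time.monotonic() - t0 < timeout:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return time.monotonic() - t0, proc
        except OSError:
            time.sleep(0.05)  # not listening yet; retry shortly
    proc.kill()
    raise TimeoutError(f"{cmd!r} never opened {host}:{port}")
```

For example, `time_to_ready(["docker", "run", "-p", "80:80", "nginx"], "127.0.0.1", 80)` would time a container deployment; the command shown is a placeholder, not the harness behind the numbers above.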
Metric Set 1: Application Lifecycle
            Code to Image (Hours)   Start Time (Seconds)   Stop Time (Seconds)   Debuggability
VM          8 / 8                   66.557 / 68.964        7.478 / 5.418         -
Container   0 / 0                   1.113 / 4.1            0.685 / 0.016         -
Unikernel   40 / 4                  0.483 / 10             0.019 / 0.006         -
(Values are Nginx / Tomcat; debuggability was assessed qualitatively.)
CIO cares about
Consolidation of applications on finite hardware resources
Multi-tenant security isolation amongst applications on a compute node
Multi-tenant Resource Management
Manageability, Accounting, Auditability
Infrastructure Power consumption
12
Metric Set 2: Data center footprint
            Image Size (MB)
VM          143 / 447  (Tomcat, Issue 1: Alpine musl vs glibc)
Container   182.8 / 357.5
Unikernel   27.8 / 106
(Values are Nginx / Tomcat.)
Metric Set 2: Data center footprint
            Image Size (MB)   Runtime Memory Overhead (MB)
VM          143 / 447         619 / 878     (VmSize from /proc/{vboxpid}/status, minus configured memory)
Container   182.8 / 357.5     274.4 / 210.5 (VmSize from containerd-shim /proc/{pid}/status)
Unikernel   7.8 / 106         1222 / 2056   (VmSize from /proc/{qemupid}/status, minus configured memory)
(Values are Nginx / Tomcat.)
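The VmSize-based method named in the table can be sketched as follows (Linux-only; finding the pid of the VirtualBox, containerd-shim, or QEMU process is left to the caller):

```python
"""Sketch of the memory-overhead measurement implied by the table: read
VmSize (virtual memory size, in kB) from /proc/<pid>/status for the VMM
or shim process, then subtract the memory configured for the guest."""

def vmsize_kb(pid: int) -> int:
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmSize:"):
                return int(line.split()[1])  # procfs reports this in kB
    raise KeyError(f"VmSize not found for pid {pid}")

def overhead_mb(pid: int, configured_mb: int = 0) -> float:
    """Configured guest memory (e.g. 2048 for a 2 GB OSv guest) is
    subtracted, matching the 'minus configured' note in the table."""
    return vmsize_kb(pid) / 1024.0 - configured_mb
```

Note that VmSize counts virtual address space, not resident pages, which is one reason the QEMU numbers for the unikernel row look so large.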
Metric Set 2: Data center footprint
            Image Size (MB)   Runtime Memory Overhead (MB)   Security (Tenant Isolation)
VM          143 / 447         619 / 878                      Strong
Container   182.8 / 357.5     274.4 / 210.5                  Weak
Unikernel   7.8 / 106         1222 / 2056                    Strong
(Values are Nginx / Tomcat.)
Metric Set 2: Data center footprint
            Image Size (MB)   Runtime Memory     Security (Tenant   Resource Knobs
                              Overhead (MB)      Isolation)
VM          143 / 447         619 / 878          Strong             Strong (Reservation, Limits)
Container   182.8 / 357.5     274.4 / 210.5      Weak               Moderate (Limits)
Unikernel   7.8 / 106         1222 / 2056        Strong             Moderate (knobs available, not used yet)
(Values are Nginx / Tomcat.)
Customer cares about
Application Performance
Resource Isolation
Security
Application high-availability
17
Performance Experiment Environment
Machine
Lenovo W520, CPU: Intel i7-2760QM CPU 2.40 GHz x 8 logical cores, Memory: 19.5 GB RAM
OS: Ubuntu 16.04.1 LTS (64-bit)
Deployment Options
Linux (host machine) - Ubuntu 16.04.1 LTS, Linux kernel: 4.4.0-34-generic #53-Ubuntu SMP
VM: VirtualBox (v5.1.2) - Ubuntu 16.04.1 LTS, Linux kernel: 4.4.0-34-generic #53-Ubuntu SMP, 8GB RAM, 4 vCPUs
Container: Docker (v1.12.0) - Linux kernel: 4.4.0-34-generic #53-Ubuntu SMP
Unikernel: OSv (based on git hash: f53c0c39) - v0.24-176-g2e19ba4 (Ubuntu 16.04.1 LTS, Linux kernel: 4.4.0-34-generic
#53-Ubuntu SMP), 4 vCPUs, 2GB RAM 18
Metric Set 3a: Application Performance
(nginx)
19
Metrics Set 3: Throughput Explanation
nginx-osv > nginx-linux > nginx-docker > nginx-vm
Baseline: 1 thread/client
Nginx-linux (bare metal) ~600 requests/sec
Nginx-vm slightly lower: expected because the client request needs to traverse two I/O
stacks - the hypervisor’s and the Guest OS’s
Nginx-docker is close to bare metal: expected since the only thing separating the container
from the workload generator is a network bridge
Nginx-osv slightly better than bare metal: client requests still go through the
unikernel's I/O stack, but OSv's I/O stack was designed to be lightweight and
lower-overhead, influenced by Van Jacobson's net channels
10 threads
Results improve by slightly more than 10X (mostly because of reductions in average
latency - next graph), but the ordering remains the same 20
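The measurements came from the Rain workload toolkit; as a much-simplified stand-in (not the Rain code), a closed-loop generator with the same shape of experiment (N threads, fixed duration, aggregate requests/sec) might look like:

```python
"""Toy closed-loop HTTP GET workload: each of N threads issues requests
back-to-back for a fixed duration; aggregate throughput is the total
request count divided by the duration. The URL is a placeholder."""
import threading
import time
import urllib.request

def measure_rps(url: str, threads: int = 1, duration: float = 5.0) -> float:
    counts = [0] * threads           # one counter per worker, no sharing
    stop_at = time.monotonic() + duration

    def worker(i: int) -> None:
        while time.monotonic() < stop_at:
            with urllib.request.urlopen(url) as resp:
                resp.read()          # drain the body, like a real client
            counts[i] += 1

    pool = [threading.Thread(target=worker, args=(i,)) for i in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return sum(counts) / duration
```

Calling `measure_rps("http://127.0.0.1:80/", threads=10, duration=300)` would approximate the 10-thread, 5-minute runs described above, minus Rain's warmup/rampdown handling and confidence intervals.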
Metrics Set 3: Response Time Explanation
nginx-osv > nginx-linux > nginx-docker > nginx-vm
Overall response times between 1ms and 2ms
Single thread case ~1.5ms, and 10 thread case < 1.5ms
The reduction in response time moving from 1 to 10 threads is mostly a result of
caching and multiplexing: with multiple threads, more work gets done per unit
time. While thread A is processing the results of a response, thread B, which
was waiting, can quickly be served a cached copy of the static file.
21
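A matching latency probe, again only a sketch rather than the Rain implementation, times each request individually and reports the mean in milliseconds:

```python
"""Per-request latency probe: issue n sequential GETs against a
placeholder URL and return the mean round-trip time in milliseconds."""
import time
import urllib.request

def mean_latency_ms(url: str, n: int = 100) -> float:
    total = 0.0
    for _ in range(n):
        t0 = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()              # include body transfer in the timing
        total += time.monotonic() - t0
    return total * 1000.0 / n
```

Against a local static-file server, values in the 1-2 ms range reported on the slide are plausible since there is no network hop involved.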
Metric Set 3b Application Performance
(Apache Tomcat)
22
Summary
Many microservice tools can be deployed in a unikernel
Nginx, Tomcat, JVM, Node.js, Redis, Memcached, etc. (list is growing)
Performance is comparable
Smaller “attack” surface (no extraneous services)
Lean network-stack (e.g., OSv)
Lean OS (no kernel/userspace crossings, no context switches, no heavyweight memory management, etc.)
Opportunities remain in tooling to help flesh out the workflow (planning, deployment, etc.)
23
Acknowledgements
Thank you!
OSv: Nadav Har’El
Nirmata: Jim Bugwadia
Microservices and Cloud Native Apps Meetup
Mike Larkin, Carl Waldspurger, Anne Holler
24
Links
Tools: ukonvrt, ukdctl
Rain Workload Toolkit
Nginx VirtualBox repo
Nginx OSv
OSv networking hack
Performance evaluation of OSv
25
Q & A
Madhuri
cosmokramer@gmail.com
GitHub: myechuri
Rean
rean@caa.columbia.edu
GitHub: rean
26

Editor's Notes

  • #4: Unikernels: Library Operating Systems for the Cloud - Madhavapeddy et al. (ASPLOS 2013) Note: Mention VENOM attack
  • #7: Madhuri: Add Tomcat info
  • #15: Mention ukvm and lkvm work.
  • #18: Owner: Rean Note: Refer to image size and overhead for cost estimates.
  • #19: Worker connections = #clients simultaneously served. Worker processes * worker connections = anticipated upper limit on reqs/sec. Workload version of Rain (git hash b0b29438). Workload configuration files: https://github.com/rean/rain-workload-toolkit/blob/master/config/rain.config.nginx.json (determines workload duration, warm up and warm down), https://github.com/rean/rain-workload-toolkit/blob/master/config/profiles.config.nginx.json (controls the IP address and port, number of threads, workload generator to use)
  • #20: Experiment description
    * Simple HTTP GET workload, run for 5 minutes (10 sec warmup before, 10 sec rampdown afterwards) x 5 repeats
    * Load generator and nginx instance run on the same machine so there's no network jitter; we're mainly capturing I/O stack overheads/differences
    * Results reported = average over 5 repeats; error bars are 95% confidence intervals
    Throughput results
    * 1 thread/client is the baseline case
    * Bare metal (Nginx-linux) ~600 requests/sec; Nginx-vm slightly lower (expected because the client request needs to traverse two I/O stacks: the hypervisor's and the Guest OS's); Nginx-docker is close to bare metal (expected since the only thing separating the container from the workload generator is a network bridge); Nginx-osv slightly better than bare metal (client requests still go through the unikernel's I/O stack, but OSv's I/O stack was designed to be light/lower-overhead, influenced by Van Jacobson's net channels)
    * General ordering is nginx-osv > nginx-linux > nginx-docker > nginx-vm
    * 10 threads: results get slightly more than 10X better (mostly because of reductions in average latency - next graph) but the ordering remains the same
    Response time results
    * Overall response times between 1ms and 2ms
    * Single thread case ~1.5ms, and 10 thread case < 1.5ms
    * The reduction in response time moving from 1 to 10 threads is mostly a result of caching and multiplexing. With multiple threads more work gets done per unit time: while thread A is processing the results of a response, thread B, which was waiting, can quickly be given a cached copy of the static file being served.
  • #23: Experiment description * simple HTTP GET workload, run for 5 minutes (10 sec warmup before, 10 sec rampdown afterwards) x 5 repeats * Load generator and nginx instance run on the same machine so there’s no network jitter. We’re mainly capturing I/O stack overheads/differences * Results reported = average over 5 repeats, error bars are 95% confidence intervals
  • #24: Owner: Rean Summarize: 3 perspectives on what might be important (CIO, developer, customer). Measurements.