Xen Cloud Platform
Lars Kurth
Xen Community Manager
lars.kurth@xen.org
@lars_kurth
@xen_com_mgr
A Brief History of Xen in the Cloud
Late 90s

XenoServer Project (Cambridge Univ.)
“The XenoServer project is building a public infrastructure for wide-area distributed
computing. We envisage a world in which XenoServer execution platforms will be scattered
across the globe and available for any member of the public to submit code for execution.”

Global Public Computing
“This dissertation proposes a new distributed computing paradigm, termed global public
computing, which allows any user to run any code anywhere. Such platforms price computing
resources, and ultimately charge users for resources consumed.”
Evangelos Kotsovinos, PhD dissertation, 2004
A Brief History of Xen in the Cloud
Late 90s:  XenoServer Project (Cambridge Univ.)
Nov ‘02:   Xen repository published
Oct ‘03:   Xen presented at SOSP
‘06:       Amazon EC2 and Slicehost launched
‘08:       Rackspace Cloud
‘09:       XCP announced
‘11:       XCP 1.x, Xen in Linux, Kronos, Cloud Mgmt
The Xen Hypervisor was designed for
the Cloud straight from the outset!
Xen.org
• Guardian of Xen Hypervisor and related OSS Projects
• Xen project Governance similar to Linux Kernel
• Projects
  –   Xen Hypervisor (led by Citrix)
  –   Xen Cloud Platform aka XCP (led by Citrix)
  –   Xen ARM (led by Samsung)
  –   PVOPS : Xen components and support in Linux Kernel (led by Oracle)
Community & Ecosystem Map
xen.org/community/projects

[Ecosystem map: Xen Projects and XCP Projects at the centre, surrounded by Xen products,
XCP products, hosting vendors, research projects, consulting firms and consulting people]
Xen Overview
Basic Xen Concepts
[Diagram: the control domain (dom0, with dom0 kernel), one or more driver/stub/service
domains and guest VMs (VM0…VMn, each with guest OS and apps) run on the Xen hypervisor
(scheduler, MMU), which runs on the host hardware (I/O, memory, CPUs); dom0 is managed
via the XL or XM (deprecated) toolstack]

Control Domain aka Dom0
• Dom0 kernel with drivers
• Xen management toolstack
• Trusted computing base

Guest Domains
• Your apps
• E.g. your cloud management stack

Driver/Stub/Service Domain(s)
• A “driver, device model or control service in a box”
• De-privileged and isolated
• Lifetime: start, stop, kill
PV Domains & Driver Domains
[Diagram: the control domain (dom0) runs hardware drivers and PV back ends; the guest runs
apps and PV front ends; a driver domain (e.g. for disk or network) runs a hardware driver and
a PV back end on a dom0 kernel or MiniOS; all run on the Xen hypervisor and host hardware]

Linux PV guests have limitations
• Limited set of virtual hardware

Advantages
• Fast
• Works on any system (even without virt extensions)

Driver Domains
• Security
• Isolation
• Reliability and robustness
HVM & Stub Domains
[Diagram: dom0 runs a device model (IO emulation) for one HVM guest; a stub domain (Mini OS
running the device model) serves another; IO events and VMEXITs are routed through the Xen
hypervisor on the host hardware]

Disadvantages
• Slower than PV due to emulation (mainly I/O devices)

Advantages
• Install the same way as native Linux

Stub Domains
• Security
• Isolation
• Reliability and robustness
PV on HVM
• A mixture of PV and HVM
• Linux enables as many PV interfaces as possible
• This has advantages
  – install the same way as native
  – PC-like hardware
  – access to fast PV devices
  – exploit nested paging
  – good performance trade-offs
• Drivers in Linux 3.x

                                 HVM        PV on HVM   PV
Boot sequence                    Emulated   Emulated    PV
Memory                           HW         HW          PV
Interrupts, timers & spinlocks   Emulated   PV*         PV
Disk & network                   Emulated   PV          PV
Privileged operations            HW         HW          PV
*) Emulated for Windows
Xen and the Linux Kernel
Xen was initially a university research project
Invasive changes to the kernel were needed to run Linux as a PV guest
Even more changes were needed to run Linux as dom0
Xen and the Linux Kernel
Xen support in the Linux kernel was not upstream
Great maintenance effort for distributions
Risk of distributions dropping Xen support
Xen harder to use
Current State
PVOPS Project
Xen Domain 0 support is in Linux 3.0+ (functional but not yet fully optimized)
Ongoing work to round out the feature set in Linux 3.2+
XCP Project
XCP
• Complete vertical stack for server virtualization
• Distributed as a closed appliance (ISO) with CentOS 5.5 Dom0, misc DomU’s, network &
  storage support and the Xen API
• Open source distribution of Citrix XenServer
XCP Overview
• Open source version of Citrix XenServer
  – wiki.xen.org/wiki/XCP/XenServer_Feature_Matrix
• Enterprise-ready server virtualization and cloud platform
  – Extends Xen beyond one physical machine
  – Lots of additional functionality compared to Xen alone
• Built-in support and templates for Windows and Linux guests
• Datacenter and cloud-ready management API
  – XenAPI (XAPI) is fully open source
  – CloudStack and OpenStack integration
• Open vSwitch support built-in
Project “Kronos”: XAPI on Linux
• Make the XAPI toolstack independent of CentOS 5.5
• Extend the delivery model
  – Deliver Xen, XAPI and everything in between (storage manager, network
    support, OCaml libs, etc.) via your favorite Linux distro
        “apt-get install xcp-xapi” or “yum install xcp-xapi”

• Debian
• Next: Ubuntu 12.04 LTS
• Later: other major Linux distros (Fedora, CentOS, etc.)
   – Volunteers are welcome!
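
Once xcp-xapi is installed from the distribution, the same XenAPI endpoint is available as on
an XCP appliance. A minimal sanity-check sketch, assuming the XenAPI Python bindings are
present on the host; the login credentials are placeholders:

    #!/usr/bin/env python
    # Hypothetical sanity check on a host running the distro-packaged xcp-xapi.
    # Assumes the XenAPI Python bindings are installed; credentials are placeholders.
    import XenAPI

    def main():
        # xapi_local() talks to the local xapi daemon over its Unix socket,
        # so this can run directly on the host that runs xcp-xapi.
        session = XenAPI.xapi_local()
        session.xenapi.login_with_password("root", "")
        try:
            for host in session.xenapi.host.get_all():
                print("host:", session.xenapi.host.get_hostname(host))
        finally:
            session.xenapi.session.logout()

    if __name__ == "__main__":
        main()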
Xen vs. XCP vs. XAPI on Linux

                            Xen                                 XCP (up to 1.1)                    XAPI on Linux
Hypervisor                  latest                              lagging                            Linux distro
Dom0 OS                     CentOS, Debian, Fedora, NetBSD,     CentOS 5.5                         Debian, Ubuntu, …
                            OpenSuse, RHEL 5.x, Solaris 11, …
Dom0                        32 and 64 bits                      32 bits                            32 and 64 bits
Linux 3 PVOPS Dom0          Yes                                 No                                 Yes
Toolstack                   XM (deprecated), XL or libvirt      XAPI + XE (lots of functionality   Same as XCP
                                                                additional to Xen)
Storage, network, drivers   Build and get them yourself         Integrated with Open vSwitch,      Get them yourself
                                                                multiple storage types & drivers
Configurations              Everything                          Constrained by XAPI                Same as XCP
Usage model                 Do it yourself                      Shrink-wrapped and tested          Do it yourself
Distribution                Source or via Linux/Unix            ISO                                Via host Linux distribution
                            distributions
XCP/XAPI Vision & Next Steps
• XCP & XAPI for Linux are the configuration of choice for clouds
  – Optimized for cloud use-cases
  – Optimized for usage patterns in cloud projects
  – XAPI toolstack is more easily consumable
• We are getting there by …
  – Building XenServer from XCP (almost there)
  – Tracking the unstable Xen hypervisor and Linux kernels aggressively (almost there)
  – Delivering into Linux distributions for more flexibility (almost there)
  – Exploiting advanced Xen security features
  – A fully open development model (build & test capability)
XCP 1.5 (soon)
• Architectural improvements: Xen 4.1, GPT support, smaller Dom0
• GPU pass-through: for VMs serving high-end graphics
• Performance and scalability:
  – 1 TB memory/host
  – 16 VCPUs/VM, 128 GB memory/VM
• Networking: Open vSwitch (default), active-backup NIC bonding
• Virtual appliances: multi-VM and boot-sequenced, OVF support
• More guest OS templates
XAPI Overview
XAPI: What is it?
• XAPI is the backbone of XCP
  – Provides the glue between all components
  – Is the backend for all management applications
• Call it XAPI or XenAPI
• It’s an XML-RPC style API, served via HTTPS
  – Provided by a service on every XCP dom0 host
  – Designed to be highly programmable (see the sketch below)
  – API bindings for many languages: .NET, Java, C, PowerShell, Python
• XAPI is extensible via plugins
  – E.g. used by OpenStack
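
To make the programmability point concrete, here is a minimal sketch using the XenAPI Python
bindings. The host URL and credentials are placeholders and error handling is omitted; treat it
as an illustration rather than a complete client.

    #!/usr/bin/env python
    # Minimal XenAPI (XAPI) client sketch using the Python bindings.
    # The host URL and credentials are placeholders.
    import XenAPI

    session = XenAPI.Session("https://xcp-host.example.com")
    session.xenapi.login_with_password("root", "secret")
    try:
        # Every XAPI class (VM, host, SR, network, ...) is exposed as session.xenapi.<Class>.
        for vm in session.xenapi.VM.get_all():
            record = session.xenapi.VM.get_record(vm)
            # Skip templates and the control domain; print real guests only.
            if not record["is_a_template"] and not record["is_control_domain"]:
                print(record["name_label"], record["power_state"])
    finally:
        session.xenapi.session.logout()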
XAPI from 30000 Feet
       xen.org/files/XenCloud/ocamldoc/apidoc
[Class diagram: the main XAPI classes and their relationships — user, session, host, host_cpu,
host_metrics, pool, task, event, VM, VM_metrics, VM_guest_metrics, console, VBD, VBD_metrics,
crashdump, VDI, SR, SM, PBD (storage) and network, VIF, PIF, PIF_metrics (networking)]
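
As a hedged illustration of how these classes hang together, the sketch below walks from a VM
reference down to its storage repository. It assumes an authenticated `session` as in the
previous example; the helper name is made up for this document.

    # Sketch: walking the XAPI object model shown above, from a VM down to its storage.
    # Assumes an authenticated `session` (see the previous example); names are illustrative.
    def describe_vm_storage(session, vm_ref):
        name = session.xenapi.VM.get_name_label(vm_ref)
        for vbd in session.xenapi.VM.get_VBDs(vm_ref):        # VM  -> VBD
            vdi = session.xenapi.VBD.get_VDI(vbd)             # VBD -> VDI
            if vdi == "OpaqueRef:NULL":                       # e.g. an empty virtual CD drive
                continue
            sr = session.xenapi.VDI.get_SR(vdi)               # VDI -> SR
            print(name,
                  "disk:", session.xenapi.VDI.get_name_label(vdi),
                  "on SR:", session.xenapi.SR.get_name_label(sr))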
XAPI Functionality Overview
•   VM lifecycle: live snapshots, checkpoint, migration
•   Resource pools: live migration, auto configuration, disaster recovery
•   Flexible storage and networking
•   Event tracking: progress, notification
•   Upgrade and patching capabilities
•   Real-time performance monitoring and alerting

• Full list: wiki.xen.org/wiki/XCP/XenServer_Feature_Matrix
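
One lifecycle call from the list above, sketched with the Python bindings: taking a live snapshot
of a VM found by name. The VM and snapshot names are placeholders and the `session` comes from
the earlier example.

    # Sketch: a live snapshot via XAPI. Assumes an authenticated `session`;
    # the VM and snapshot names are placeholders.
    def snapshot_by_name(session, vm_name, snapshot_name):
        refs = session.xenapi.VM.get_by_name_label(vm_name)
        if not refs:
            raise ValueError("no VM named %r" % vm_name)
        snap = session.xenapi.VM.snapshot(refs[0], snapshot_name)
        print("created snapshot", session.xenapi.VM.get_uuid(snap))
        return snap

    # Usage (placeholder names):
    # snapshot_by_name(session, "web-frontend-01", "before-upgrade")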
Open vSwitch
• Software switch, similar to:
  – VMware vNetwork Distributed Switch
  – Cisco Nexus 1000V
• Distribution agnostic; plugs right into the Linux kernel.
• Reuses existing Linux kernel networking subsystems.
• Backwards-compatible with traditional userspace tools.
• Free and open source: http://openvswitch.org/
Why use Open vSwitch with Cloud?
• Automated control: OpenFlow
• Multi-tenancy
• Monitoring and QoS
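
The QoS point is also reachable through XAPI: the VIF class carries qos_algorithm_type and
qos_algorithm_params fields. A hedged sketch of per-VIF rate limiting, assuming an authenticated
`session` from the earlier examples; the rate value and VIF lookup are illustrative, and the
change may only take effect when the VIF is (re)plugged.

    # Sketch: per-VIF rate limiting through XAPI (one way XCP exposes network QoS).
    # Assumes an authenticated `session`; the rate and VIF lookup are illustrative.
    def limit_vif(session, vif_ref, kbps):
        session.xenapi.VIF.set_qos_algorithm_type(vif_ref, "ratelimit")
        session.xenapi.VIF.set_qos_algorithm_params(vif_ref, {"kbps": str(kbps)})

    # e.g. cap the first VIF of a VM at roughly 10 Mbit/s:
    # vif = session.xenapi.VM.get_VIFs(vm_ref)[0]
    # limit_vif(session, vif, 10000)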
XAPI Management Options
• XAPI frontend command line tool: XE (tab-completable)
• Desktop GUIs
  – Citrix XenCenter (Windows-only)
  – OpenXenManager (open source, cross-platform XenCenter clone)
• Web interfaces
  – Xen VNC Proxy (XVP)
    • lightweight VM console only
    • user access control to VMs (multi-tenancy)
  – XenWebManager (web-based clone of OpenXenManager)
• XCP ecosystem:
  – xen.org/community/vendors/XCPProjectsPage.html
  – xen.org/community/vendors/XCPProductsPage.html
OpenXenManager
Xen VNC Proxy (XVP)
XCP and Cloud Orchestration Stacks
Cloud VM vs. Cloud Package(s) in Dom0
Cloud VM (DomU)
  Pros: isolation of the cloud VM; security properties; pre-packaged as an appliance
  Cons: slightly more complex; less flexible

Cloud Package(s) in Dom0
  Pros: simple install; flexibility; simpler overall
  Cons: less isolation; the cloud node is a potential entry point to compromise Dom0
Xen, Security, QoS and the Cloud

“Security and QoS/Reliability are amongst
 the top 3 blockers for cloud adoption”
 www.colt.net/cio-research
Security and the Next Wave of Virtualization

• Security is a key requirement for the Cloud
• Security is the primary goal of virtualization on the client
  – Desktops, laptops, tablets & smart phones

• Maintaining isolation between VMs is critical
  – Spatial and Temporal isolation
  – Run multiple VMs with policy controlled information flow
     • E.g. Personal VM; Corporate VM; VM for web browsing; VM for banking
Architecture Considerations
Type 1: Bare-metal Hypervisor
A pure hypervisor that runs directly on the hardware and hosts guest OS’s.
[Diagram: guest VMs (VM0…VMn, each with guest OS and apps) run on the hypervisor
(scheduler, MMU, device drivers/models), which runs directly on the host hardware
(I/O, memory, CPUs)]
Provides partition isolation, reliability and higher security.

Type 2: OS ‘Hosted’
A hypervisor that runs within a host OS and hosts guest OS’s inside of it, using the host OS
services to provide the virtual environment.
[Diagram: guest VMs run on a user-level VMM with device models, backed by a ring-0 VM
monitor “kernel” and the host OS device drivers, on the host hardware]
Low cost, no additional drivers; ease of use & installation.
Xen: Type 1 with a Twist
[Diagram: the control domain (dom0 — Linux, BSD, etc. — with device models and drivers) and
guest VMs (VM0…VMn) run on the Xen hypervisor (scheduler, MMU, XSM), on the host hardware]

Thin hypervisor
• Functionality moved to Dom0

Using Linux PVOPS
• Take full advantage of PV
• PV on HVM
• No additional device drivers (Linux 3.x dom0)

In other words
• Low cost (drivers)
• Ease of use & installation
• Isolation & security
Xen Security & Robustness Advantages
• Even without advanced security features
  – Well-defined trusted computing base (much smaller than for a type-2 hypervisor)
  – No extra services in the hypervisor layer
• More robustness: a mature, tried & tested architecture
• Xen Security Modules (XSM)
  – Developed and contributed to Xen by the NSA
  – A generalized security framework for Xen
  – The Xen equivalent of SELinux
Advanced Security: Disaggregation
• Split the control domain into driver, stub and service domains
  – Each contains a specific set of control logic
  – See: “Breaking up is hard to do” @ Xen Papers
• Unique benefit of the Xen architecture
  – Security: minimum privilege; narrow interfaces
  – Performance: lightweight, e.g. Mini-OS directly on the hypervisor
  – Robustness: ability to safely restart parts of the system
  – Scalability: a more distributed system (less reliant on Dom0)
Example: Network Driver Domain for HA
• Detect failure, e.g.
  – Illegal access
  – Timeout
• Kill domain, restart
  – E.g. just a 275 ms outage from a failed Ethernet driver
• Auto-restarts to enhance security
[Chart: network throughput over time (s), dipping only briefly while the failed network driver
domain is killed and restarted]
Qubes OS / XenClient XT
• First products configured to take advantage of the security
  benefits of Xen’s architecture
• Isolated Driver Domains
• Virtual hardware Emulation Domains
• Service VMs (global and per-guest)
• Xen Security Modules
Advanced XenClient Architecture
[Diagram: per-host/device service VMs (control domain with management, network isolation
domain) and per-guest service VMs (VPN isolation, device emulation) sit alongside the user VMs,
each with its own policy granularity; everything runs on the Xen hypervisor with Xen Security
Modules, on Intel vPro hardware (VT-d, VT-x, TXT, AES-NI)]
BUT…
• Today, XCP and commercial Xen-based server products
  – Do not make use of XSM
  – Do not make use of advanced security features (disaggregation)
• Most of these features are poorly documented on the xen wiki
• In XCP, work has started to add these features
  – Various articles on how this may be done are on the xen wiki
  – Hopefully more information soon
• Commitment to improving docs for security, reliability & tuning
Summary: Why Xen?
• Designed for the Cloud: many advantages for cloud use!
  – Resilience, robustness & scalability
  – Security: small attack surface, isolation & advanced security features
• Widely used by cloud providers
• XCP & XAPI
  – Ready for use with cloud orchestration stacks
  – XCP and XAPI on Linux: flexibility and choice
  – Lots of additional improvements for the cloud coming in 2012
• Flexibility and choice of usage models
  – Also one of the challenges for Xen
• Catching up on “ease of deployment and getting started”
• Open source with a large community and ecosystem
Resources
Xen Resources
• IRC: ##xen @ FREENODE
• Mailing List: xen-users & xen-api
• Wiki: wiki.xen.org
  – Beginners & User Categories
• Excellent XCP Tutorials
  – A day's worth of material @ xen.org/community/xenday11
How to Contribute
• Same process as for Linux Kernel
  – Same license: GPLv2
  – Same roles: Developers, Maintainers, Committers
  – Contributions by patches + sign-off
    (Developer Certificate of Origin)
  – Details @ xen.org/projects/governance.html
Shameless Marketing
Vendors in the Xen community are hiring!
Vendors in the Xen community are hiring!
Vendors in the Xen community are hiring!


xen.org/community/jobs.html
Questions …

Editor's Notes

  • #3: XenoServer: enablers as well as the concept
  • #4: Note: 10th birthday of the project is coming up
  • #5: Hold this thought! We will come back to this later….!
  • #9: Key notes: just a subset of vendors, projects, etc. that build, use or provide services on top of Xen
  • #10: PVOPS is the kernel infrastructure to run Linux on top of a PV hypervisor
  • #11: Dom0: In a typical Xen set-up, Dom0 contains a smorgasbord of functionality: system boot; device emulation & multiplexing; administrative toolstack; drivers (e.g. storage & network); etc. Large TCB – but smaller than in a Type 2 hypervisor. Driver/stub/service domains: also known as disaggregation.
  • #13: Device model emulated in QEMU. Models for newer devices are much faster, but for now PV is even faster.
  • #14: Automatic performance: PV on HVM guests are very close to PV guests in benchmarks that favour PV MMUs; PV on HVM guests are far ahead of PV guests in benchmarks that favour nested paging.
  • #17: Where are we? 1) Linux 3 contains everything needed to run Xen on a vanilla kernel, both as Dom0 and DomU. 2) That's of course a little bit old hat now. 3) But it is worth mentioning that it only took 5 years to upstream PVOPS into the kernel.
  • #24: * Host Architectural Improvements. XCP 1.5 now runs on the Xen 4.1 hypervisor, provides GPT (new partition table type) support and a smaller, more scalable Dom0. * GPU Pass-Through. Enables a physical GPU to be assigned to a VM providing high-end graphics. * Increased Performance and Scale. Supported limits have been increased to 1 TB memory for XCP hosts, and up to 16 virtual processors and 128 GB virtual memory for VMs. Improved XCP Tools with smaller footprint. * Networking Improvements. Open vSwitch is now the default networking stack in XCP 1.5 and now provides formal support for Active-Backup NIC bonding. * Enhanced Guest OS Support. Support for Ubuntu 10.04 (32/64-bit). Updated support for Debian Squeeze 6.0 64-bit, Oracle Enterprise Linux 6.0 (32/64-bit) and SLES 10 SP4 (32/64-bit). Experimental VM templates for CentOS 6.0 (32/64-bit), Ubuntu 10.10 (32/64-bit) and Solaris 10. * Virtual Appliance Support (vApp). Ability to create multi-VM and boot-sequenced virtual appliances (vApps) that integrate with Integrated Site Recovery and High Availability. vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard.
  • #26: Note: not exactly 1:1 with XE. Comparisons to other APIs in the virtualization space (source: Steven Maresca): generally speaking XAPI is well-designed and well-executed; XAPI makes it pleasantly easy to achieve quick productivity; XAPI is set up to work with frameworks such as CloudStack and OpenStack. Some SOAPy lovers of big XML envelopes and WSDLs scoff at XML-RPC, but it certainly gets the job done with few complaints. Example code: http://bazaar.launchpad.net/~nova-core/nova/github/files/head:/plugins/xenserver/xenapi/etc/xapi.d/plugins/ and https://github.com/xen-org/xen-api/blob/master/scripts/examples/python/XenAPIPlugin.py
  • #27: All elements on the diagram just shown are called classes; the diagram omits another twenty or more minor classes. Visit the SDK documentation for documentation of all classes. Classes are the objects XCP knows about and exposes through API bindings. Each class has attributes called fields and functions called messages; we'll stick with 'attributes' and 'functions'. Classes on the diagram: VM: a virtual machine. Host: a physical XCP host system. PBD: physical block device through which an SR is accessed. SR: storage repository. VDI: virtual disk image. VBD: virtual block device. Network: a virtual network. VIF: a virtual network interface. PIF: a physical network interface.
  • #28: VM lifecycle (start, stop, resume) — automation is the key point. Live snapshots: takes a snapshot of a live VM (e.g. for disaster recovery or migration). Resource pools (multiple physical machines): XS & XCP only. Live migration: the VM is backed up while running, onto shared storage (e.g. NFS) in a pool, and when completed restarted elsewhere in that pool. Disaster recovery: you can find lots of information on how this works at http://support.citrix.com/servlet/KbServlet/download/17141-102-19301/XenServer_Pool_Replication_-_Disaster_Recovery.pdf (the key point is that I can back up the metadata for the entire VM). Flexible storage: XAPI does hide details for storage and networking, i.e. I apply generic commands (NFS, NETAPP, iSCSI … once it's created they all appear the same) from XAPI; I only need to know the storage type when I create storage and network objects (OOL). Upgrading a host to a later version of XCP (all my configs and VMs stay the same) … and patching (broken now – bug; can apply security patches to XCP/XS or Dom0 but not DomU).
  • #30: Automated control/OpenFlow: e.g. firewall rules, access control rules (does help with things like multi-tenancy – programs the visibility of a switch), port locking (security mechanism – a VM can only use a particular MAC address; if you tamper with it, it can't connect to the switch). Multi-tenancy: separate virtual networks for different cloud customers. Monitoring: of course, for charging per use. QoS: rate limiting (customer pays for a specific amount of bandwidth).
  • #37: Earlier this year, we released Xen 4.1. I just put up the feature list, but I won't go through it in detail. I did want to point out that the focus of this release was on support for large systems and easier management of large systems with CPU pools, as well as on security. And that is starting a trend to optimize the hypervisor for cloud use cases.
  • #38: Detailed list. General: documentation improvements (e.g. man pages); lots of bug fixing of course. Tools: xl is now the default toolstack and xend is formally deprecated; lots of xl improvements; we should highlight the xend deprecation (not effectively maintained since 2008); Remus compression (compression of the memory image improves performance); prefer oxenstored when available (improves scalability!); support for upstream qemu, nearing feature parity (non-default still, but we want people to be testing it); added libvchan to xen mainline (cross-domain comms). Xen: improvements to paging and sharing, enabling higher VM density for VDI use-cases; EFI (Extensible Firmware Interface) support for the hypervisor (i.e. if I have a machine that has EFI, I can use Xen on it); support up to 256 host CPUs for the 64-bit hypervisor (from 128); support dom0 kernels compressed with xz; per-device interrupt remapping (increases scalability); support for pvhvm guest direct pirq injection (performance improvement for PCI passthrough for Linux guests); Intel SMEP (Supervisor Mode Execution Protection) support; mem event stuff (allows externally observing what guests are up to and can be used for external virus checking – not sure what the right terminology is); multiple PCI segment support; added xsave support (floating point); lots of XSM/Flask fixes (security); AMD SVM "DecodeAssist" support (AMD CPU feature that avoids emulation and increases performance). Removed functionality: ACM (alternative XSM to Flask) was removed (unmaintained); removed vnet (unmaintained). Xen development support: can build with clang; added "make deb" target; lots of xentrace improvements; updated OCaml bindings and made them usable by xapi (which previously had its own fork of the same codebase).
  • #40: Just one example of a survey, there are many more: http://www.colt.net/cio-research/z2-cloud-2.html. According to many surveys, security is actually the main reason which makes or breaks cloud adoption. Better security means more adoption; concerns about security mean slowed adoption.
  • #41: So for a hypervisor such as Xen, which is powering 80% of the public cloud – Rackspace, AWS and many other VPS providers use Xen – and with cloud computing becoming mainstream, furthering security is really important. One of the key things there is isolation between VMs, but also simplicity, as I pointed out earlier. But there are also a number of advanced features in Xen which are not that widely known. So I wanted to give you a short overview of two of them.
  • #42: At this point I want to make a quick detour into the different hypervisor architectures from a viewpoint of security. Let's look at the type 1 hypervisor: basically a very simple architecture, where the hypervisor replaces the kernel. The architecture is significantly simpler than a type 2 hypervisor, because it does not need to provide rich "process" semantics, like "user", filesystems, etc. BUT: the trade-off is that all the device drivers need to be rewritten for each hardware platform. Type 2 is hosted: the hypervisor is just a driver that typically works with a user-level monitor. HW access is intercepted by the ring-0 VM monitor and passed to the user-level virtual monitor, which passes requests to the kernel. Re-use of device drivers is traded off against security and a large trusted computing base (green).
• #43: Dom0: in a typical Xen set-up, Dom0 contains a smorgasbord of functionality: system boot, device emulation & multiplexing, the administrative toolstack, drivers (e.g. storage & network), etc. That is a large TCB, but still smaller than in a Type 2 hypervisor.
  • #44: Ask some questions
• #45: Example: XOAR. Self-destructing VMs (destroyed after initialization): PCIBack, which virtualizes access to the PCI bus configuration. Restartable VMs (periodic restarts): NetBack (the physical network driver exposed to guests) is restarted on a timer, and the Builder (which instantiates other VMs) is restarted on each request. A rough sketch of the restart-on-timer idea follows below.
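Purely to illustrate the "restartable service VM" idea, here is a toy Python supervisor that periodically tears down and re-creates a driver domain via xl. This is not how XOAR itself is implemented; the domain name, config path and interval are made up.

```python
# Toy illustration of XOAR-style restartable service VMs: periodically destroy
# and re-create a driver domain so that a compromise cannot persist.
# Not the real XOAR implementation; names, paths and the interval are invented.
import subprocess
import time

DRIVER_DOMAIN  = "netback-svm"                # hypothetical service VM name
DOMAIN_CONFIG  = "/etc/xen/netback-svm.cfg"   # hypothetical xl config file
RESTART_PERIOD = 10 * 60                      # seconds between restarts

def restart_driver_domain():
    subprocess.call(["xl", "destroy", DRIVER_DOMAIN])       # tear down the old instance (ignore errors)
    subprocess.check_call(["xl", "create", DOMAIN_CONFIG])  # start a fresh one from its config

if __name__ == "__main__":
    while True:
        restart_driver_domain()
        time.sleep(RESTART_PERIOD)
```

A real implementation would have to coordinate the restart so guests do not lose connectivity or device state; this toy loop skips all of that.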
• #47: This is not pie in the sky: these features are already in use for desktop virtualization. I wanted to point out Qubes OS, an open source project in its second beta, and a commercial Citrix product that makes use of these features. In the last six months there has been a lot of talk in the community about how these features can be adopted for server virtualization, and I expect we will see them adopted in XCP and commercial Xen products. But of course this is not easy: there are challenges around configuration and usability.
• #49: What about domain 0 itself? Once we've disaggregated domain 0, what will be left? The answer is: very little! We'll still have the logic for booting the host, for starting and stopping VMs, and for deciding which VM should control which piece of hardware... but that's about it. At that point domain 0 could be considered a small "embedded" system, like a home NAT box or router.
• #50: PVOPS (paravirt_ops) is the Linux kernel infrastructure that allows the same kernel to run paravirtualized on top of a hypervisor such as Xen.
• #51: Let's have a quick look at what's new in kernel 3.1: mainly usability improvements. The most significant addition is the PCI backend (pciback) module, which enables the kernel to pass PCI/PCIe devices through to Xen guests; a sketch of what that looks like from the toolstack side follows after this list.
For 3.2, see http://www.gossamer-threads.com/lists/xen/users/229720. Quite a lot of features are planned for Linux 3.2 and beyond; I will just explain a few, and for the rest, do talk to me afterwards.
3.2, feature-discard: tells the hardware that disk sectors are unused. This is good for solid state drives (it improves wear-levelling) and also for filers doing thin provisioning (the space can be recovered).
3.2, hwclock -s: makes time handling (i.e. wallclock/RTC time) work as expected for the dom0 administrator.
3.2, feature-barrier: required for correctness by certain guests (SLES10, I think). Various filesystem implementations rely on barriers and flushes as a way to say "write the metadata NOW" before the normal data is written, so that after a power failure the metadata can be used to recover; without it there could be corruption, for example if the machine loses power right as it is writing data out.
3.3, PV spinlocks: better performance under contention.
3.3, ACPI S3: suspend to RAM, which is good for Xen-on-laptop use cases.
The key point is that all of this is about optimization and rounding out features.
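As a hedged illustration of the pciback point above, here is what PCI passthrough can look like from the toolstack side. The device address and guest name are made up, and xl subcommand availability differs between Xen releases, so check `xl help` on your system; treat this as a sketch only.

```python
# Sketch: hand a PCI device to pciback and hot-plug it into a running guest.
# BDF and guest name are invented examples; subcommands vary by Xen release.
import subprocess

BDF   = "0000:03:00.0"   # hypothetical PCI device (segment:bus:device.function)
GUEST = "demo-pv"        # hypothetical running guest

subprocess.check_call(["xl", "pci-assignable-add", BDF])   # make the device assignable
subprocess.check_call(["xl", "pci-attach", GUEST, BDF])    # hot-plug it into the guest
# A guest config can also claim the device statically, e.g.:
#   pci = ['0000:03:00.0']
```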
• #52: See http://www.gossamer-threads.com/lists/xen/users/229706.
3.3, PV spinlocks: PV spinlocks improve performance under contention by allowing lock takers to sleep/yield instead of spinning and wasting scheduling quanta. The current PV spinlock implementation replaces the entire spinlock algorithm and data structure; the new work simply adds a PV slow path while keeping the core "ticket lock" algorithm, which is beneficial because more code (and hence performance analysis, etc.) is shared with the native case. The win shows up during high contention, i.e. when guests use more VCPUs than there are physical CPUs (think four guests, each with 2 VCPUs, on a box with only four physical CPUs): the guests can be scheduled appropriately instead of spinning uselessly and wasting CPU resources. A toy model of the idea is sketched after this list.
3.3, ACPI S3: suspend to RAM, useful for Xen-on-laptop scenarios; this is taking longer than expected, but a reworked patch set should be posted shortly.
3.3, 3D graphics: the RAM used by graphics cards comes with various constraints that we need to work with to make those cards function correctly; the patches have been picked up and reworked further and should be in for 3.3.
ACPI cpufreq: power management, useful in servers etc., and it probably also improves performance in some cases by allowing access to faster states; these patches are still a bit behind and need further rework before they can be posted.
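To make the "ticket lock plus PV slow path" description above more tangible, here is a toy Python model. Real PV spinlocks live in the Linux kernel and use hypercalls to block and kick VCPUs; here a threading.Event stands in for the block/kick mechanism, so this only illustrates the algorithm, not the kernel code.

```python
# Toy model of a ticket lock with a paravirtual slow path: spin briefly on
# your ticket, then block until the unlocker "kicks" you. threading.Event
# stands in for the hypervisor block/kick; this is an illustration only.
import itertools
import threading

SPIN_LIMIT = 1000   # fast-path spins before taking the slow path

class PvTicketLock:
    def __init__(self):
        self._tickets = itertools.count()   # "tail": next ticket to hand out
        self._now_serving = 0               # "head": ticket currently allowed in
        self._meta = threading.Lock()       # protects the waiter table
        self._waiters = {}                  # ticket -> Event ("kick" channel)

    def acquire(self):
        ticket = next(self._tickets)
        for _ in range(SPIN_LIMIT):         # fast path: ordinary ticket-lock spin
            if self._now_serving == ticket:
                return
        with self._meta:                    # slow path: arrange to be kicked
            if self._now_serving == ticket:
                return
            kick = threading.Event()
            self._waiters[ticket] = kick
        kick.wait()                         # "the vCPU blocks in the hypervisor"

    def release(self):
        with self._meta:
            self._now_serving += 1
            kick = self._waiters.pop(self._now_serving, None)
        if kick:
            kick.set()                      # wake exactly the next ticket holder

# Quick check: four threads incrementing a shared counter under the lock.
if __name__ == "__main__":
    lock, total = PvTicketLock(), [0]
    def worker():
        for _ in range(10000):
            lock.acquire()
            total[0] += 1
            lock.release()
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(total[0])   # expect 40000
```

The point mirrors the kernel work: the common, uncontended path stays a plain ticket lock, and only heavily contended waiters fall back to blocking instead of burning scheduling quanta.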
• #53: So what does all that mean? Firstly, you can just download the latest Linux distro, get the Xen package and get started; a quick sanity check is sketched below. Of course that requires a distro with the Linux 3 Xen support in its kernel: Debian Squeeze, Fedora 16, Ubuntu 10.10.
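Once the packages are installed and you have rebooted into the Xen-enabled entry, a quick sanity check from dom0 might look like the following sketch; the exact field names in the xl info output can differ slightly between versions.

```python
# Sketch: sanity-check a fresh Xen install from dom0 using the xl toolstack.
import subprocess

info = subprocess.check_output(["xl", "info"]).decode()
for line in info.splitlines():
    # Print a few indicative fields; exact field names vary by Xen version.
    if line.startswith(("xen_version", "xen_caps", "free_memory")):
        print(line)

subprocess.check_call(["xl", "list"])   # Domain-0 should be the only entry so far
```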
• #54: Why should you help? There are lots of hardware variants and our developers can't possibly test all of them. So if you do want to help: if you see any issues, and that may mean bugs, unexpected behaviour or unexpected performance, let us know so that we can fix it.
• #56: The key points here are that Xen ARM has a long history, uses paravirtualization, and supports a wide range of ARM processor features. As Kiko pointed out, there are quite a few challenges in the ARM space, such as complexity and Linux ports, and that also affects Xen. For example, to build Xen ARM it was necessary to modify the ARM Linux kernel, and we are facing questions such as: do we try to upstream those changes, or do we go for a clean start with newer ARM architectures?
• #57: This slide is all about pointing out that there is pull for the ARM architecture: CALXEDA | HP.
• #58: Goals: mobile, client and server for ARM; real-time capability; and making sure that we have an architecturally clean start moving forward.
• #60: Hold this thought! We will come back to this later…
• #61: Performance: similar to other hypervisors. Maturity: tried & tested; the problems that do exist are well known. Open source: a good body of knowledge and tools.