Yabusame:
  Postcopy Live migration for QEmu/KVM

* Isaku Yamahata, VALinux Systems Japan K.K. <yamahata@private.email.ne.jp>
  Takahiro Hirofuchi, AIST <t.hirofuchi@aist.go.jp>

  LinuxCon Japan, June 7th, 2012
Agenda
● Demo
● Precopy vs Postcopy
● Implementation
● Evaluation
● Future work
● Summary

Yabusame is a joint project with Takahiro Hirofuchi, AIST and Satoshi Itoh, AIST.
This work is partly supported by JST/CREST ULP and KAKENHI (23700048).
The development of Yabusame was partly funded by METI (Ministry of Economy,
Trade and Industry) and supported by NTT Communications Corporation.
Demo
Precopy vs Postcopy
Precopy live migration
1. Enable dirty page tracking
2. Copy all memory pages to the destination
3. Copy the memory pages dirtied during the previous copy again
4. Repeat step 3 until the remaining set of dirty pages is small enough
5. Stop the VM
6. Copy the remaining memory pages and the non-memory VM state
7. Resume the VM at the destination
(A pseudo-code sketch of this loop follows.)
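The loop above can be written as pseudo-C. This is only an illustrative sketch of the precopy algorithm, not QEMU's actual migration code; every helper name (enable_dirty_tracking, send_dirty_pages, and so on) and the threshold are made up.

    /* Illustrative pseudo-C for the precopy loop; not QEMU's real code.
     * All helpers and types below are hypothetical placeholders. */
    struct vm;
    struct conn;

    void enable_dirty_tracking(struct vm *);
    void send_all_pages(struct vm *, struct conn *);
    long dirty_page_count(struct vm *);
    void send_dirty_pages(struct vm *, struct conn *);
    void stop_vm(struct vm *);
    void send_device_state(struct vm *, struct conn *);
    void resume_on_destination(struct conn *);

    enum { DIRTY_THRESHOLD = 256 };              /* "small enough" cut-off, in pages */

    void precopy_migrate(struct vm *vm, struct conn *dst)
    {
        enable_dirty_tracking(vm);               /* step 1 */
        send_all_pages(vm, dst);                 /* step 2 */

        while (dirty_page_count(vm) > DIRTY_THRESHOLD)
            send_dirty_pages(vm, dst);           /* steps 3-4: repeat until the
                                                    remaining dirty set is small */
        stop_vm(vm);                             /* step 5 */
        send_dirty_pages(vm, dst);               /* step 6: last dirty pages...      */
        send_device_state(vm, dst);              /*         ...plus non-memory state */
        resume_on_destination(dst);              /* step 7 */
    }

The key property is that steps 3-4 may run an unbounded number of rounds when the guest dirties pages faster than the link can drain them, which is exactly the convergence problem discussed later.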
Postcopy live migration
1. Stop the VM
2. Copy the non-memory VM state to the destination
3. Resume the VM at the destination
4. Copy memory pages on demand and in the background
   • Async PF can be utilized

Memory pages are copied in two ways:
● On demand (network page fault)
● In the background (precache)
(A sketch of step 4 on the destination side is shown below.)
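A matching destination-side sketch of step 4, interleaving on-demand fetches with background precaching. Again the helper names are hypothetical; this is not the Yabusame code, just the shape of the loop.

    /* Sketch of postcopy step 4 on the destination: serve faulted pages first,
     * otherwise precache the next missing page.  Hypothetical helpers. */
    struct vm;
    struct conn;

    int  all_pages_received(struct vm *);
    int  faulted_page_pending(struct vm *, unsigned long *pfn);
    unsigned long next_missing_page(struct vm *, unsigned long hint);
    void fetch_page(struct conn *, struct vm *, unsigned long pfn);
    void wake_faulting_vcpu(struct vm *, unsigned long pfn);

    void postcopy_serve_pages(struct vm *vm, struct conn *src)
    {
        unsigned long bg = 0;                     /* background prefetch cursor  */

        while (!all_pages_received(vm)) {
            unsigned long pfn;

            if (faulted_page_pending(vm, &pfn)) { /* on demand: a vCPU faulted   */
                fetch_page(src, vm, pfn);         /* on a missing page (network  */
                wake_faulting_vcpu(vm, pfn);      /* fault) -- serve it first    */
            } else {
                bg = next_missing_page(vm, bg);   /* otherwise precache the      */
                fetch_page(src, vm, bg);          /* next page in the background */
            }
        }
    }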
Asynchronous Page Fault (APF)
APF can be utilized for postcopy.
1. Guest RAM, as host pages, can be swapped out (or still reside on the source
   machine, in the case of postcopy)
2. When such a page is faulted, a worker thread starts the I/O
3. The fault is notified to the (PV) guest as an async page fault
4. The guest blocks the faulting thread and switches to another thread
5. When the I/O completes, the guest is notified
6. The previously blocked thread is unblocked

[Diagram: guest threads A and B, guest RAM, KVM, and a work queue.
 1. page fault  2. pass the page request to the work queue  3. notify APF to guest
 4. guest switches threads  5. start I/O  6. I/O completion
 7. notify I/O completion to guest  8. unblock the blocked thread]
(A host-side sketch of this flow follows.)
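In host-side terms, the flow above looks roughly like the sketch below. This is not KVM's actual APF implementation; the function names are invented, and the sketch assumes a PV guest that understands the "not present"/"ready" notifications.

    /* Sketch of host-side async page fault handling (steps 1-8 above).
     * Invented helper names; not KVM's real APF code. */
    struct vcpu;
    struct vm;

    int  page_is_present(struct vm *, unsigned long gfn);
    void resolve_fault(struct vcpu *, unsigned long gfn);
    void queue_page_io(struct vm *, unsigned long gfn);       /* work queue */
    void inject_apf_not_present(struct vcpu *, unsigned long gfn);
    void inject_apf_ready(struct vm *, unsigned long gfn);

    void handle_guest_fault(struct vcpu *vcpu, struct vm *vm, unsigned long gfn)
    {
        if (page_is_present(vm, gfn)) {
            resolve_fault(vcpu, gfn);          /* fast path: page already here    */
            return;
        }
        queue_page_io(vm, gfn);                /* 2. a worker thread starts the IO
                                                  (5. start IO)                   */
        inject_apf_not_present(vcpu, gfn);     /* 3. notify the PV guest, which
                                                  4. blocks the faulting thread
                                                     and switches to another one  */
    }

    void on_page_io_complete(struct vm *vm, unsigned long gfn)  /* 6. IO done     */
    {
        inject_apf_ready(vm, gfn);             /* 7. notify the guest, which
                                                  8. unblocks the blocked thread  */
    }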
Total migration time / downtime

Precopy: copy the VM memory before switching the execution host.
[Timeline: preparation → precopy round 1, round 2, ..., round N → stop
 → copy the remaining dirty pages and the VM state → resume (restart at the
 destination). Performance degradation during the precopy rounds is due to
 dirty page tracking. The downtime covers the stop, the final copy, and the
 resume; the total migration time covers everything from preparation to resume.]

Postcopy: copy the VM memory after switching the execution host.
[Timeline: preparation → stop → copy the VM state → resume (restart at the
 destination) → demand/pre-paging (with async PF). Performance degradation
 after the resume is due to network faults. The downtime covers only the stop,
 the VM state copy, and the resume; the total migration time extends until all
 pages have been paged in.]
Precopy vs Postcopy

                       Precopy                              Postcopy

Total migration time   (RAM size / link speed) + overhead;  (RAM size / link speed) + overhead
                       non-deterministic, depending on
                       the number of precopy rounds

Worst downtime         (VMState time) +                     VMState time,
                       (RAM size / link speed)              followed by the postcopy phase

                       The number of dirtied pages can be   Postcopy time is limited by
                       optimized by ordering the pages      (RAM size / link speed);
                       sent in the precopy phase            alleviated by prepaging + async PF
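As a rough worked example of the (RAM size / link speed) term, using numbers similar to the evaluation setup later in this talk (a roughly 6GB guest over 1Gb Ethernet); this is only illustrative arithmetic:

    \frac{\text{RAM size}}{\text{link speed}}
      = \frac{6\,\text{GB} \times 8\ \text{bit/byte}}{1\,\text{Gbit/s}}
      \approx 48\ \text{s}

For postcopy this roughly bounds the memory-transfer part of the total migration time, since each page crosses the link once; for precopy it is only the first round, and every further round adds (dirtied pages / link speed) on top.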
Precopy characteristic
● Total migration time and downtime depend on the memory dirtying speed
  ● Especially, the number of dirty pages doesn't converge when
    dirtying speed > link speed

Downtime Analysis for Pro-Active Virtual Machine Live Migration: Felix Salfner
http://www.tele-task.de/archive/video/flash/13549/
Postcopy characteristic
● Network bandwidth friendly
  ● Postcopy transfers each page only once
● Reliability
  ● The VM can be lost if a network failure occurs during migration
Postcopy is applicable for
● Planned maintenance
  ● Predictable total migration time is important
● Dynamic consolidation
  ● In the cloud use case, resources are usually over-committed
  ● If the machine load becomes high, evacuate VMs to other machines promptly
    – Precopy optimization (= CPU cycles) may make things worse
● Wide-area migration
  ● Inter-datacenter live migration
    – L2 connectivity among datacenters (L2 over L3) is becoming common
    – VM migration across DCs for disaster recovery
● LAN case
  ● Not all network bandwidth can be used for migration
  ● Network bandwidth might be reserved by QoS
Implementation
Hooking guest RAM access
● Design choices
  ● Modify the VMM: insert hooks into all accesses to guest RAM
  ● Character device driver (umem char device)
    – Async page fault support
  ● Backing store (block device or file)
  ● Swap device

                   Pros                            Cons
Modify VMM         Portability                     Impractical
Backing store      No new device driver            Difficult future improvement;
                                                   some KVM host features wouldn't work
Character device   Straightforward;                Need to fix some KVM host features
                   future improvement
Swap device        Everything is normal after      Administration;
                   migration                       difficult future improvement

(A sketch of the character-device fault hook follows.)
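To make the character-device option concrete, below is a minimal sketch of how a char device can hook guest RAM faults through vm_operations_struct::fault. This is not the actual umem driver: the struct, the bitmap, and umem_request_page() are made-up placeholders, and the fault-handler signature is the one kernels of this era (~3.x) used.

    /* Sketch only -- not the real umem driver.  The char device is mmap()ed by
     * qemu-kvm as guest RAM; a fault on a page that has not arrived yet asks
     * the daemon for it and blocks until the page is filled in. */
    #include <linux/mm.h>
    #include <linux/wait.h>
    #include <linux/bitops.h>

    struct umem_dev {                    /* hypothetical per-device state        */
        struct page **pages;             /* backing pages for guest RAM          */
        unsigned long *present;          /* bitmap: has this page arrived yet?   */
        wait_queue_head_t wq;            /* woken when a requested page arrives  */
    };

    /* Ask the userspace daemon to fetch page 'pgoff' from the source host
     * (hypothetical helper; the completion side would set the 'present' bit). */
    void umem_request_page(struct umem_dev *dev, unsigned long pgoff);

    static int umem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
        struct umem_dev *dev = vma->vm_private_data;
        unsigned long pgoff = vmf->pgoff;

        if (!test_bit(pgoff, dev->present)) {
            umem_request_page(dev, pgoff);                       /* ask for it   */
            wait_event(dev->wq, test_bit(pgoff, dev->present));  /* block until  */
        }                                                        /* it arrives   */
        vmf->page = dev->pages[pgoff];                           /* resolve the  */
        get_page(vmf->page);                                     /* fault        */
        return 0;
    }

    static const struct vm_operations_struct umem_vm_ops = {
        .fault = umem_fault,
    };

Because the block happens inside the fault handler, the faulting vCPU thread sleeps until the page arrives; the async page fault mechanism on the earlier slides is what lets the guest run something else in the meantime.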
Implementation

[Diagram: source host running qemu-kvm and a daemon; destination host running
 qemu-kvm on top of the umem character device.]
0. The destination qemu-kvm mmap()s the character device as guest RAM.
1. The guest accesses guest RAM.
2. The page fault is hooked by the character device (vma::fault).
3. The page is requested from the source; the connection used for live
   migration is reused.
4. The page contents are sent back.
5. The daemon passes the page contents down to the driver.
6. The page fault is resolved.
(A sketch of the daemon side of steps 3-5 follows.)
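A userspace-side sketch of what the daemon does for steps 3-5, assuming, hypothetically, that the driver exposes page requests via read() and accepts page contents via write(). The real request/reply format of Yabusame's daemon and driver is not shown here, and short reads/writes are glossed over.

    /* Sketch of the destination daemon loop for steps 3-5 above.
     * 'dev_fd' is the umem character device, 'mig_fd' the reused live-migration
     * connection towards the source.  Hypothetical protocol, simplified I/O. */
    #include <stdint.h>
    #include <unistd.h>

    #define GUEST_PAGE_SIZE 4096

    void serve_pages(int dev_fd, int mig_fd)
    {
        uint64_t pgoff;
        char page[GUEST_PAGE_SIZE];

        /* the driver reports which page a fault is waiting for (step 3) */
        while (read(dev_fd, &pgoff, sizeof(pgoff)) == (ssize_t)sizeof(pgoff)) {
            /* forward the request over the migration connection */
            if (write(mig_fd, &pgoff, sizeof(pgoff)) != (ssize_t)sizeof(pgoff))
                break;
            /* the source sends the page contents back (step 4) */
            if (read(mig_fd, page, sizeof(page)) != (ssize_t)sizeof(page))
                break;
            /* pass the contents down to the driver (step 5), which fills the
             * page and resolves the fault (step 6) */
            if (write(dev_fd, page, sizeof(page)) != (ssize_t)sizeof(page))
                break;
        }
    }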
Evaluation
These evaluations were done with the experimental implementation.

Dynamic consolidation
[Graphs omitted: dynamic consolidation results with the experimental implementation.]
Memory scanning with postcopy
● 6GB of guest RAM
● 4 threads, 1GB per thread
● Each thread accesses all of its pages
● Measured: the time from the first page access to the last page access
● Each thread starts right after the post-copy migration starts
● Background transfer is disabled
(An illustrative scanning program is sketched below.)

[Diagram: the guest VM after switching execution hosts; threads 1 to 4 each
 scan a 1GB region; Host OS (src) and Host OS (dst) are connected by 1Gb Ethernet.]
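The scanning workload is simple to reproduce. The sketch below is an illustrative stand-in for the benchmark, not the exact program used for these numbers: it allocates a fresh 1GB buffer per thread and touches one byte per page, whereas the real measurement scans already-populated guest memory and times each thread from its first to its last page access.

    /* Illustrative memory-scan benchmark: 4 threads, 1 GB each, touch every
     * page once, report the wall-clock time for the whole scan. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NTHREADS 4
    #define REGION   (1UL << 30)            /* 1 GB per thread */
    #define PAGE     4096

    static void *scan(void *arg)
    {
        volatile char *buf = arg;
        for (size_t off = 0; off < REGION; off += PAGE)
            (void)buf[off];                 /* touch one byte per page */
        return NULL;
    }

    int main(void)
    {
        pthread_t th[NTHREADS];
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&th[i], NULL, scan, malloc(REGION));
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(th[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("scan time: %.2f s\n", (t1.tv_sec - t0.tv_sec) +
                                      (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
    }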
Memory scan time (real)
[Graph omitted.]
Total CPU time allocated to the guest VM
[Graph omitted.]
vCPU execution efficiency is improved (cpu-time / real-time):
● APF enabled: 0.7
● APF disabled: 0.39
Analyze with SystemTap
[Two trace plots of "serving page fault" vs "vCPU execution" intervals:
 in one, the vCPU keeps executing while a page is being served;
 in the other, the vCPU can't be executed while a page is being served.]
Siege benchmark with Apache
● Host
  ● Core 2 Quad CPU Q9400 (4 cores)
  ● Memory 16GB
● Qemu/kvm
  ● vCPU = 4, kernel_irqchip=on
  ● Memory about 6GB (-m 6000)
  ● virtio
● Apache
  ● Prefork, fixed at 128 processes
  ● Data: 100KB * 10000 files = about 1GB
  ● Warmed up by siege before migration, so all data are cached in memory
● Siege
  ● HTTP load testing and benchmarking utility
  ● http://www.joedog.org/siege-home/
  ● 64 parallel connections (-c 64)
  ● Random URLs out of the 10000 URLs

[Diagram: the guest VM running Apache (Prefork, 128 processes) is migrated from
 Host(src) to Host(dst); a separate client machine runs siege against the guest;
 the links are gigabit Ethernet.]
[Siege results, graphs omitted: Precopy vs Postcopy.]
● Precopy: migrate_set_speed 1G
● Postcopy w/o background transfer: prefault forward=100
  migrate -p -n tcp:10.0.0.18:4444 100 0
Future work
● Upstream merge
  ● QEmu/KVM live-migration code needs more love
    – Code cleanup, feature negotiation, ...
  ● Investigate a FUSE version of the umem device and evaluate it
    – See if it's possible and whether its performance is acceptable
  ● Evaluation/benchmarks
● KSM and THP
● Threading
  ● Page compression (XBZRLE) is coming. Non-blocking read plus checking whether
    all data is ready is no longer practical for compressed pages.
● Mix precopy/postcopy
● Avoid memory copies
● Don't fetch pages when the whole page is being written/cleared
  ● cleancache/frontswap might be good candidates
  ● Free pages don't need to be transferred
    – Self-ballooning?
  ● Hint via a PV interface?
● Libvirt support? (and OpenStack?)
● Cooperate with Kemari
Thank you
● Questions?
● Resources
  ● Project page
    – http://grivon.apgrid.org/quick-kvm-migration
    – http://sites.google.com/site/grivonhome/quick-kvm-migration
  ● Enabling Instantaneous Relocation of Virtual Machines with a Lightweight
    VMM Extension: proof-of-concept, ad-hoc prototype; not a new design
    – http://grivon.googlecode.com/svn/pub/docs/ccgrid2010-hirofuchi-paper.pdf
    – http://grivon.googlecode.com/svn/pub/docs/ccgrid2010-hirofuchi-talk.pdf
  ● Reactive consolidation of virtual machines enabled by postcopy live
    migration: advantage for VM consolidation
    – http://portal.acm.org/citation.cfm?id=1996125
    – http://www.emn.fr/x-info/ascola/lib/exe/fetch.php?media=internet:vtdc-postcopy.pdf
  ● Qemu Wiki
    – http://wiki.qemu.org/Features/PostCopyLiveMigration
  ● Demo video
    – http://www.youtube.com/watch?v=lo2JJ2KWrlA
Backup slides
Real-Time Issues in Live Migration of Virtual Machines
http://retis.sssup.it/~tommaso/publications/VHPC09.pdf

