Changes between Version 25 and Version 26 of VDEchp


Timestamp: 10/06/11 02:36:19
Author: lvpeng


== Distributed Checkpoint Algorithm in [wiki:VDEchp VDEchp] ==
We develop a variant of the simplified version of Mattern's algorithm used in [http://friends.cs.purdue.edu/dokuwiki/doku.php?id=vnsnap VNsnap] as the basis of our lightweight checkpoint mechanism. As explained earlier, type (3) messages are unwanted because they are not recorded in any source VM's checkpoint but are already recorded in some checkpoint of a destination VM. In the [wiki:VDEchp VDEchp] design, there is always a correct state for each VM, recorded as the stable copy on disk. The stable copy is one checkpoint interval behind the VM's current state, because we copy the last checkpoint to the stable copy only when we obtain a new checkpoint. Therefore, before a checkpoint is committed by copying it to the stable copy, we buffer all of the VM's outgoing messages during the corresponding checkpoint interval. Thus, type (3) messages are never generated, because the buffered messages are released only after their information has been saved by copying the checkpoint to the on-disk stable copy. Our algorithm works under the assumption that buffered messages are neither lost nor duplicated.
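
To make the buffering rule concrete, the sketch below shows how a VM's outgoing traffic could be held back until the checkpoint commits. This is a minimal illustration, not the actual [wiki:VDEchp VDEchp] code; the send_fn callback and the stable_copy.write() interface are hypothetical stand-ins.

{{{#!python
import collections

class OutputBuffer(object):
    """Buffers a VM's outgoing messages for the current checkpoint interval."""

    def __init__(self, send_fn):
        self._send = send_fn                  # puts a message on the wire
        self._pending = collections.deque()   # messages held during the interval

    def enqueue(self, msg):
        # Called instead of sending directly while the interval is open.
        self._pending.append(msg)

    def commit_checkpoint(self, checkpoint, stable_copy):
        # First make the checkpoint durable by copying it to the stable copy...
        stable_copy.write(checkpoint)
        # ...then release the buffered messages. Every released message is now
        # reflected in the sender's durable state, so no receiver checkpoint can
        # record a message that the sender's stable copy does not (type (3)).
        while self._pending:
            self._send(self._pending.popleft())
}}}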

In the [wiki:VDEchp VDEchp] design, multiple VMs run on different hosts connected within the network. One host is the backup host, where we deploy the [wiki:VDEchp VDEchp] Initiator; the others are primary hosts, where we run the protected VMs. The Initiator can run in a VM dedicated to the checkpointing service; it does not need to be deployed on a privileged guest system such as Domain 0 in Xen. When [wiki:VDEchp VDEchp] starts to record a globally consistent checkpoint, the Initiator broadcasts a checkpoint request and waits for acknowledgements from all recipients. Upon receiving a checkpoint request, each VM checks its latest recorded on-disk stable copy (not the in-memory checkpoint), marks this stable copy as part of the global checkpoint, and sends a "success" acknowledgement back to the Initiator. The algorithm terminates when the Initiator has received acknowledgements from all the VMs. For example, if the Initiator sends a request (marked as rn) to checkpoint the entire VDE, a VM named VM1 will record a stable copy named "vm1 global rn". The stable copies from all the VMs together compose a globally consistent checkpoint for the entire VDE. Moreover, if the [wiki:VDEchp VDEchp] Initiator sends checkpoint requests at a user-specified frequency, the correct state of the entire VDE is recorded periodically.
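
The coordination round can be summarized as follows. This is a simplified single-threaded sketch under assumed interfaces (network.broadcast, network.recv_ack, and the per-VM helpers are hypothetical), not the actual [wiki:VDEchp VDEchp] implementation:

{{{#!python
def initiator_checkpoint(vms, request_id, network):
    """Broadcast checkpoint request rn and wait for all acknowledgements."""
    network.broadcast({"type": "checkpoint_request", "id": request_id})
    acked = set()
    expected = set(vm.name for vm in vms)
    while acked != expected:
        ack = network.recv_ack()       # e.g. {"vm": "vm1", "status": "success"}
        if ack["status"] == "success":
            acked.add(ack["vm"])
    # The stable copies marked "<vm> global rn" now form the global checkpoint.

def on_checkpoint_request(vm, request):
    """Run by each protected VM when a checkpoint request arrives."""
    stable = vm.latest_stable_copy()   # the on-disk copy, not the in-memory one
    stable.mark("{} global r{}".format(vm.name, request["id"]))
    vm.send_ack({"vm": vm.name, "status": "success"})
}}}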

             Table 1. Solo VM downtime comparison.

Table 1 shows the downtime results under the different mechanisms. We compare [wiki:VDEchp VDEchp] with [http://nss.cs.ubc.ca/remus/ Remus] and the [http://friends.cs.purdue.edu/dokuwiki/doku.php?id=vnsnap VNsnap]-memory daemon under the same checkpoint interval. We measure the downtime of all three mechanisms, with the same VM (512MB of RAM), in three cases: a) when the VM is idle, b) when the VM runs the NPB-EP benchmark program, and c) when the VM runs the Apache web server workload.

Several observations are in order regarding the downtime measurements.

Second, the downtime of both [wiki:VDEchp VDEchp] and [http://nss.cs.ubc.ca/remus/ Remus] remains almost the same when running NPB-EP and Apache. This is because the downtime depends on the amount of memory remaining to be copied when the guest VM is suspended. Since both [wiki:VDEchp VDEchp] and [http://nss.cs.ubc.ca/remus/ Remus] use a high-frequency methodology, the set of pages dirtied in the last round is almost the same in both cases.
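
As a rough back-of-the-envelope model (our own simplification, not a formula taken from the measurements), the downtime is approximately the dirty data left in the final copy round divided by the rate at which it can be saved:

{{{#!python
def estimated_downtime(dirty_pages_last_round, page_size=4096,
                       save_bandwidth=1.0e9):
    """Rough model: downtime ~ remaining dirty memory / save bandwidth (B/s)."""
    return dirty_pages_last_round * page_size / save_bandwidth

# e.g. 2,000 residual 4KB pages at ~1 GB/s is about 8 ms of downtime
print(estimated_downtime(2000))
}}}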

Third, when running the NPB-EP program, [wiki:VDEchp VDEchp] has lower downtime than both [http://nss.cs.ubc.ca/remus/ Remus] and the [http://friends.cs.purdue.edu/dokuwiki/doku.php?id=vnsnap VNsnap]-memory daemon (the reduction is more than 20%). This is because NPB-EP is a computationally intensive workload, so the guest VM's memory is updated at a high rate. When saving the checkpoint, Remus and the VNsnap-memory daemon take more time than [wiki:VDEchp VDEchp] to save the larger amount of dirty data, due to their lower memory transfer frequency.

Finally, when running the Apache application, the memory is not updated as heavily as when running NPB-EP, but more heavily than in the idle run. The results show that [wiki:VDEchp VDEchp] has lower downtime than [http://nss.cs.ubc.ca/remus/ Remus] and the [http://friends.cs.purdue.edu/dokuwiki/doku.php?id=vnsnap VNsnap]-memory daemon (downtime is reduced by roughly 16%).

=== VDE Downtime ===