== Two Execution Cases Under VDEchp ==
In the VDEchp design, the state of each VM's stable-copy is always one checkpoint interval behind the VM's current state, except for the initial state. This means that when a new checkpoint is generated, it is not copied to the stable-copy immediately; instead, the previous checkpoint is copied to the stable-copy. The reason is that there is latency between when an error occurs and when the failure caused by that error is detected.

For example, in Figure 3, an error occurs at time t0 and causes the system to fail at time t1. Since most error latencies are small, in most cases t1 - t0 < Te. In case A, the latest checkpoint is chp1, and the system rolls back to state S1 by resuming from checkpoint chp1. In the second case, however, an error occurs at time t2, after which a new checkpoint chp3 is saved. After the system moves to state S3, this error causes the system to fail at time t3. Here, we assume that t3 - t2 < Te. If we chose chp3 as the latest correct checkpoint and rolled the system back to state S3, the system would fail again after resuming. In this case, the latest correct checkpoint is therefore chp2: when the system crashes, we should roll it back to state S2 by resuming from checkpoint chp2.
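The commit rule above can be sketched in a few lines. This is an illustrative model (class and method names are our own, not VDEchp's code): the newest checkpoint stays pending, the previous one is committed as the stable-copy, and rollback always targets the stable-copy because the pending checkpoint may already contain a latent error.

```python
class CheckpointStore:
    """Sketch of the delayed-commit rule: the stable-copy lags the
    current VM state by one checkpoint interval."""

    def __init__(self):
        self.pending = None      # newest checkpoint, not yet trusted
        self.stable_copy = None  # one interval behind: trusted rollback target

    def save_checkpoint(self, state):
        # Committing the *previous* checkpoint keeps the stable-copy one
        # checkpoint interval behind the current VM state.
        if self.pending is not None:
            self.stable_copy = self.pending
        self.pending = state

    def rollback_target(self):
        # On failure, resume from the stable-copy, never from the pending
        # checkpoint: an error with latency < Te may have tainted it.
        return self.stable_copy

# Mirroring the second case in Figure 3:
store = CheckpointStore()
store.save_checkpoint("S1")  # chp1 pending
store.save_checkpoint("S2")  # chp1 committed, chp2 pending
store.save_checkpoint("S3")  # chp2 committed, chp3 pending (possibly tainted)
assert store.rollback_target() == "S2"
```

Rolling back to S2 rather than S3 is exactly the behavior the second case requires, at the cost of replaying one extra checkpoint interval of work.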

== The Definition Of The Global Checkpoint ==
To compose a globally consistent state of all the VMs, the checkpoints of the individual VMs must be coordinated. Besides checkpointing each VM's correct state, it is also essential to guarantee the consistency of all communication states within the virtual network. In Figure 4, the messages exchanged among the VMs are marked by arrows going from the sender to the receiver. Each VM's execution line is divided by its checkpoints: the part above a checkpoint corresponds to the state before that checkpoint, and the part below corresponds to the state after it. A global checkpoint (consistent or not) is marked as the "cut" line, which separates each VM's timeline into two parts. The messages exchanged in the virtual network fall into three categories:
(1) The message's source state and destination state are on the same side of the cut line. For example, in Figure 4, both the source state and the destination state of message m1 are above the cut line; similarly, both the source state and the destination state of message m2 are below the cut line.
(2) The message's source state is above the cut line while the destination state is below it, like message m3.
(3) The message's source state is below the cut line while the destination state is above it, like message m4.

For these three types of messages, a globally consistent cut must record the delivery of type (1) and type (2) messages but must exclude type (3) messages. Consider message m4: in VM3's checkpoint saved on the cut line, m4 is already recorded as received, but VM4's checkpoint on the same cut line has no record that m4 was ever sent. The state saved on this global cut is therefore inconsistent, because VM3 receives a message m4 that, according to VM4's saved state, was sent by no one.
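The classification can be expressed directly in code. This is a hypothetical sketch, not part of VDEchp: a cut assigns each VM a local cut time, each message carries the (per-VM, unsynchronized) times of its send and receive events, and a cut is consistent exactly when it admits no type (3) messages.

```python
# A "cut" maps each VM name to its local cut time; send/receive times are
# local to the source and destination VM respectively, so they need not
# be comparable across VMs.

def classify(msg, cut):
    sent_before = msg["send_time"] <= cut[msg["src"]]
    recv_before = msg["recv_time"] <= cut[msg["dst"]]
    if sent_before == recv_before:
        return 1   # type (1): both endpoints on the same side of the cut
    if sent_before and not recv_before:
        return 2   # type (2): sent before the cut, delivered after it
    return 3       # type (3): received before the cut but sent after it

def is_consistent(messages, cut):
    # A globally consistent cut admits no type (3) messages.
    return all(classify(m, cut) != 3 for m in messages)

# A message shaped like m4 in Figure 4 (times are made up for illustration):
cut = {"VM3": 10, "VM4": 10}
m4 = {"src": "VM4", "dst": "VM3", "send_time": 12, "recv_time": 9}
assert classify(m4, cut) == 3
assert not is_consistent([m4], cut)
```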

== Distributed Checkpoint Algorithm in VDEchp ==
As the basis of our lightweight checkpoint mechanism, we develop a variant of the simplified version of Mattern's algorithm used in VNsnap. As illustrated above, type (3) messages are unwanted because they are not recorded in any source VM's checkpoint, yet they are already recorded in some destination VM's checkpoint. In the VDEchp design, there is always a correct state for each VM, recorded as the stable-copy on disk. The stable-copy's state is one checkpoint interval behind the VM's current state, because we copy the previous checkpoint to the stable-copy only when a new checkpoint is taken. Therefore, before a checkpoint is committed by copying it to the stable-copy, we buffer all of the VM's outgoing messages during the corresponding checkpoint interval. Type (3) messages are thus never generated, because the buffered messages are released only after their send records have been preserved by copying the checkpoint to the on-disk stable-copy. Our algorithm works under the assumption that buffered messages are neither lost nor duplicated in VDEchp.
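The buffering rule can be sketched as follows. This is an illustrative model with invented names (VDEchp operates at the virtual-network layer, which is abstracted away here): outgoing messages are held in an outbox, and the outbox is flushed only after the checkpoint covering those send events has been written to the stable-copy.

```python
class FakeDisk:
    """Stand-in for the on-disk stable-copy."""
    def __init__(self):
        self.committed = None

    def write(self, checkpoint):
        self.committed = checkpoint

class BufferingVM:
    """Sketch of the output-buffering rule that prevents type (3) messages."""

    def __init__(self, network_send):
        self.network_send = network_send  # the real send primitive
        self.outbox = []                  # messages held during the interval

    def send(self, msg):
        # Never release a message whose send event is not yet on disk.
        self.outbox.append(msg)

    def commit_checkpoint(self, checkpoint, stable_copy):
        # Commit first: the send records now survive any rollback, so a
        # delivered message can no longer become a type (3) message.
        stable_copy.write(checkpoint)
        for msg in self.outbox:
            self.network_send(msg)
        self.outbox.clear()

delivered = []
disk = FakeDisk()
vm = BufferingVM(delivered.append)
vm.send("m1")
vm.send("m2")
assert delivered == []               # held while the interval is open
vm.commit_checkpoint("chp1", disk)
assert delivered == ["m1", "m2"]     # released only after the commit
```

Note the ordering inside `commit_checkpoint`: writing the stable-copy strictly before flushing the outbox is what makes the no-type-(3) guarantee hold even if the VM crashes between the two steps.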

In the VDEchp design, multiple VMs run on different hosts connected within the network. One host is the backup host, where we deploy the VDEchp Initiator; the others are primary hosts, where the protected VMs run. The Initiator can run on a VM dedicated to the checkpointing service; it does not need to be deployed on a privileged guest system such as Domain 0 in Xen. When VDEchp starts to record a globally consistent checkpoint, the Initiator broadcasts the checkpoint request and waits for acknowledgements from all the recipients. Upon receiving a checkpoint request, each VM checks its latest recorded on-disk stable-copy (not the in-memory checkpoint), marks this stable-copy as part of the global checkpoint, and sends a "success" acknowledgement back to the Initiator. The algorithm terminates when the Initiator has received acknowledgements from all the VMs. For example, if the Initiator sends a request (marked as rn) to checkpoint the entire VDE, a VM named VM1 in the VDE records a stable-copy named "vm1 global rn". The stable-copies from all the VMs compose a globally consistent checkpoint for the entire VDE. Moreover, if the VDEchp Initiator sends checkpoint requests at a user-specified frequency, the correct state of the entire VDE is recorded periodically.
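The request/acknowledgement round can be sketched as below. This is a hypothetical, single-process model (the real Initiator broadcasts over the network, which is abstracted away): each VM tags its latest stable-copy for request rn and replies "success", and the Initiator finishes once every acknowledgement has arrived.

```python
class ProtectedVM:
    """Sketch of a primary-host VM responding to a checkpoint request."""

    def __init__(self, name):
        self.name = name
        self.marked = []  # stable-copies marked as parts of global checkpoints

    def on_checkpoint_request(self, request_id):
        # Mark the latest on-disk stable-copy as part of global checkpoint rn.
        self.marked.append(f"{self.name} global r{request_id}")
        return "success"

class Initiator:
    """Sketch of the VDEchp Initiator's broadcast-and-collect round."""

    def __init__(self, vms):
        self.vms = vms

    def global_checkpoint(self, request_id):
        # "Broadcast" the request, then terminate once all VMs have acked.
        acks = [vm.on_checkpoint_request(request_id) for vm in self.vms]
        return all(ack == "success" for ack in acks)

vms = [ProtectedVM("vm1"), ProtectedVM("vm2")]
assert Initiator(vms).global_checkpoint(7)
assert vms[0].marked == ["vm1 global r7"]
```

Because each VM marks an already-committed stable-copy rather than taking a new checkpoint, no VM is paused by the request, which is what keeps the coordination lightweight.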