Changes between Version 11 and Version 12 of VDEchp
Timestamp: 10/04/11 02:28:08 (13 years ago)
VDEchp
== Different Execution Cases Under VDEchp ==
[[Image(figure3.jpg)]]

In the VDEchp design, the state of each VM's stable-copy is always one checkpoint interval behind the VM's current state, except for the initial state. This means that when a new checkpoint is generated, it is not copied to the stable-copy immediately; instead, the previous checkpoint is copied to the stable-copy. The reason is that there is latency between when an error occurs and when the failure caused by that error is detected.

…

== The Definition Of The Global Checkpoint ==
[[Image(figure4.jpg)]]

To compose a globally consistent state of all the VMs, the checkpoints of the individual VMs must be coordinated. Besides checkpointing each VM's correct state, it is also essential to guarantee the consistency of all communication states within the virtual network. In Figure 4, the messages exchanged among the VMs are marked by arrows going from the sender to the receiver. Each VM's execution line is divided by its checkpoints: the part above a checkpoint corresponds to the state before the checkpoint, and the part below it corresponds to the state after the checkpoint. A global checkpoint (consistent or not) is marked as the "cut" line, which separates each VM's timeline into two parts. The messages exchanged in the virtual network fall into three categories:

(1) The message's source state and destination state are on the same side of the cut line. For example, in Figure 4, both the source state and the destination state of message m1 are above the cut line; similarly, both states of message m2 are below the cut line.

(2) The message's source state is above the cut line while its destination state is below it, like message m3.
(3) The message's source state is below the cut line while its destination state is above it, like message m4.

…

=== Downtime Evaluation for Solo VM ===
[[Image(table1.jpg)]]

Table I shows the downtime results under the different mechanisms. We compare VDEchp with Remus and the VNsnap-memory daemon under the same checkpoint interval. We measure the downtime of all three mechanisms with the same VM (512MB of RAM) for three cases: a) when the VM is idle, b) when the VM runs the NPB-EP benchmark program, and c) when the VM runs the Apache web server workload.

Several observations are in order regarding the downtime measurements.

…

=== VDE Downtime ===
[[Image(figure7.jpg)]]

The VDE downtime is the time from when the failure is detected in the VDE until the entire VDE resumes from the last globally consistent checkpoint. We conducted experiments to measure this downtime. To induce failures in the VDE, we developed an application program that causes a segmentation fault after executing for a while. This program is launched on several VMs to generate a failure while the distributed application workload is running in the VDE. The protected VDE is then rolled back to the last globally consistent checkpoint. We use the NPB-EP program (an MPI task in the VDE) and the Apache web server benchmark as the distributed workload on the protected VMs.
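The three message categories relative to the cut line can be made concrete with a small sketch. This is not VDEchp code; it is a minimal illustration, assuming each message records whether the sender's and receiver's local states at send/receive time fall before ("above") or after ("below") the cut. All names are hypothetical.

```python
# Illustrative sketch: classify a message against a global "cut" by where
# its send and receive events fall relative to each VM's checkpoint.
# The function and message encoding below are assumptions for this example,
# not part of VDEchp itself.

def classify(sent_before_cut: bool, received_before_cut: bool) -> int:
    """Return the category (1-3) used in the text for one message."""
    if sent_before_cut == received_before_cut:
        # Source and destination states are on the same side of the cut.
        return 1
    if sent_before_cut and not received_before_cut:
        # Sent above the cut, received below it: an in-flight message (like m3).
        return 2
    # Sent below the cut, received above it (like m4): the cut would record
    # the message as received before it was sent, so it is inconsistent.
    return 3

# Messages from Figure 4, encoded as (sent_before_cut, received_before_cut).
messages = {"m1": (True, True), "m2": (False, False),
            "m3": (True, False), "m4": (False, True)}

for name, (s, r) in messages.items():
    print(name, "-> category", classify(s, r))
```

A global checkpoint is consistent only if no message falls into category (3); category (2) messages can be tolerated by logging and replaying them after rollback.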