Table I shows the downtime results under different mechanisms. We compare VDEchp with Remus and the VNsnap-memory daemon under the same checkpoint interval, measuring the downtime of all three mechanisms with the same VM (512 MB of RAM) in three cases: (a) the VM is idle, (b) the VM runs the NPB-EP benchmark program [5], and (c) the VM runs the Apache web server workload [2].
Several observations are in order regarding the downtime measurements.
First, the downtime of all three mechanisms is short and very similar in the idle case. This is not surprising: memory updates are rare in an idle VM, so little dirty data remains to be copied at suspension time under any of the mechanisms.
Second, the downtime of both VDEchp and Remus remains almost the same whether running NPB-EP or Apache. This is because downtime depends on the amount of memory remaining to be copied when the guest VM is suspended. Since both VDEchp and Remus checkpoint at high frequency, the number of dirty pages left in the last round is almost the same for the two workloads.
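The relationship between checkpoint frequency and downtime can be sketched with a toy back-of-the-envelope model (not the paper's implementation): downtime is approximated as the time to copy the pages still dirty at suspension, and the dirty pages accumulated per round grow with the checkpoint interval. All rates, intervals, and bandwidth figures below are hypothetical values chosen only for illustration.

```python
# Toy model: downtime ~ (dirty pages remaining at suspension) * page size / bandwidth.
# All workload parameters are hypothetical, not measurements from the paper.
PAGE_SIZE = 4096  # bytes, typical x86 page size

def dirty_pages_in_round(dirty_rate_pps, interval_s, total_pages):
    """Pages dirtied during one checkpoint interval, capped at total RAM."""
    return min(dirty_rate_pps * interval_s, total_pages)

def downtime_seconds(dirty_pages, bandwidth_bps):
    """Time to copy the remaining dirty pages while the VM is suspended."""
    return dirty_pages * PAGE_SIZE / bandwidth_bps

total_pages = 512 * 1024 * 1024 // PAGE_SIZE   # 512 MB VM, as in the experiment
bandwidth = 1024**3 / 8                        # assume a ~1 Gb/s transfer link
dirty_rate = 50_000                            # assumed pages/s for a busy workload

# A high-frequency mechanism (short interval) leaves few dirty pages per round;
# a low-frequency one accumulates far more dirty data to flush at suspension.
high_freq = dirty_pages_in_round(dirty_rate, 0.025, total_pages)  # 25 ms interval
low_freq = dirty_pages_in_round(dirty_rate, 1.0, total_pages)     # 1 s interval

print(f"high-frequency downtime: {downtime_seconds(high_freq, bandwidth):.3f} s")
print(f"low-frequency downtime:  {downtime_seconds(low_freq, bandwidth):.3f} s")
```

Under these assumed numbers the high-frequency mechanism's downtime is dominated by a small residual dirty set, which is consistent with VDEchp and Remus showing similar downtime across workloads.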
Third, when running the NPB-EP program, VDEchp achieves more than 20% lower downtime than the VNsnap-memory daemon. NPB-EP is a computationally intensive workload, so the guest VM's memory is updated at a high rate. Because the VNsnap-memory daemon transfers memory at a lower frequency than the other high-frequency checkpoint solutions, it accumulates more dirty data per round and therefore takes longer to save the checkpoint.
Finally, when running the Apache application, memory is updated less intensively than under NPB-EP, but still considerably more than in the idle run. The results show that VDEchp again has lower downtime than the VNsnap-memory daemon, a reduction of roughly 16%.