== Overhead ==
Figure 5a shows the overhead during VM migration. The figure compares the applications' runtime with and without migration, under Apache, SPECweb, NPB-EP, and SPECsys, with the fine-grained block size set to 64, 128, and 256 bytes. We observe that in all cases the overhead is low, no more than 13% (Apache with 64-byte blocks). As we discuss in Section 3, the smaller the block size that FGBI chooses, the greater the memory overhead it introduces. The smallest block size in our experiments is 64 bytes, so it represents the worst-case overhead among the evaluated block sizes. Even in this "worst" case, the average overhead across all benchmarks is less than 8.21%.
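
To make the block-size trade-off concrete, the back-of-the-envelope sketch below estimates how much per-block tracking metadata a fine-grained dirty-block scheme needs as the block size shrinks. The 1 GiB guest-memory size and the 4-byte per-block digest are assumptions for illustration only, not parameters of FGBI.

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Back-of-the-envelope estimate only: per-block tracking metadata as the
 * block size shrinks. The 1 GiB guest memory size and the 4-byte digest
 * per block are assumed values for illustration, not FGBI parameters.
 */
int main(void)
{
    const uint64_t guest_mem = 1ULL << 30;            /* 1 GiB of guest memory (assumed) */
    const uint64_t digest_sz = 4;                     /* bytes of metadata per block (assumed) */
    const uint64_t block_sizes[] = { 64, 128, 256 };  /* block sizes evaluated in Figure 5a */

    for (size_t i = 0; i < sizeof(block_sizes) / sizeof(block_sizes[0]); i++) {
        uint64_t nblocks  = guest_mem / block_sizes[i];
        uint64_t metadata = nblocks * digest_sz;
        printf("block size %3llu B -> %8llu blocks, ~%llu MiB of metadata\n",
               (unsigned long long)block_sizes[i],
               (unsigned long long)nblocks,
               (unsigned long long)(metadata >> 20));
    }
    return 0;
}
```

Halving the block size doubles the number of blocks that must be tracked, which is consistent with the 64-byte configuration showing the highest runtime overhead in Figure 5a.
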
In order to understand the respective contributions of the three proposed techniques (i.e., FGBI, sharing, and compression), Figure 5b shows the breakdown of the performance improvement among them under the NPB-EP benchmark. It compares the downtime of integrated FGBI (which we use for evaluation in this section), FGBI with sharing but without compression support, FGBI with compression but without sharing support, and FGBI without either sharing or compression support. As previously discussed, since NPB-EP is a memory-intensive workload, it should show a clear difference among the three techniques, all of which focus on reducing memory-related overhead. We do not include the downtime of LLM here, since LLM incurs a very long downtime under this benchmark, more than 10 times the downtime that FGBI incurs.
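
The sketch below illustrates how the three pieces could compose within a single migration epoch: fine-grained tracking skips clean blocks, sharing replaces a duplicate block with a reference to an already-sent block, and compression shrinks what remains. The block layout, the FNV-1a digest, and the byte-level delta used as a stand-in for hybrid compression are assumptions for illustration; this is not the actual FGBI implementation.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/*
 * Minimal sketch of one migration epoch combining fine-grained dirty
 * tracking, block sharing, and a delta stand-in for hybrid compression.
 * All sizes, the FNV-1a digest, and the synthetic dirty pattern are
 * illustrative assumptions, not the FGBI implementation.
 */

#define BLOCK_SZ   64
#define NUM_BLOCKS 8

static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

int main(void)
{
    uint8_t  prev[NUM_BLOCKS][BLOCK_SZ] = {0};   /* copy sent in the previous epoch */
    uint8_t  curr[NUM_BLOCKS][BLOCK_SZ] = {0};   /* current guest memory            */
    uint64_t sent_hash[NUM_BLOCKS]      = {0};   /* digests of blocks already sent  */

    /* Pretend the guest dirtied blocks 1, 2, and 5 in this epoch. */
    memset(curr[1], 0xAA, BLOCK_SZ);
    memset(curr[2], 0xAA, BLOCK_SZ);             /* identical to block 1 -> shareable   */
    curr[5][0] = 0x01;                           /* one changed byte -> delta-friendly  */

    size_t bytes_to_send = 0;
    for (int b = 0; b < NUM_BLOCKS; b++) {
        if (memcmp(curr[b], prev[b], BLOCK_SZ) == 0)
            continue;                            /* fine-grained tracking: block clean  */

        uint64_t h = fnv1a(curr[b], BLOCK_SZ);
        int shared = 0;
        for (int o = 0; o < b; o++)
            if (sent_hash[o] == h) { shared = 1; break; }
        sent_hash[b] = h;

        if (shared) {
            bytes_to_send += sizeof(h);          /* sharing: send a reference only      */
        } else {
            size_t changed = 0;                  /* delta stand-in for hybrid           */
            for (int i = 0; i < BLOCK_SZ; i++)   /* compression: ship only changed bytes */
                if (curr[b][i] != prev[b][i]) changed++;
            bytes_to_send += changed;
        }
    }
    printf("payload this epoch: %zu bytes (vs. %d raw)\n",
           bytes_to_send, NUM_BLOCKS * BLOCK_SZ);
    return 0;
}
```

In this toy epoch, sharing turns the second identical block into a short reference and the delta stand-in sends only the single changed byte of the last dirty block, which is the same kind of saving, in miniature, that the breakdown in Figure 5b measures.
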
We observe from Figure 5b that if we apply the FGBI mechanism alone, without integrating sharing or compression support, the downtime is reduced compared with that of Remus in Figure 4b, but not significantly (the reduction is no more than 20%). However, after integrating hybrid compression, FGBI further reduces the downtime by as much as 22% compared with FGBI without either support. We obtain a similar benefit after adding the sharing support instead (a downtime reduction of 26%). If we integrate both sharing and compression support, the downtime is reduced by as much as 33% compared to FGBI without sharing or compression support.