Changes between Version 16 and Version 17 of FGBI


Timestamp:
09/28/11 00:00:58
Author:
lvpeng

  • FGBI

    v16 v17  
    LLM.

== Overhead ==
Figure 5a shows the overhead during VM migration. The figure compares the
applications' runtime with and without migration, under Apache, SPECweb,
NPB-EP, and SPECsys, with the fine-grained block size varied among 64, 128,
and 256 bytes. We observe that the overhead is low in all cases, never more
than 13% (Apache with 64-byte blocks). As we discuss in Section 3, the smaller
the block size that FGBI chooses, the greater the memory overhead it
introduces. The smallest block size in our experiments is 64 bytes, so it
represents the worst-case overhead among the tested block sizes. Even in this
"worst" case, the average overhead across all these benchmarks is less than
8.21%.
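The block-size/overhead trade-off can be illustrated with a minimal sketch. FGBI tracks dirty memory at sub-page granularity, which requires per-block bookkeeping; the sketch below assumes one truncated hash digest per block (the digest width and hashing scheme are illustrative assumptions, not FGBI's actual implementation) to show how metadata grows as blocks shrink:

```python
import hashlib

PAGE_SIZE = 4096   # typical x86 page size
HASH_BYTES = 16    # assumed per-block digest width (illustrative)

def hash_blocks(page: bytes, block_size: int) -> list[bytes]:
    """Hash each fine-grained block of a page; a changed hash marks the
    block dirty, so only dirty blocks need to be sent on migration."""
    return [hashlib.md5(page[i:i + block_size]).digest()[:HASH_BYTES]
            for i in range(0, len(page), block_size)]

# Metadata overhead grows as the block size shrinks:
for bs in (256, 128, 64):
    hashes = hash_blocks(bytes(PAGE_SIZE), bs)
    overhead = len(hashes) * HASH_BYTES / PAGE_SIZE
    print(f"{bs:3d}-byte blocks: {len(hashes)} hashes, "
          f"{overhead:.1%} of page size in metadata")
```

Halving the block size doubles the number of digests per page, which is why the 64-byte configuration is the worst case for memory overhead even though it gives the finest dirty tracking.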
     97
To understand the respective contributions of the three proposed techniques
(i.e., FGBI, sharing, and compression), Figure 5b shows the breakdown of the
performance improvement among them under the NPB-EP benchmark. It compares the
downtime of integrated FGBI (which we use for evaluation in this section),
FGBI with sharing but no compression support, FGBI with compression but no
sharing support, and FGBI with neither sharing nor compression support. As
previously discussed, since NPB-EP is a memory-intensive workload, it should
present a clear difference among the three techniques, all of which focus on
reducing memory-related overhead. We do not include the downtime of LLM here:
for this compute-intensive benchmark, LLM incurs a very long downtime, more
than 10 times that of FGBI.
     110
We observe from Figure 5b that applying the FGBI mechanism alone, without
sharing or compression support, reduces the downtime compared with that of
Remus in Figure 4b, but not significantly (the reduction is no more than 20%).
However, compared with FGBI alone, integrating hybrid compression further
reduces the downtime by as much as 22%. We obtain a similar benefit after
adding sharing support (a further 26% downtime reduction). Integrating both
sharing and compression support reduces the downtime by as much as 33%,
compared to FGBI without either.
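The complementary effect of sharing and compression can be sketched as follows. This is a minimal illustration, not FGBI's actual protocol: the 4-byte reference size, the SHA-1 digests, and zlib as the compressor are all assumptions chosen to show how deduplicating identical dirty blocks (sharing) and compressing the remaining unique blocks each shrink the payload sent at a checkpoint:

```python
import hashlib
import zlib

def prepare_transfer(dirty_blocks: list[bytes]) -> tuple[list, int]:
    """Deduplicate identical dirty blocks (sharing), then compress the
    unique ones. Returns the records to send and the payload size in bytes."""
    seen = {}              # digest -> index of first occurrence
    records, payload = [], 0
    for block in dirty_blocks:
        digest = hashlib.sha1(block).digest()
        if digest in seen:
            records.append(("ref", seen[digest]))  # share: send a reference
            payload += 4                           # assumed reference size
        else:
            seen[digest] = len(records)
            data = zlib.compress(block)
            # hybrid: send the compressed form only when it is smaller
            if len(data) < len(block):
                records.append(("zlib", data))
                payload += len(data)
            else:
                records.append(("raw", block))
                payload += len(block)
    return records, payload

# Example: two identical zero-filled blocks and one harder-to-compress block.
blocks = [bytes(64), bytes(64), bytes(range(64))]
records, size = prepare_transfer(blocks)
print([kind for kind, _ in records], size)  # the duplicate collapses to a ref
```

Since sharing and compression remove redundancy along different axes (across blocks vs. within a block), combining them reduces more data than either alone, consistent with the 22%, 26%, and 33% reductions reported above.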