Changes between Version 1 and Version 2 of Support
Timestamp: 08/30/11 21:18:16 (13 years ago)
Support
Unlike Difference Engine, we apply a straightforward sharing technique to reduce complexity. The goal of our sharing mechanism is to eliminate redundant copies of identical blocks. We share blocks and compare hash values in memory at runtime, using a hash function to index the contents of every block. If a hash value occurs more than once in an epoch, there is a good probability that the current block is identical to the block that produced the same hash value. To ensure the blocks are identical, they are compared bit by bit; if they match, they are reduced to one block. If the shared block is later modified, we must decide which of the original constituent blocks has been updated and needs to be transferred.

'''Hybrid Compression.''' Compression techniques can significantly improve the performance of live migration [10]. Compressed dirty data takes less time to transfer through the network, and migration traffic is significantly reduced because much less data moves between the primary and backup hosts. Therefore, we consider compressing dirty blocks in memory to reduce the amount of transferred data. Before transmitting a dirty block, we check for its presence in an address-indexed cache of previously transmitted blocks (through pages). On a cache hit, the whole page (including the dirty memory block) is XORed with its previous version, and the differences are run-length encoded (RLE). At the end of each migration epoch, we send only the delta from the previous transmission of the same memory block, reducing the migration traffic at each epoch.
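The block-sharing mechanism described above might be sketched as follows. This is a hypothetical illustration only: the `BlockStore` class, the choice of SHA-1 as the indexing hash, and the reference-count bookkeeping are assumptions, not part of the original design.

```python
import hashlib

class BlockStore:
    """Toy content-addressed store: identical blocks are kept once.

    Hypothetical sketch; the hash function and bookkeeping are assumptions.
    """
    def __init__(self):
        self._by_hash = {}   # hash digest -> canonical block bytes
        self._refs = {}      # hash digest -> reference count

    def share(self, block: bytes) -> bytes:
        """Return the canonical copy of `block`, deduplicating when possible."""
        h = hashlib.sha1(block).digest()
        canonical = self._by_hash.get(h)
        if canonical is not None:
            # A hash match only suggests identity; compare bit by bit
            # before sharing, in case of a collision.
            if canonical == block:
                self._refs[h] += 1
                return canonical
            # Collision with different contents: keep the block unshared.
            return block
        self._by_hash[h] = block
        self._refs[h] = 1
        return block
```

When a shared block is later written to, the writer's copy must be split off again (copy-on-write) so that only the updated constituent block is retransmitted.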
Since a smaller amount of data is transferred, both the total migration time and the downtime can be decreased. However, at the current migration epoch a significant fraction of blocks are still not present in the cache. In these cases, we use a general-purpose algorithm to achieve a higher degree of compression. Through this hybrid approach, each dirty block is preferentially XOR-compressed through whole-page compression, and a general-purpose compression technique such as zlib is applied to the blocks that are “new” at the current migration epoch.
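The hybrid compression path can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the byte-pair RLE format, and the use of Python's `zlib` module stand in for whatever general-purpose compressor and encoding the system actually uses.

```python
import zlib

def rle_encode(data: bytes) -> bytes:
    """Run-length encode as (count, byte) pairs; runs are capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def compress_dirty_page(page: bytes, cache: dict, addr: int):
    """Hybrid compression for one dirty page, keyed by its address.

    Cache hit: XOR against the previously sent version and RLE the
    (mostly zero) difference.  Cache miss (the page is "new" at this
    epoch): fall back to a general-purpose compressor, zlib here.
    Returns a (method, payload) pair.
    """
    prev = cache.get(addr)
    cache[addr] = page  # remember this version for the next epoch
    if prev is not None and len(prev) == len(page):
        delta = bytes(a ^ b for a, b in zip(page, prev))
        return "xor-rle", rle_encode(delta)
    return "zlib", zlib.compress(page)
```

Because consecutive versions of a page usually differ in only a few bytes, the XOR delta is almost all zeros and RLE shrinks it to a few dozen bytes, which is why the cached path is preferred over general-purpose compression.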