= [wiki:Support Support Techniques] =

'''Block Sharing.''' Past efforts use the memory page as the sharing granularity, so they still suffer from the “one byte differs, neither page can be shared” problem. Therefore, FGBI uses a smaller block as the sharing granularity to reduce memory overhead. The Difference Engine project also illustrates the potential savings from sub-page sharing, both within and across virtual machines, achieving savings of up to 77%. To share memory at the sub-page level, the authors construct patches that represent a page as the difference relative to a reference page. However, this patching method requires the selected pages to be accessed infrequently; otherwise, the overhead of compression and decompression outweighs the benefits. Their experimental evaluations reveal that patching incurs additional complexity and overhead when memory-intensive workloads run on guest VMs (as their results for the “Random Pages” workload show). Unlike Difference Engine, we apply a straightforward sharing technique to reduce this complexity. The goal of our sharing mechanism is to eliminate redundant copies of identical blocks. We share blocks and compare hash values in memory at runtime, using a hash function to index the contents of every block. If the hash value of a block appears more than once in an epoch, there is a good probability that the current block is identical to the block that previously produced that hash value. To ensure the blocks are identical, they are compared bit by bit; if they match, they are reduced to one block. If the shared block is later modified, we must determine which of the original constituent blocks has been updated and needs to be transferred. A sketch of the hash-and-verify step appears at the end of this section.

'''Hybrid Compression.''' Compression techniques can significantly improve the performance of live migration [10]. Compressed dirty data takes less time to transfer over the network, and migration-induced network traffic shrinks considerably because much less data moves between the primary and backup hosts. Therefore, for dirty blocks in memory, we compress them to reduce the amount of transferred data. Before transmitting a dirty block, we check for its presence in an address-indexed cache of previously transmitted blocks (maintained at whole-page granularity). On a cache hit, the whole page (including the dirty memory block) is XORed with its previous version, and the differences are run-length encoded (RLE). At the end of each migration epoch, we send only the delta from a previous transmission of the same memory block, reducing the amount of migration traffic in each epoch. Since a smaller amount of data is transferred, both the total migration time and the downtime decrease. However, in any given migration epoch, a significant fraction of blocks are not present in the cache; in these cases, we use a general-purpose algorithm to achieve a higher degree of compression. With this hybrid approach, each dirty block is preferentially XOR-compressed via whole-page delta encoding, while blocks that are “new” in the current migration epoch are compressed with a general-purpose technique such as zlib. The second sketch below illustrates this two-path scheme.
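The following is a minimal sketch of the block-sharing step in C. The 64-byte block size, the FNV-1a hash, the bucket count, and the `share_block` name are illustrative assumptions, not the actual FGBI implementation; any sufficiently strong content hash would serve.

{{{
#!c
/* Illustrative sketch of hash-based block sharing (not the FGBI code):
 * hash every fixed-size block, and when two blocks in the same epoch
 * hash to the same value, confirm with a bitwise comparison before
 * collapsing them into one shared copy. */
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

#define BLOCK_SIZE   64          /* sub-page sharing granularity (assumed) */
#define HASH_BUCKETS 4096

/* FNV-1a: a simple, fast content hash chosen here for illustration. */
static uint64_t block_hash(const uint8_t *blk)
{
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < BLOCK_SIZE; i++) {
        h ^= blk[i];
        h *= 1099511628211ULL;
    }
    return h;
}

struct shared_block {
    uint64_t hash;
    uint8_t *data;               /* canonical copy of the block */
    struct shared_block *next;   /* hash-bucket chain */
};

static struct shared_block *table[HASH_BUCKETS];

/* Return the canonical copy for 'blk'. If an identical block was seen
 * earlier in this epoch, reuse it; otherwise register 'blk' as new. */
uint8_t *share_block(uint8_t *blk)
{
    uint64_t h = block_hash(blk);
    struct shared_block *e;

    for (e = table[h % HASH_BUCKETS]; e != NULL; e = e->next) {
        /* A hash match is only probabilistic: verify bit by bit. */
        if (e->hash == h && memcmp(e->data, blk, BLOCK_SIZE) == 0)
            return e->data;      /* identical: reduce to one block */
    }

    e = malloc(sizeof(*e));
    if (e == NULL)
        return blk;              /* out of memory: skip sharing */
    e->hash = h;
    e->data = blk;
    e->next = table[h % HASH_BUCKETS];
    table[h % HASH_BUCKETS] = e;
    return blk;
}
}}}

In a complete system the table would be reset or aged at each epoch boundary, and a shared block would be write-protected so that a later modification triggers the “which constituent block was updated” decision described above.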
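Similarly, below is a minimal sketch of the hybrid compression path, assuming a 4 KB page size. The `compress_page` and `rle_encode` names and the (count, byte) RLE format are illustrative assumptions; only the zlib `compress()` call is a real library API (link with `-lz`).

{{{
#!c
/* Illustrative sketch of the hybrid delta/zlib path: on a cache hit
 * the dirty page is XORed against its previously sent version and the
 * XOR delta is run-length encoded; on a miss (or if the delta does
 * not shrink) the page is zlib-compressed instead. */
#include <stdint.h>
#include <string.h>
#include <zlib.h>                /* general-purpose compressor */

#define PAGE_SIZE 4096

/* Run-length encode 'src' into 'dst' as (count, byte) pairs. XOR
 * deltas are mostly zero bytes, so RLE shrinks them well. Returns the
 * encoded length, or 0 if RLE would expand or overflow 'dst'. */
static size_t rle_encode(const uint8_t *src, size_t n,
                         uint8_t *dst, size_t cap)
{
    size_t out = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t b = src[i];
        size_t run = 1;
        while (i + run < n && src[i + run] == b && run < 255)
            run++;
        if (out + 2 > cap || out + 2 >= n)
            return 0;            /* not helping: caller falls back */
        dst[out++] = (uint8_t)run;
        dst[out++] = b;
        i += run;
    }
    return out;
}

/* Compress one dirty page for transmission. 'cached' is the copy sent
 * in an earlier epoch (NULL if the page is new to the cache).
 * Returns the number of bytes written to 'out', or 0 on failure. */
size_t compress_page(const uint8_t *page, const uint8_t *cached,
                     uint8_t *out, size_t cap)
{
    if (cached != NULL) {
        /* Cache hit: XOR against the previous version, then RLE. */
        uint8_t delta[PAGE_SIZE];
        for (size_t i = 0; i < PAGE_SIZE; i++)
            delta[i] = page[i] ^ cached[i];
        size_t n = rle_encode(delta, PAGE_SIZE, out, cap);
        if (n > 0)
            return n;
        /* Fall through to zlib if the delta did not compress. */
    }
    /* Cache miss (or poor delta): general-purpose zlib compression. */
    uLongf dlen = (uLongf)cap;
    if (compress(out, &dlen, page, PAGE_SIZE) != Z_OK)
        return 0;
    return (size_t)dlen;
}
}}}

The fall-through to zlib when RLE fails to shrink the delta is a design choice of this sketch: it ensures a poorly compressing delta never produces more traffic than plain compression of the page would.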