Changes between Version 42 and Version 43 of FGBI


Timestamp:
10/09/11 16:35:40
Author:
binoy
Comment:

--

Legend:

Unmodified
Added
Removed
Modified
  • FGBI

    v42 v43  
    11= [wiki:FGBI FGBI] =
    22
    3 Traditional xen-based systems track memory updates by keeping evidence of the dirty pages at each migration epoch. In [http://nss.cs.ubc.ca/remus/ Remus] and also our previous work, [wiki:LLM LLM], they use the same page size as Xen (for x86, this is 4KB), which is also the granularity for detecting memory changes. However, when running computational-intensive workloads under [wiki:LLM LLM] system, the long downtime performance becomes unacceptable. [wiki:FGBI FGBI] (Fine-Grained Block Identification) is a mechanism which uses smaller memory blocks (smaller than page sizes) as the granularity for detecting memory changes. [wiki:FGBI FGBI] calculates the hash value for each memory block at the beginning of each migration epoch. At the end of each epoch, instead of transferring the whole dirty page, [wiki:FGBI FGBI] computes new hash values for each block and compares them with the corresponding old values. Blocks are only modified if their corresponding hash values don’t match. Therefore, [wiki:FGBI FGBI] marks such blocks as dirty and replaces the old hash values with the new ones. Afterwards, [wiki:FGBI FGBI] only transfers dirty blocks to the backup host.
     3Traditional Xen-based systems track memory updates by keeping evidence of the dirty pages at each migration epoch. In [http://nss.cs.ubc.ca/remus/ Remus] (and in our previous work [wiki:LLM LLM]), the same page size as that of Xen (for x86, this is 4KB) is used as the granularity for detecting memory changes. However, when running computationally intensive workloads under [wiki:LLM LLM], the downtime becomes unacceptably long. [wiki:FGBI FGBI] (Fine-Grained Block Identification) is a mechanism that uses memory blocks smaller than a page as the granularity for detecting memory changes. [wiki:FGBI FGBI] calculates the hash value for each memory block at the beginning of each migration epoch. At the end of each epoch, instead of transferring the whole dirty page, [wiki:FGBI FGBI] computes a new hash value for each block and compares it with the corresponding old value. A block has been modified only if its new and old hash values do not match. Therefore, [wiki:FGBI FGBI] marks such blocks as dirty and replaces the old hash values with the new ones. Afterwards, [wiki:FGBI FGBI] transfers only the dirty blocks to the backup host.
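The block-level dirty detection described above can be sketched as follows. This is only an illustrative sketch, not the [wiki:FGBI FGBI] source code: the 64-byte block size matches the experiments below, but the use of MD5 and the function names are assumptions.

{{{
#!python
# Minimal sketch of hash-based dirty-block detection (illustrative only).
import hashlib

BLOCK_SIZE = 64  # bytes; a sub-page granularity (a page is 4KB on x86)

def block_hashes(memory):
    # `memory` is a bytes-like snapshot of the guest memory region.
    return [hashlib.md5(memory[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(memory), BLOCK_SIZE)]

def find_dirty_blocks(memory, old_hashes):
    # Recompute the per-block hashes at the end of the epoch and compare
    # them with the values recorded at the beginning of the epoch.
    new_hashes = block_hashes(memory)
    dirty = [i for i, (old, new) in enumerate(zip(old_hashes, new_hashes))
             if old != new]
    return dirty, new_hashes

# Per epoch: only the blocks listed in `dirty` (BLOCK_SIZE bytes each) are
# sent to the backup host, and `new_hashes` replaces the old hash table.
}}}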
    44
    5 [wiki:FGBI FGBI] is based on [http://nss.cs.ubc.ca/remus/ The Remus project] and our previous efforts Lightweight Live Migration ([wiki:LLM LLM]) mechanism. For a full description and evaluation, please see our [wiki:Publications OPODIS'11] paper.
     5[wiki:FGBI FGBI] is based on [http://nss.cs.ubc.ca/remus/ The Remus project] and the Lightweight Live Migration ([wiki:LLM LLM]) mechanism. For a full description and evaluation, please see our [wiki:Publications OPODIS'11] paper.
    66
    7 == Downtime Problem in [wiki:LLM LLM] ==
     7== The Downtime Problem in [wiki:LLM LLM] ==
    88[[Image(figure1.jpg)]]
    99
    1010            Figure 1. Primary-Backup model and the downtime problem.
    1111
    12 Downtime is the primary factor for estimating the high availability of a system, since any long downtime experience for clients may result in loss of client loyalty and thus revenue loss. Under the Primary-Backup model (Figure 1), there are two types of downtime: I) the time from when the primary host crashes until the VM resumes from the last checkpointed state on the backup host and starts to handle client requests (D,,1,, = T,,3,, - T,,1,,); II) the time from when the VM pauses on the primary (to save for the checkpoint) until it resumes (D,,2,,). From Jiang’s paper we observe that for memory-intensive workloads running on guest VMs (such as the highSys workload), [wiki:LLM LLM] endures much longer type I downtime than [http://nss.cs.ubc.ca/remus/ Remus]. This is because, these workloads update the guest memory at high frequency. On the other side, [wiki:LLM LLM] migrates the guest VM image update (mostly from memory) at low frequency but uses input replay as an auxiliary. In this case, when failure happens, a significant number of memory updates are needed in order to ensure synchronization between the primary and backup hosts. Therefore, it needs significantly more time for the input replay process in order to resume the VM on the backup host and begin handling client requests.
     12Downtime is the primary metric for assessing the availability of a system, since any long downtime experienced by clients may result in loss of client loyalty and thus revenue loss. Under the Primary-Backup model (Figure 1), there are two types of downtime: I) the time from when the primary host crashes until the VM resumes from the last checkpointed state on the backup host and starts to handle client requests (D,,1,, = T,,3,, - T,,1,,); and II) the time from when the VM pauses on the primary (to save the checkpoint) until it resumes (D,,2,,). From Jiang’s paper, we observe that for memory-intensive workloads running on guest VMs (such as the HighSys workload), [wiki:LLM LLM] endures much longer type I downtime than [http://nss.cs.ubc.ca/remus/ Remus]. This is because such workloads update the guest memory at high frequency. In contrast, [wiki:LLM LLM] migrates the guest VM image update (mostly from memory) at low frequency, but uses input replay as an auxiliary. Thus, when a failure happens, a significant number of memory updates are needed in order to ensure synchronization between the primary and backup hosts. Therefore, [wiki:LLM LLM] needs significantly more time for the input replay process in order to resume the VM on the backup host and begin handling client requests.
    1313
    14 Regarding the type II downtime, there are several migration epochs between two checkpoints, and the newly updated memory data is copied to the backup host at each epoch. At the last epoch, the VM running on the primary host is suspended and the remaining memory states are transferred to the backup host. Thus, the type II downtime depends on the amount of memory that remains to be copied and transferred when pausing the VM on the primary host. If we reduce the dirty data which need to be transferred at the last epoch, then we can reduce the type II downtime. Moreover, if we reduce the dirty data which needs to be transferred at each epoch, trying to synchronize the memory state between primary and backup host all the time, then at the last epoch, there won’t be too much new memory update that need to be transferred, so we can reduce the type I downtime too.
     14There are several migration epochs between two checkpoints, and the newly updated memory data is copied to the backup host at each epoch. At the last epoch, the VM running on the primary host is suspended and the remaining memory states are transferred to the backup host. Thus, the type II downtime depends on the amount of memory that remains to be copied and transferred when pausing the VM on the primary host. If we reduce the dirty data that needs to be transferred at the last epoch, then we can reduce the type II downtime. Moreover, if we reduce the dirty data that needs to be transferred at each epoch, while trying to keep the memory state synchronized between the primary and backup hosts all the time, then at the last epoch, there won’t be many new memory updates left to transfer. Thus, we can also reduce the type I downtime.
    1515
    1616== [wiki:FGBI FGBI] Design ==
    17 Therefore, in order to achieve HA in these virtualized systems, especially to address the downtime problem under memory-intensive workloads, we propose a memory synchronization technique for tracking memory updates, called Fine-Grained Block Identification (or [wiki:FGBI FGBI]). [http://nss.cs.ubc.ca/remus/ Remus] and [wiki:LLM LLM] track memory updates by keeping evidence of the dirty pages
     17Therefore, in order to reduce the downtime under memory-intensive workloads and increase availability, we propose a memory synchronization technique for tracking memory updates, called Fine-Grained Block Identification (or [wiki:FGBI FGBI]). As pointed out before, [http://nss.cs.ubc.ca/remus/ Remus] and [wiki:LLM LLM] track memory updates by keeping evidence of the dirty pages
    1818at each migration epoch. [http://nss.cs.ubc.ca/remus/ Remus] uses the same page size as Xen (for x86, this is
    19194KB), which is also the granularity for detecting memory changes. However, this
     
    2525blocks.
    2626
    27 We propose the [wiki:FGBI FGBI] mechanism which uses memory blocks (smaller than
     27The [wiki:FGBI FGBI] mechanism uses memory blocks (smaller than
    2828page sizes) as the granularity for detecting memory changes. [wiki:FGBI FGBI] calculates
    2929the hash value for each memory block at the beginning of each migration epoch.
     
    3434Therefore, [wiki:FGBI FGBI] marks such blocks as dirty and replaces the old hash values with
    3535the new ones. Afterwards, [wiki:FGBI FGBI] only transfers dirty blocks to the backup host.
     36
    3637However, because it uses block granularity, [wiki:FGBI FGBI] introduces new overhead.
    3738If we want to accurately approximate the true dirty region, we need to set the
    3839block size as small as possible. For example, to obtain the highest accuracy,
    39 the best block size is one bit. That is impractical because it requires storing an
     40the best block size is one bit. But that is impractical, because it requires storing an
    4041additional bit for each bit in memory, which means that we need to double the
    4142main memory. Thus, a smaller block size leads to a greater number of blocks and
    42 also requires more memory for storing the hash values. Based on these past eorts
    43 illustrating the memory saving potential, we present two supporting
    44 techniques: block sharing and hybrid compression.
     43also requires more memory for storing the hash values. We present two techniques to reduce the memory overhead: block sharing and hybrid compression.
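To see why smaller blocks inflate the memory overhead, the following back-of-the-envelope sketch estimates the space needed to keep one hash per block. The 1 GB guest memory size and the 16-byte hash width are illustrative assumptions, not figures from the paper.

{{{
#!python
# Rough estimate of the space needed to store one hash per memory block.
# Illustrative assumptions: 1 GB of guest memory and 16-byte hash values.
GUEST_MEMORY = 1 << 30   # 1 GB
HASH_SIZE = 16           # bytes per hash value

for block_size in (64, 128, 256, 4096):
    num_blocks = GUEST_MEMORY // block_size
    hash_store = num_blocks * HASH_SIZE
    print("block=%5dB  blocks=%8d  hash store=%6.1f MB (%4.1f%% of memory)"
          % (block_size, num_blocks, hash_store / 2.0**20,
             100.0 * hash_store / GUEST_MEMORY))
}}}

Under these assumptions, 64-byte blocks alone would need a hash store equal to roughly a quarter of the guest memory, which is the kind of overhead that block sharing and hybrid compression aim to reduce.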
    4544
    4645== Downtime Evaluations ==
     
    4948                Figure 2. Type I Downtime comparison under different benchmarks.
    5049
    51 Figures 2a, 2b, 2c, and 2d show the type I downtime com-
    52 parison among [wiki:FGBI FGBI], [wiki:LLM LLM], and [http://nss.cs.ubc.ca/remus/ Remus] mechanisms under [http://httpd.apache.org/ Apache], [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP],
     50Figures 2a, 2b, 2c, and 2d show the type I downtime comparison among [wiki:FGBI FGBI], [wiki:LLM LLM], and [http://nss.cs.ubc.ca/remus/ Remus] mechanisms under [http://httpd.apache.org/ Apache], [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP],
    5351[http://www.spec.org/web2005/ SPECweb], and [http://www.spec.org/sfs97r1/ SPECsys] applications, respectively. The block size used in all
    5452experiments is 64 bytes. For [http://nss.cs.ubc.ca/remus/ Remus] and [wiki:FGBI FGBI], the checkpointing period is the
     
    6563worse than [http://nss.cs.ubc.ca/remus/ Remus]).
    6664
    67 Although [http://www.spec.org/web2005/ SPECweb] is a web workload, it still has a high page modifi-
    68 cation rate, which is approximately 12,000 pages/second. In our experi-
    69 ment, the 1 Gbps migration link is capable of transferring approximately 25,000
     65Although [http://www.spec.org/web2005/ SPECweb] is a web workload, it still has a high page modification rate, which is approximately 12,000 pages/second. In our experiment, the 1 Gbps migration link is capable of transferring approximately 25,000
    7066pages/second. Thus, [http://www.spec.org/web2005/ SPECweb] is not a lightweight computational workload for
    7167these migration mechanisms. As a result, the relationship between [wiki:FGBI FGBI] and
     
    8278Table 1 shows the type II downtime comparison among
    8379[http://nss.cs.ubc.ca/remus/ Remus], [wiki:LLM LLM], and [wiki:FGBI FGBI] mechanisms under different applications. We have three
    84 main observations: (1) Their downtime results are very similar for "idle" run.
    85 This is because [http://nss.cs.ubc.ca/remus/ Remus] is a fast checkpointing mechanism and both [wiki:LLM LLM] and [wiki:FGBI FGBI] are based on it. There is rare memory update for "idle" run, so the type
    86 II downtime in all three mechanisms is short. (2) When running [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP] ap-
    87 plication, the guest VM memory is updated at high frequency. When saved for
    88 the checkpoint, [wiki:LLM LLM] takes much more time to save huge "dirty" data caused
    89 by its low memory transfer frequency. Therefore in this case [wiki:FGBI FGBI] achieves a
    90 much lower downtime than [http://nss.cs.ubc.ca/remus/ Remus] (reduce more than 70%) and [wiki:LLM LLM] (more
    91 than 90%). (3) When running [http://httpd.apache.org/ Apache] application, the memory update is not so
    92 much as that when running [http://www.nas.nasa.gov/Resources/Software/npb.html NPB], but the memory update is definitely more than
    93 "idle" run. The downtime results shows [wiki:FGBI FGBI] still outperforms both [http://nss.cs.ubc.ca/remus/ Remus] and
     80main observations. First, the downtime results are very similar for the idle run case.
      81This is because [http://nss.cs.ubc.ca/remus/ Remus] is a fast checkpointing mechanism and both [wiki:LLM LLM] and [wiki:FGBI FGBI] are based on it. Memory updates are rare during idle runs, so the type II downtime in all three mechanisms is short and similar. Second, when running the [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP] application, the guest VM memory is updated at high frequency. When saving the checkpoint, [wiki:LLM LLM] takes much more time to save the large amount of "dirty" data caused
      82by its low memory transfer frequency. Therefore, in this case, [wiki:FGBI FGBI] achieves a
      83much lower downtime than [http://nss.cs.ubc.ca/remus/ Remus] (reduction is more than 70%) and [wiki:LLM LLM] (reduction is more
      84than 90%). Finally, when running the [http://httpd.apache.org/ Apache] application, the memory update rate is not as
      85high as when running [http://www.nas.nasa.gov/Resources/Software/npb.html NPB], but it is significantly higher than during the idle run. The downtime results show that [wiki:FGBI FGBI] still outperforms both [http://nss.cs.ubc.ca/remus/ Remus] and
    9486[wiki:LLM LLM].
    9587
     
    9789[[Image(figure3.jpg)]]
    9890
    99     Figure 3. (a) Overhead under dierent block size; (b) Comparison of proposed techniques.
      91    Figure 3. (a) Overhead under different block sizes. (b) Comparison of proposed techniques.
    10092
    10193Figure 3a shows the overhead during VM migration. The figure compares the
    10294applications' runtime with and without migration, under [http://httpd.apache.org/ Apache], [http://www.spec.org/web2005/ SPECweb],
    103 [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP], and [http://www.spec.org/sfs97r1/ SPECsys], with the size of the fine-grained blocks varies from 64
    104 bytes to 128 bytes and 256 bytes. We observe that in all cases the overhead is
    105 low, no more than 13% ([http://httpd.apache.org/ Apache] with 64 bytes block). As we discuss in Section 3,
      95[http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP], and [http://www.spec.org/sfs97r1/ SPECsys], with fine-grained block sizes of 64, 128, and 256 bytes. We observe that, in all cases, the overhead is
      96low, no more than 13% ([http://httpd.apache.org/ Apache] with a 64-byte block). As discussed before,
    10697the smaller the block size that [wiki:FGBI FGBI] chooses, the greater is the memory overhead
    10798that it introduces. In our experiments, the smallest block size that we chose is 64
    10899bytes, so this is the worst case overhead compared with the other block sizes.
    109 Even in this "worst" case, under all these benchmarks, the overhead is less than
     100Even in this "worst" case, under all the benchmarks, the overhead is less than
    1101018.21%, on average.
    111102
    112103In order to understand the respective contributions of the three proposed
    113 techniques (i.e., [wiki:FGBI FGBI], sharing, and compression), Figure 3b shows the break-
    114 down of the performance improvement among them under the [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP] bench-
    115 mark. It compares the downtime between integrated [wiki:FGBI FGBI] (which we use for
    116 evaluation in this Section), [wiki:FGBI FGBI] with sharing but no compression support,
      104techniques (i.e., [wiki:FGBI FGBI], sharing, and compression), Figure 3b shows the break-
     105down of the performance improvement among them under the [http://www.nas.nasa.gov/Resources/Software/npb.html NPB-EP] benchmark. The figure compares the downtime between integrated [wiki:FGBI FGBI] (which we use for
     106evaluation here), [wiki:FGBI FGBI] with sharing but no compression support,
    117107[wiki:FGBI FGBI] with compression but no sharing support, and [wiki:FGBI FGBI] with neither sharing nor
    118108compression support. As previously discussed,
     
    125115integrating sharing or compression support, the downtime is reduced, compared
    126116with that of [http://nss.cs.ubc.ca/remus/ Remus] in Figure 3b, but it is not significant (reduction is no more
    127 than twenty percent). However, compared with [wiki:FGBI FGBI] with no support, after integrating hybrid compression, [wiki:FGBI FGBI] further reduces the downtime, by as much
    128 as 22%. We also obtain a similar benefit after adding the sharing support (down-
     117than twenty percent). However, compared with [wiki:FGBI FGBI] with no support, after integrating hybrid compression, [wiki:FGBI FGBI] further reduces the downtime, by as much as 22%. We also obtain a similar benefit after adding the sharing support (down-
    129118time reduction is a further 26%). If we integrate both sharing and compression
    130119support, the downtime is reduced by as much as 33%, compared to [wiki:FGBI FGBI] without