= FGBI =
Figure 1. Primary-Backup model and the downtime problem.

Downtime is the primary factor in estimating the high availability of a system, since any long downtime experienced by clients may result in a loss of client loyalty and thus a loss of revenue. Under the Primary-Backup model (Figure 1), there are two types of downtime: I) the time from when the primary host crashes until the VM resumes from the last checkpointed state on the backup host and starts to handle client requests (D1 = T3 - T1); and II) the time from when the VM is paused on the primary host (to save the checkpoint) until it resumes (D2). From Jiang's paper we observe that for memory-intensive workloads running on guest VMs (such as the highSys workload), [wiki:LLM LLM] endures a much longer type I downtime than Remus. This is because such workloads update the guest memory at high frequency, whereas [wiki:LLM LLM] migrates the guest VM image updates (mostly from memory) at low frequency and relies on input replay as an auxiliary mechanism. When a failure happens, a significant number of memory updates are therefore needed to bring the backup host back into synchronization with the primary, and the input replay process takes significantly more time before the VM can resume on the backup host and begin handling client requests.

Regarding the type II downtime, there are several migration epochs between two checkpoints, and the newly updated memory data is copied to the backup host at each epoch. At the last epoch, the VM running on the primary host is suspended and the remaining memory state is transferred to the backup host. Thus, the type II downtime depends on the amount of memory that remains to be copied and transferred when the VM is paused on the primary host. If we reduce the dirty data that must be transferred at the last epoch, we reduce the type II downtime. Moreover, if we reduce the dirty data transferred at each epoch, keeping the memory state of the primary and backup hosts synchronized throughout, then little new memory update remains to be transferred at the last epoch, and we reduce the type I downtime as well.
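To make these relationships concrete, the following is a minimal back-of-the-envelope model of the two downtime types. The link bandwidth, failure-detection delay, replay rate, and data sizes below are illustrative assumptions, not values taken from the evaluation.

{{{#!python
# Illustrative model of the two downtime types under the Primary-Backup model.
# Every constant below is an assumption chosen only to make the arithmetic visible.

LINK_BANDWIDTH = 125_000_000   # bytes/s, assuming a 1 Gbps migration link

def type_ii_downtime(remaining_dirty_bytes, pause_resume_overhead=0.005):
    """Type II (D2): the VM is paused on the primary while the memory still
    dirty at the last epoch is saved and transferred to the backup."""
    return remaining_dirty_bytes / LINK_BANDWIDTH + pause_resume_overhead

def type_i_downtime(unsynchronized_bytes, replay_rate, detection_delay=0.1):
    """Type I (D1 = T3 - T1): after the primary crashes, the backup replays
    buffered inputs to regenerate the memory updates it never received,
    then resumes the VM and starts serving client requests."""
    return detection_delay + unsynchronized_bytes / replay_rate

# Shrinking the dirty data left at the last epoch shrinks D2 ...
print(type_ii_downtime(64 * 2**20))   # ~0.54 s with 64 MB still dirty
print(type_ii_downtime(4 * 2**20))    # ~0.04 s with 4 MB still dirty
# ... and keeping the backup closely synchronized shrinks the replay work, i.e. D1.
print(type_i_downtime(256 * 2**20, replay_rate=50 * 2**20))   # ~5.2 s behind
print(type_i_downtime(16 * 2**20, replay_rate=50 * 2**20))    # ~0.4 s behind
}}}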
== FGBI Design ==

Therefore, in order to achieve HA in these virtualized systems, and especially to address the downtime problem under memory-intensive workloads, we propose a memory synchronization technique for tracking memory updates, called Fine-Grained Block Identification (or FGBI). Remus and [wiki:LLM LLM] track memory updates by keeping evidence of the dirty pages at each migration epoch. Remus uses the same page size as Xen (for x86, this is 4KB), which is also the granularity for detecting memory changes. However, this […]
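The elided portion of this section describes how FGBI identifies updates at a granularity finer than a 4KB page (the experiments below use 64-byte blocks). As a rough sketch of the general idea of block-granularity dirty identification, the code below keeps one digest per block of a dirty page and marks for transfer only the blocks whose digests changed. The hashing scheme, digest choice, block size constant, and function names are assumptions made for illustration; they are not FGBI's actual implementation.

{{{#!python
# Illustrative sketch of block-granularity dirty identification (not FGBI's
# actual implementation): instead of resending a whole 4KB dirty page, hash
# each fixed-size block of the page and resend only the blocks that changed.
import hashlib

PAGE_SIZE = 4096   # Xen page size on x86
BLOCK_SIZE = 64    # block size used in the experiments below (assumed here)

def block_hashes(page: bytes):
    """Return one digest per BLOCK_SIZE block of a page."""
    return [hashlib.sha1(page[i:i + BLOCK_SIZE]).digest()
            for i in range(0, PAGE_SIZE, BLOCK_SIZE)]

def dirty_blocks(page: bytes, previous_hashes):
    """Compare against the digests recorded at the previous epoch and return
    (offset, data) pairs for only the blocks that actually changed."""
    current = block_hashes(page)
    changed = [(i * BLOCK_SIZE, page[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE])
               for i, (old, new) in enumerate(zip(previous_hashes, current))
               if old != new]
    return changed, current

# Example: a page marked dirty by the hypervisor, but with only 8 bytes modified.
old_page = bytes(PAGE_SIZE)
new_page = bytearray(old_page)
new_page[100:108] = b"modified"

prev = block_hashes(old_page)
changed, prev = dirty_blocks(bytes(new_page), prev)
print(len(changed), "block(s) to transfer instead of one full 4KB page")   # -> 1
}}}

The point of tracking at this granularity is that a page marked dirty by the hypervisor no longer forces a full 4KB transfer when only a few bytes of it changed, which keeps the per-epoch and last-epoch dirty data small, exactly the property the downtime argument above relies on.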
[…]

Figures 2a, 2b, 2c, and 2d show the type I downtime comparison among the FGBI, [wiki:LLM LLM], and Remus mechanisms under the Apache, NPB-EP, SPECweb, and SPECsys applications, respectively. The block size used in all experiments is 64 bytes. For Remus and FGBI, the checkpointing period is the time interval of system update migration, whereas for [wiki:LLM LLM] the checkpointing period represents the interval of network buffer migration. By configuring the same value for the checkpointing frequency of Remus/FGBI and the network buffer frequency of [wiki:LLM LLM], we ensure the fairness of the comparison. We observe that Figures 2a and 2b show opposite relationships between FGBI and [wiki:LLM LLM]. Under Apache (Figure 2a), the network load is high but system updates are rare. Therefore, [wiki:LLM LLM] performs better than FGBI, since it migrates the network service requests at a much higher frequency. On the other hand, when running memory-intensive applications (Figures 2b and 2d), which involve high computational loads, [wiki:LLM LLM] endures a much longer downtime than FGBI (even worse than Remus).

[…] pages/second. Thus, SPECweb is not a lightweight computational workload for these migration mechanisms. As a result, the relationship between FGBI and [wiki:LLM LLM] in Figure 2c is more similar to that in Figures 2b and 2d than to that in Figure 2a. In conclusion, compared with [wiki:LLM LLM], FGBI reduces the downtime by as much as 77%. Moreover, compared with Remus, FGBI yields a shorter downtime: by as much as 31% under Apache, 45% under NPB-EP, 39% […]

Table 1 shows the type II downtime comparison among the Remus, [wiki:LLM LLM], and FGBI mechanisms under different applications. We have three main observations: (1) The downtime results of the three mechanisms are very similar for the "idle" run. This is because Remus is a fast checkpointing mechanism and both [wiki:LLM LLM] and FGBI are based on it; memory updates are rare during the "idle" run, so the type II downtime of all three mechanisms is short. (2) When running the NPB-EP application, the guest VM memory is updated at high frequency. When saving the checkpoint, [wiki:LLM LLM] takes much more time to save the large amount of "dirty" data caused by its low memory transfer frequency. In this case, therefore, FGBI achieves a much lower downtime than Remus (a reduction of more than 70%) and [wiki:LLM LLM] (more than 90%). (3) When running the Apache application, the memory update rate is not as high as with NPB, but it is clearly higher than in the "idle" run. The downtime results show that FGBI still outperforms both Remus and [wiki:LLM LLM].

== Overhead ==
[[Image(figure3.jpg)]]

Figure 3. (a) Overhead under different block sizes; (b) Comparison of proposed techniques.

Figure 3a shows the overhead during VM migration. The figure compares the […]

[…] since NPB-EP is a memory-intensive workload, it should present a clear difference among the three techniques, all of which focus on reducing the memory-related overhead. We do not include the downtime of [wiki:LLM LLM] here, since for this compute-intensive benchmark [wiki:LLM LLM] incurs a very long downtime, which is more than 10 times the downtime that FGBI incurs.
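As a side note on the block-size tradeoff examined in Figure 3a: a smaller block reduces the dirty data sent per epoch but increases the per-page tracking state. The estimate below assumes one 20-byte digest per block, in line with the illustrative sketch in the design section above; the digest size and the candidate block sizes are assumptions, not FGBI's actual bookkeeping.

{{{#!python
# Rough estimate of the per-page tracking cost implied by different block sizes,
# assuming one 20-byte digest (e.g. SHA-1) is kept per block. This is an
# illustrative assumption, not a description of FGBI's metadata layout.
PAGE_SIZE = 4096
DIGEST_BYTES = 20

for block_size in (32, 64, 128, 256, 1024, 4096):
    blocks_per_page = PAGE_SIZE // block_size
    metadata = blocks_per_page * DIGEST_BYTES
    print(f"block={block_size:5d}B  blocks/page={blocks_per_page:4d}  "
          f"metadata/page={metadata:5d}B  ({100 * metadata / PAGE_SIZE:.1f}% of page)")
}}}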