= [wiki:LLM LLM] =

[http://nss.cs.ubc.ca/remus/ Remus] is a virtual machine (VM) live migration technique for application-level fault recovery. The technique is based on checkpointing VM states at high frequency, which introduces significant overhead: non-trivial CPU cycles and memory are consumed by the migration process. This can result in long delays, i.e., long downtimes, for servicing client requests. On the other hand, if the VM states are migrated at low frequency (to reduce the overhead), many client requests may be serviced in duplicate. In addition, this may increase the downtime of new service requests issued after the duplicately served ones. To solve this problem, building on the checkpointing approach of [http://nss.cs.ubc.ca/remus/ Remus], we have developed an integrated live migration mechanism, called Lightweight Live Migration ([wiki:LLM LLM]), which combines whole-system checkpointing with input replay. For a full description and evaluation, please see our [wiki:Publications SSS'10] paper.

== [wiki:LLM LLM]'s Architecture ==

{{{
#!html

Figure 1. LLM Architecture.

}}}

The architecture of [wiki:LLM LLM] is shown in Figure 1. Beyond [http://nss.cs.ubc.ca/remus/ Remus], we also migrate the changes in the network driver buffers. The process works as follows:

 * 1) First, on the primary machine, we set up the mapping between the ingress buffer and the egress buffer, signifying which packets are generated in response to which service request(s), and which requests are yet to be served. Moreover, [wiki:LLM LLM] hooks a copy of each ingress service request.
 * 2) Second, at each migration pause, [wiki:LLM LLM] migrates the hooked copies as well as the boundary information to the backup machine asynchronously, using the same migration socket as the one used by [http://nss.cs.ubc.ca/remus/ Remus] for CPU/memory status updates and file system writes.
 * 3) Third, all the migrated service requests are buffered in a queue in the "merge" module. Buffered requests that have already been served are removed based on the migrated boundary information. Once a failure on the primary machine breaks the migration data stream, the backup machine recovers the migrated memory image and merges the outstanding service requests into the corresponding driver buffers.

== Asynchronous Network Buffer Migration in [wiki:LLM LLM] ==

[http://nss.cs.ubc.ca/remus/ Remus] uses checkpointing to migrate the ever-changing updates of CPU/memory/disk to the backup machine. The migration occurs in a burst only at the beginning of each checkpointing cycle, after the guest VM resumes. Most of the time, no traffic flows through the network connection between the primary and the backup machines. During this interval, we can migrate the service requests at a higher frequency than the checkpointing frequency. Like the migration of CPU/memory/disk updates, the migration of service requests is done asynchronously, i.e., the primary machine resumes its service without waiting for an acknowledgement from the backup machine.
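The queue-and-prune behavior of the "merge" module described above can be sketched as follows. This is a minimal illustration, not [wiki:LLM LLM]'s actual code: the class and method names are ours, and we assume the boundary information is summarized by a single "first unserved" sequence number.

```python
from collections import deque

class MergeModule:
    """Sketch of the backup-side "merge" module (illustrative names).

    Hooked copies of ingress requests arrive over the migration socket,
    each tagged with a sequence number. Boundary updates tell the module
    which requests the primary has already served, so they can be dropped.
    """

    def __init__(self):
        self.pending = deque()  # (seq, payload) pairs, oldest first

    def add_request(self, seq, payload):
        # A hooked copy of an ingress service request is migrated
        # asynchronously and buffered here.
        self.pending.append((seq, payload))

    def apply_boundary(self, first_unserved_seq):
        # Drop requests the primary has already served, per the
        # migrated boundary information.
        while self.pending and self.pending[0][0] < first_unserved_seq:
            self.pending.popleft()

    def requests_to_replay(self):
        # On failover, these outstanding requests are merged into the
        # backup machine's driver buffers and replayed.
        return [seq for seq, _ in self.pending]
```

For example, if requests 1 through 4 have been migrated and the boundary says requests before 3 were already served, only requests 3 and 4 remain to be replayed by the backup.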
[[Image(figure2.jpg)]]

Figure 2. Checkpointing Sequence.

Figure 2 shows the time sequence of migrating the checkpointed resources and the incoming service requests at different frequencies on a single network socket. The entire sequence within an epoch is as follows:

 * 1) The dashed blocks represent the suspension period when the guest VM is paused. During this period, all the status updates of CPU/memory/disk are collected and stored in a migration buffer.
 * 2) Once the guest VM resumes, the content of the migration buffer is migrated first (shown as the block shaded area adjacent to the dashed area in the figure).
 * 3) Subsequently, network buffer migration runs at a high frequency until the guest VM is suspended again. At the end of each network buffer migration cycle (the thin, shaded strips in the figure), [wiki:LLM LLM] transmits two boundary sequence numbers: one for the first service request in the current checkpointing period, and one for the first service request whose completion flag is "False". All requests after the first boundary must be replayed on the backup machine for consistency, but only those after the second boundary need to have responses sent to the clients. If there are no new requests, [wiki:LLM LLM] transmits only the boundary sequence numbers.

== Benchmarks and Measurements ==

We used three network applications to evaluate the downtime, network delay, and overhead of [wiki:LLM LLM] and [http://nss.cs.ubc.ca/remus/ Remus]:

 * 1) Example 1 (highnet): The first example is a flood ping with an interval of 0.01 seconds, with no significant computation task running on domain U. Thus, the network load is extremely high, but the system updates are not significant. We call this "highnet" to indicate the intensity of the network load.
 * 2) Example 2 (highsys): In the second example, we designed a simple application to taint 200 pages (4 KB per page on our platform) per second, with no service requests from external clients. Therefore, this example involves a significant computational workload on domain U. The name "highsys" reflects the intensity of the system updates.
 * 3) Example 3 (Kernel Compilation): We used kernel compilation as the third example, which exercises all the components of a system, including CPU/memory/disk updates. We used Linux kernel 2.6.18, as shipped with Xen, directly. Given the limited resources on domain U, we reduced the configuration to a small subset in order to reduce the time required to run each experiment.

== Evaluation Results ==

[[Image(figure3.jpg)]]

Figure 3. Downtime under highnet and highsys.

Figure 3 shows the downtime results under highnet and highsys. We observe that under highsys, [wiki:LLM LLM] incurs a downtime that is longer than, yet comparable to, that of [http://nss.cs.ubc.ca/remus/ Remus]. The reason is that [wiki:LLM LLM] checkpoints at a low frequency, so the migration traffic in each period is higher than that of [http://nss.cs.ubc.ca/remus/ Remus]. Under highnet, the relationship is reversed: [wiki:LLM LLM] outperforms [http://nss.cs.ubc.ca/remus/ Remus]. This is because, from the client's perspective, [http://nss.cs.ubc.ca/remus/ Remus] requires too many duplicated packets to be served again by the backup machine. In [wiki:LLM LLM], by contrast, the primary machine migrates the request packets as well as the boundaries to the backup machine, so only the packets yet to be served are served by the backup. Thus the client does not need to re-transmit its requests, and therefore experiences a shorter downtime.

[[Image(figure4.jpg)]]

Figure 4. Network Delay under highnet and highsys.

Figure 4 shows the network delay results under highnet and highsys.
In both cases, we observe that [wiki:LLM LLM] significantly reduces the network delay by removing the egress queue management and releasing responses immediately. In Figure 4, we recorded only the average network delay in each migration period. Next, we examine the details of the network delay within a specific migration period, in which the interval between two adjacent peak values represents one migration period. We observe that the network delay of [http://nss.cs.ubc.ca/remus/ Remus] decreases roughly linearly within a period but remains at a high plateau. In [wiki:LLM LLM], by contrast, the network delay is very high at the beginning of a period, then quickly drops to nearly zero once a system update is over. Therefore, most of the time, [wiki:LLM LLM] exhibits a much shorter network delay than [http://nss.cs.ubc.ca/remus/ Remus].

[[Image(figure5.jpg)]]

Figure 5. Overhead under Kernel Compilation.

Figure 5 shows the overhead under kernel compilation. The overhead changes significantly only when the checkpointing period lies in the interval of [1, 60] seconds, as shown in the figure. For shorter checkpointing periods, the migration of system updates may last longer than the configured checkpointing period; therefore, the kernel compilation times for these cases are almost the same, with minor fluctuation. For longer checkpointing periods, especially when the period is longer than the baseline (i.e., kernel compilation without any checkpointing), a VM suspension may or may not occur during one compilation run; therefore, the kernel compilation time is very close to the baseline, meaning a zero percent overhead. Within this interval, [wiki:LLM LLM]'s overhead due to the suspension of domain U is significantly lower than that of [http://nss.cs.ubc.ca/remus/ Remus], as it runs at a much lower frequency than [http://nss.cs.ubc.ca/remus/ Remus].
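The three overhead regimes argued above can be summarized in a small sketch. The function and parameter names are ours, and the thresholds encode only the qualitative reasoning in the text, not measured values:

```python
def checkpoint_overhead_regime(period_s, migration_s, baseline_s):
    """Qualitative overhead regimes for kernel compilation under
    checkpointing (illustrative sketch of the reasoning above).

    period_s    - configured checkpointing period
    migration_s - time needed to migrate one round of system updates
    baseline_s  - compilation time without any checkpointing
    """
    if period_s <= migration_s:
        # The migration of system updates outlasts the configured period:
        # compilation time is roughly constant regardless of the period.
        return "saturated"
    if period_s >= baseline_s:
        # At most one VM suspension per compilation run: compilation time
        # approaches the baseline, i.e., near-zero overhead.
        return "negligible"
    # In between, the overhead varies significantly with the period; this
    # is where LLM's lower checkpointing frequency pays off against Remus.
    return "transition"
```

For instance, with a one-second migration burst and a baseline compile of 3000 seconds, a 0.5-second period is "saturated", a 30-second period is in the "transition" regime, and a one-hour period is "negligible".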