= LLM =
The architecture of [wiki:LLM LLM] is shown in Figure 1. Beyond [http://nss.cs.ubc.ca/remus/ Remus], we also migrate the changes in the network driver buffers. The process works as follows (see the sketch after this list):

 * 1) First, on the primary machine, we set up the mapping between the ingress buffer and the egress buffer, recording which packets are generated in response to which service request(s), and which requests are yet to be served. Moreover, [wiki:LLM LLM] hooks a copy of each ingress service request.

 * 2) Second, at each migration pause, [wiki:LLM LLM] migrates the hooked copies as well as the boundary information to the backup machine asynchronously, using the same migration socket as the one used by [http://nss.cs.ubc.ca/remus/ Remus] for CPU/memory status updates and writes to the file system.

 * 3) Third, all the migrated service requests are buffered in a queue in the “merge” module. Buffered requests that have already been served are removed based on the migrated boundary information. Once a failure on the primary machine breaks the migration data stream, the backup machine recovers the migrated memory image and merges the service requests into the corresponding driver buffers.
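Below is a minimal, illustrative sketch of the primary-side bookkeeping described above: hooking a copy of each ingress service request, mapping it to the egress packets generated for it, and tracking which requests are yet to be served. The names (`RequestRecord`, `IngressTracker`, and their methods) are hypothetical stand-ins for [wiki:LLM LLM]'s actual driver-buffer structures, not code from the implementation.

{{{#!python
# A hypothetical sketch (not LLM's code) of the primary-side bookkeeping:
# a hooked copy of every ingress request, its mapping to egress packets,
# and a per-request completion flag.
from dataclasses import dataclass, field


@dataclass
class RequestRecord:
    seq: int                  # sequence number assigned to the ingress request
    payload: bytes            # hooked copy of the raw service request
    egress_packets: list = field(default_factory=list)  # packets generated for it
    served: bool = False      # completion flag; False until the response went out


class IngressTracker:
    """Mapping between the ingress buffer and the egress buffer."""

    def __init__(self):
        self.requests = {}    # seq -> RequestRecord
        self.next_seq = 0

    def hook_ingress(self, payload: bytes) -> int:
        """Keep a copy of every incoming service request."""
        seq = self.next_seq
        self.next_seq += 1
        self.requests[seq] = RequestRecord(seq, payload)
        return seq

    def map_egress(self, seq: int, packet: bytes) -> None:
        """Record which egress packets were generated for which request."""
        self.requests[seq].egress_packets.append(packet)

    def mark_served(self, seq: int) -> None:
        """Flip the completion flag once the response has been sent."""
        self.requests[seq].served = True

    def unserved(self):
        """Requests that are yet to be served."""
        return [r for r in self.requests.values() if not r.served]
}}}

At each migration pause, the hooked copies and the boundary information derived from this state are what travel over the same migration socket that Remus uses for CPU/memory status updates.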
== Asynchronous Network Buffer Migration in [wiki:LLM LLM] ==

…

Figure 2 shows the time sequence of migrating the checkpointed resources and the incoming service requests at different frequencies over a single network socket. The entire sequence within an epoch is described as follows (see the sketch after this list):

 * 1) The dashed blocks represent the suspension period when the guest VM is paused. During this suspension period, all the status updates of CPU/memory/disk are collected and stored in a migration buffer.

 * 2) Once the guest VM is resumed, the content stored in the migration buffer is migrated first (shown as the block-shaded area adjacent to the dashed area in the figure).

 * 3) Subsequently, the network buffer migration starts at a high frequency until the guest VM is suspended again. At the end of each network buffer migration cycle (the thin, shaded strips in the figure), [wiki:LLM LLM] transmits two boundary sequence numbers: one for the first service request in the current checkpointing period, and the other for the first service request whose completion flag is still “False”. All the services after the first boundary need to be replayed on the backup machine for consistency, but only those after the second boundary still need responses sent to the clients. If there are no new requests, [wiki:LLM LLM] transmits only the boundary sequence numbers.
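The following is a hypothetical sketch of what one network buffer migration cycle might carry and how the backup-side “merge” queue could consume the two boundary sequence numbers. The message layout, field names, and the `MergeQueue` class are illustrative assumptions, not [wiki:LLM LLM]'s actual wire format.

{{{#!python
# A hypothetical sketch of a per-cycle update and of the backup-side merge
# queue that consumes it; names and layout are assumptions, not LLM's code.

def cycle_update(new_requests, boundary_epoch, boundary_unserved):
    """One network buffer migration cycle.

    new_requests      - (seq, payload) pairs hooked since the previous cycle;
                        may be empty, in which case only the boundaries are sent
    boundary_epoch    - seq of the first request in the current checkpointing
                        period (everything from here on must be replayed)
    boundary_unserved - seq of the first request whose completion flag is still
                        False (only these still need responses to the clients)
    """
    return {
        "requests": list(new_requests),
        "boundary_epoch": boundary_epoch,
        "boundary_unserved": boundary_unserved,
    }


class MergeQueue:
    """Backup-side queue of migrated service requests."""

    def __init__(self):
        self.queue = {}              # seq -> payload
        self.boundary_unserved = 0

    def apply(self, update):
        for seq, payload in update["requests"]:
            self.queue[seq] = payload
        # Requests older than the current checkpointing period are covered by
        # an already-committed checkpoint, so they are dropped from the queue.
        for seq in [s for s in self.queue if s < update["boundary_epoch"]]:
            del self.queue[seq]
        self.boundary_unserved = update["boundary_unserved"]

    def on_failover(self):
        """Replay everything still queued; answer only the unserved tail."""
        replay = sorted(self.queue)
        respond = [s for s in replay if s >= self.boundary_unserved]
        return replay, respond
}}}

On failover, every queued request is merged back into the corresponding driver buffers and replayed for consistency, while responses are generated only from the second boundary onward; this is exactly the split the two sequence numbers encode.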
== Benchmarks and Measurements ==

We used three network applications to evaluate the downtime, network delay, and overhead of [wiki:LLM LLM] and [http://nss.cs.ubc.ca/remus/ Remus]:

 * 1) Example 1 (highnet): The first example is a flood ping with an interval of 0.01 seconds, with no significant computation task running on domain U. Thus, the network load is extremely high, but the system updates are not significant. We call this “highnet” to indicate the intensity of the network load.

 * 2) Example 2 (highsys): In the second example, we designed a simple application that taints 200 pages (4 KB per page on our platform) per second, with no service requests from external clients. Therefore, this example involves a significant computational workload on domain U. The name “highsys” reflects the intensity of the system updates.

 * 3) Example 3 (kernel compilation): We used kernel compilation as the third example, which exercises all the components of a system, including CPU/memory/disk updates. We directly used the Linux kernel 2.6.18 that ships with Xen. Given the limited resources on domain U, we reduced the configuration to a small subset in order to reduce the time required to run each experiment.

== Evaluation Results ==