Changes between Version 7 and Version 8 of LLM
Timestamp: 10/04/11 02:36:30 (13 years ago)
LLM
We utilized three network application examples to evaluate the downtime, network delay, and overhead of LLM and Remus:

1) Example 1 (highnet): The first example is flood ping [23] with an interval of 0.01 second, and there is no significant computation task running on domain U. In this case, the network load is extremely high, but the system updates are not significant. We named it “highnet” to signify the intensity of the network load (a sketch of such a load generator is given after this list).

2) Example 2 (highsys): In the second example, we designed a simple application to taint 200 pages (4 KB per page on our platform) per second, and there are no service requests from external clients. Therefore, this example places a heavy computation workload on domain U. The name “highsys” reflects the intensity of its system updates (a sketch of such a page-tainting loop is given after this list).

3) Example 3 (Kernel Compilation): We used kernel compilation as the third example, which exercises all the components in a system, including CPU, memory, and disk updates. We directly used Linux kernel 2.6.18, which ships as part of Xen. Given the limited resources on domain U, we cut the kernel configuration down to a small subset in order to reduce the time required to run each experiment.
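To make the highnet setup concrete, the sketch below drives a flood ping at a 0.01 s interval against domain U. This is not the authors' tooling: it is a minimal wrapper around the iputils ping command, and the target address is a placeholder, not the address used in the experiments.

{{{
#!c
/* Hypothetical driver for the "highnet" workload: flood ping at a
 * 0.01 s interval against domain U.  The target address below is a
 * placeholder; flood mode and sub-0.2 s intervals normally need root. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *target = (argc > 1) ? argv[1] : "192.168.1.100"; /* placeholder domU IP */

    /* iputils ping: -f = flood, -i = inter-packet interval in seconds. */
    execlp("ping", "ping", "-f", "-i", "0.01", target, (char *)NULL);
    perror("execlp(ping)");   /* reached only if exec fails */
    return EXIT_FAILURE;
}
}}}

Run from the external client against domain U's address; the network load then dominates while the guest itself stays mostly idle, which is the point of this example.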
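For the highsys workload, the following is a minimal sketch of what such a page-tainting application could look like; it is an assumption, not the authors' code. It dirties 200 pages of 4 KB each once per second, with no network traffic, so the checkpointing mechanism has to track and transfer those pages.

{{{
#!c
/* Minimal sketch of a "highsys"-style page tainter: dirty 200 pages of
 * 4 KB each, once per second, with no network traffic.  Page count and
 * page size follow the text; everything else is an assumption. */
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE 4096
#define PAGES     200

int main(void)
{
    unsigned char *buf;
    unsigned char v = 0;

    /* One page-aligned buffer covering all 200 pages. */
    if (posix_memalign((void **)&buf, PAGE_SIZE, (size_t)PAGES * PAGE_SIZE) != 0)
        return EXIT_FAILURE;

    for (;;) {
        /* Write one byte into every page so each page is marked dirty
         * and must be re-sent by the memory checkpointing mechanism. */
        for (int i = 0; i < PAGES; i++)
            buf[(size_t)i * PAGE_SIZE] = v;
        v++;
        sleep(1);   /* 200 tainted pages per second */
    }
}
}}}

Writing a single byte per page is enough, since dirty tracking works at page granularity; the whole 4 KB page is treated as modified either way.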