increase in boot time between 2.1.4 and 2.1.11 #980
Hi @egernst. Could be... I had a quick look at the release notes (https://github.com/01org/cc-oci-runtime/releases) just to check, and nothing jumped out at me. We can try to reproduce to confirm. To check: how many times did you run the time test? It can be somewhat variable. You may wish to try https://github.com/01org/cc-oci-runtime/blob/master/tests/metrics/workload_time/docker_workload_time.sh.
We just started playing with Clear Containers and we noticed it's about 2x slower than Docker with runc. What should we expect? Based on this issue, it sounds like it may be expected to be closer to runc timings (before this regression), is that correct?
Hi @treeder, the last results that I saw were around 1.2-1.4 seconds.
Hi @treeder. It will be slower. We do have to do some extra work (get a VM up and running etc.). How much slower can depend on what workload you are running, what hardware you are on, and exactly what 'time' you are measuring (for instance, just the time the runtime is executing, or the time Docker is composing the image plus the time for the workload to get booted etc.).
@grahamwhaley we do have a use case that is speed sensitive: it's for a serverless/functions-as-a-service project. So lots of short-lived containers, the faster the better. Very small image sizes, no special features needed other than the extra security.
@treeder right, understood. As such, I would say it would first be best to do a POC showing the integration/flows give the results expected, with the knowledge that the launch times may be improvable if necessary. I will also note that at some point in the near future I expect us to have a round of optimisations on the CC3.x code base for both size and speed. We have discussed for some time whether we will/should provide multiple setups for both QEMU and the VM kernel/OS to suit different use cases. We have avoided doing that so far to reduce complexity and focus on other things, but in the future we may ship multiple 'configs'. Of course, that has other impacts on us, such as spreading our development out and expanding our testing requirements.
I noticed that time to boot has changed since 2.1.4. Unfortunately, PnP measurements weren't started until around the 2.1.5 release, and since then no major change has been identified. I would guess there is something between 2.1.4 and 2.1.5 that is adding considerably to boot time.
Note: in the output shared below, the only thing that was modified was the cc-oci-runtime version; the test was carried out on the same host.
$ cc-oci-runtime --version
cc-oci-runtime version: 2.1.4
$ time docker run --runtime=cor -itd alpine sh
03b6444d2526f50bd5f13ac4e0e1153de3beb0b85403ab708fb68644cf578107
real 0m0.546s
$ apt-get install cc-oci-runtime
$ cc-oci-runtime --version
cc-oci-runtime version: 2.1.11
$ time docker run --runtime=cor -itd alpine sh
f56f42c4e9052e40838d8aa2258c434aef5ff8952dfc2d18d81d3e19dc889e79
real 0m0.993s
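Since single-shot `time docker run` readings can vary (as noted above), a minimal sketch of averaging over repeated runs is below. This is not from the project's metrics scripts; `CMD` and `N` are placeholder variables (swap in e.g. `CMD="docker run --runtime=cor -itd alpine sh"`), and `date +%s%N` assumes GNU coreutils.

```shell
#!/bin/sh
# Sketch: run a command N times and report the mean wall-clock time in ms.
CMD="${CMD:-true}"   # placeholder command so the sketch runs anywhere
N="${N:-5}"          # number of repetitions to average over

total_ms=0
i=1
while [ "$i" -le "$N" ]; do
    start=$(date +%s%N)          # nanoseconds since epoch (GNU date)
    $CMD >/dev/null
    end=$(date +%s%N)
    total_ms=$(( total_ms + (end - start) / 1000000 ))
    i=$(( i + 1 ))
done
echo "mean over $N runs: $(( total_ms / N )) ms"
```

The per-release metrics script linked earlier (docker_workload_time.sh) does something similar with more care, so prefer it for results you intend to compare across versions.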