This repository has been archived by the owner on Mar 28, 2018. It is now read-only.

increase in boot time between 2.1.4 and 2.1.11 #980
Open

egernst opened this issue Jun 13, 2017 · 7 comments

egernst commented Jun 13, 2017

I noticed that time to boot has increased since 2.1.4. Unfortunately, PnP measurements weren't started until around the 2.1.5 release, and since then no major change has shown up in them, so I would guess something between 2.1.4 and 2.1.5 is adding considerably to boot time.

Note: in the output shared below, the only thing that was modified was the cc-oci-runtime version -- the test was carried out on the same host.

$ cc-oci-runtime --version
cc-oci-runtime version: 2.1.4
$ time docker run --runtime=cor -itd alpine sh
03b6444d2526f50bd5f13ac4e0e1153de3beb0b85403ab708fb68644cf578107
real 0m0.546s
$ apt-get install cc-oci-runtime
$ cc-oci-runtime --version
cc-oci-runtime version: 2.1.11
$ time docker run --runtime=cor -itd alpine sh
f56f42c4e9052e40838d8aa2258c434aef5ff8952dfc2d18d81d3e19dc889e79

real 0m0.993s

egernst commented Jun 13, 2017

@grahamwhaley

Hi @egernst. Could be... I had a quick look at the release notes (https://github.com/01org/cc-oci-runtime/releases) just to check, and nothing jumped out at me. We can try to reproduce to confirm. To check: how many times did you run the time test? It can be somewhat variable. You may wish to try https://github.com/01org/cc-oci-runtime/blob/master/tests/metrics/workload_time/docker_workload_time.sh, for example: bash ./workload_time/docker_workload_time.sh true busybox cor 10
Running that locally I see time variance from 1.00s to 1.43s (some of that does look like a cache-warming effect as well - the first few runs are slower than later ones).
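
For reference, a rough by-hand equivalent of that script (a sketch only - the image, runtime name and run count here are arbitrary choices, not project tooling) that times several runs so the first-run cache-warming effect is visible:

# Sketch: time several "docker run" invocations with the cor runtime
# and print each duration, so run-to-run variance stands out.
RUNTIME=cor
IMAGE=busybox
COUNT=10
for i in $(seq 1 "$COUNT"); do
    start=$(date +%s.%N)
    id=$(docker run --runtime="$RUNTIME" -itd "$IMAGE" sh)
    end=$(date +%s.%N)
    printf 'run %d: %.3fs\n' "$i" "$(echo "$end - $start" | bc)"
    docker rm -f "$id" > /dev/null
done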

egernst added the CC 2.1 label Jul 18, 2017

treeder commented Aug 18, 2017

We just started playing with Clear Containers and noticed it's roughly 2x slower than Docker with runc. What should we expect? Based on this issue, it sounds like it may be expected to be closer to runc timings (before this regression); is that correct?

@MarioCarrilloA (Contributor)

Hi @treeder, the last results that I saw were around 1.2 - 1.4 seconds.

@grahamwhaley

Hi @treeder. It will be slower. We do have to do some extra work (getting a VM up and running, etc.). How much slower depends on what workload you are running, what hardware you are on, and exactly what 'time' you are measuring (for instance, just the time the runtime is executing, or the time docker takes to compose the image plus the time for the workload to get booted, etc.).
Taking all those things into account, it would not be unusual to see on the order of 500ms of overhead.
Generally we have to make trade-offs between features and speed (and size - the rule of three...).
If you happen to have a usage scenario that is more speed sensitive than feature or size sensitive, then we'd be interested to discuss it :-)
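
As a rough illustration of those different measurement boundaries (a sketch, not the project's metrics tooling), you can compare the overall wall-clock time of docker run against the creation/start timestamps Docker itself records for the container:

# Sketch: overall wall time vs. Docker's own Created/StartedAt timestamps.
id=$(time docker run --runtime=cor -itd alpine sh)
docker inspect -f '{{.Created}} -> {{.State.StartedAt}}' "$id"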


treeder commented Aug 22, 2017

@grahamwhaley we do have a use case that is speed sensitive: it's for a serverless/functions-as-a-service project, so lots of short-lived containers - the faster the better. Very small image sizes, and no special features needed other than the extra security.

@grahamwhaley

@treeder right, understood.
It is possible to modify both qemu and the small kernel/OS that lives inside the VM to optimise them for a specific use case. I would call this an 'advanced user' use case.
There is a helper package to build the kernel/OS over here: https://github.com/clearcontainers/osbuilder
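
As a rough illustration of how a custom kernel/OS would be wired in (the path and key names below are assumptions based on the CC 3.x runtime's TOML configuration, so check the runtime's documentation rather than taking this as-is):

# /usr/share/defaults/clear-containers/configuration.toml (sketch only;
# path and key names are assumptions - verify against clearcontainers/runtime docs)
[hypervisor.qemu]
kernel = "/usr/share/clear-containers/custom-vmlinuz"
image = "/usr/share/clear-containers/custom-containers.img"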

As such, I would say it would be best first to do a PoC showing that the integration/flows give the results you expect, keeping in mind that the launch times may be improvable if necessary.

I will also note that at some point in the near future I expect us to have a round of optimisations on the CC3.x code base, for both size and speed:
https://github.com/clearcontainers/runtime

We have discussed for some time whether we will/should provide multiple setups for both QEMU and the VM kernel/OS to suit different use cases. We have avoided doing that so far to reduce complexity and focus on other things, but in the future we may ship multiple 'configs'. Of course, that has other impacts on us, such as spreading our development out and expanding our testing requirements.
