Replies: 2 comments 1 reply
-
Hmm, I guess you already tried some larger instance types to get 25 Gbit/s instead of "up to 10 Gbit/s" network bandwidth? Especially with an ECR endpoint placed in your VPC, it should be faster than pulling from GitLab. I'm not sure whether a custom AMI with the Docker image pre-installed solves this, as that AMI still has to be transferred to your machine, but you could give it a try. The current executors using the
-
c5d.large <= 10 Gbit/s
-
This module is amazing and we've been using it very successfully for over a year now. Thank you!
In addition to standard CMake C++ and similar projects, we also use it for Unreal Engine builds and packaging via GitLab CI. This works great overall, but Unreal Engine images are very big: pulling them takes 9+ minutes for UE4 and 12+ minutes for UE5.
We are looking for ways to reduce the time spent pulling the image to a newly instantiated runner; as Unreal images keep growing, a solution is becoming more and more pressing.
Of course, an always-on runner would be an option, but that is not cost-effective. We already tried ECR, but it is not any faster than pulling from our GitLab Registry (hosted on Backblaze B2 using an S3 driver). We also reuse runners for subsequent builds (although that seems hit or miss).
Since the runner machines are freshly created, I understand the image can't already be available on them. Are there other solutions for cutting that time down? Custom AMIs with pre-installed Docker images, maybe?
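For reference, the custom-AMI idea could be prototyped with Packer: boot a base instance, install Docker, pre-pull the large image so it lands in the AMI's root volume, and register the result as the runner AMI. This is only a hedged sketch; the region, base-AMI filter, registry host, and image tag below are placeholders, not values from this thread.

```hcl
# Hypothetical Packer template: bake a pre-pulled Docker image into an AMI.
# Registry host and image tag are placeholders.
source "amazon-ebs" "runner" {
  region        = "eu-central-1"
  instance_type = "c5d.large"
  ssh_username  = "ec2-user"
  ami_name      = "gitlab-runner-unreal-{{timestamp}}"

  source_ami_filter {
    filters = {
      name                = "al2023-ami-*-x86_64"
      virtualization-type = "hvm"
    }
    owners      = ["amazon"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.runner"]

  provisioner "shell" {
    inline = [
      "sudo dnf install -y docker",
      "sudo systemctl enable --now docker",
      # Pre-pull the large Unreal image so fresh instances start with it cached
      "sudo docker pull registry.example.com/unreal/ue5-build:latest",
    ]
  }
}
```

One caveat worth testing: EBS volumes restored from a snapshot load blocks lazily, so the first reads of the cached layers on a fresh instance can still be slow unless something like EBS Fast Snapshot Restore is enabled for the AMI's snapshot. Whether this nets out faster than a registry pull would need measuring.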