What version of gazelle are you using?
In WORKSPACE we had:
http_archive(
    name = "bazel_gazelle",
    # To get the new checksum run `shasum -a 256` on the downloaded file.
    sha256 = "32938bda16e6700063035479063d9d24c60eda8d79fd4739563f50d331cb3209",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.35.0/bazel-gazelle-v0.35.0.tar.gz",
        "https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.35.0/bazel-gazelle-v0.35.0.tar.gz",
    ],
)
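(The archive was wired up in WORKSPACE in the usual way; this is a sketch, and the Go toolchain version below is illustrative rather than taken from our setup:)
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")

go_rules_dependencies()

go_register_toolchains(version = "1.21.5")  # illustrative version

gazelle_dependencies()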
In MODULE.bazel we now have:
bazel_dep(name = "gazelle", version = "0.35.0", repo_name = "bazel_gazelle")
What version of rules_go are you using?
In MODULE.bazel we now have:
bazel_dep(name = "rules_go", version = "0.46.0", repo_name = "io_bazel_rules_go")
What version of Bazel are you using?
> bazel version
Bazelisk version: development
Build label: 7.1.1
Build target: @@//src/main/java/com/google/devtools/build/lib/bazel:BazelServer
Build time: Thu Mar 21 18:08:59 2024 (1711044539)
Build timestamp: 1711044539
Build timestamp as int: 1711044539
With .bazeliskrc: USE_BAZEL_VERSION=7.1.1
Does this issue reproduce with the latest releases of all the above?
We are already on the current latest releases of bazel, gazelle, and rules_go.
What operating system and processor architecture are you using?
macOS 14.4.1 with an Apple M1 Pro CPU, and a container based on Ubuntu 20.04 LTS for amd64.
What did you do?
bazel run //:gazelle-update-files
Set common --enable_bzlmod=true in .bazelrc when using the MODULE.bazel file.
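(For reference, a gazelle runner target like our gazelle-update-files is typically declared along these lines in the root BUILD file; this is a sketch, and the gazelle:prefix value is illustrative:)
load("@bazel_gazelle//:def.bzl", "gazelle")

# gazelle:prefix github.com/example/monorepo
gazelle(name = "gazelle")

gazelle(
    name = "gazelle-update-files",
    command = "update",
)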
What did you expect to see?
Running gazelle-update-files on macOS takes between 30s and 90s; running it on the same host through our build container (via CPU emulation) takes 4-5m.
With WORKSPACE the process takes 1-2m under Kubernetes (EKS with amd64 EC2 instances, using the same Ubuntu-based container). We expected a similar time when using MODULE.bazel instead of WORKSPACE.
What did you see instead?
When running under EKS with MODULE.bazel, the process takes 1h.
Something we noticed is that disk (page) cache usage is very high in the container. We allocated 5GB to the container; the Java process (the Bazel server) consumed about 750MB, the kernel a few hundred MB, and the disk cache 3.8GB. When we allocated 7GB to the container, the disk cache grew to use that memory as well and never released it. CPU utilization was near zero.
This is in a private monorepo with a sizable go.mod and hundreds of .proto files. Among the third-party Go modules we have Azure, Docker, the AWS SDK, ...
Something really odd is that in the slow case the CPU load is near zero (according to top), yet the disk cache is hoarding a lot of memory (according to cgroup v2 data).
We also tried giving the container 16GB: disk cache usage peaked at 6.22GB and RSS at 1.15GB. With the CPU near zero and RAM not starved, I have no idea why things seem stuck.
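(For reference, the per-container figures above come from the cgroup v2 memory files; roughly, assuming the unified hierarchy is mounted at /sys/fs/cgroup inside the container:)
cat /sys/fs/cgroup/memory.current   # total charged memory, page cache included
grep -E '^(anon|file) ' /sys/fs/cgroup/memory.stat
# "anon" is the RSS-like figure, "file" is the disk (page) cache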
Sorry that I can't share a reproducible case.