`beacon_attestation` messages per slot increased from <3k to ~45k since we turned the `subscribe_all_subnets` flag on.
Closed by #7204. There might be some more optimizations possible by tuning the number of libuv threads, though.
Describe the bug
It takes up to 3 minutes per epoch on holesky for `aggregateWithRandomness` when subscribing to all subnets. Considering 1 epoch = 6.4 minutes, this is almost 50% of main-thread CPU time, which causes performance issues for attached validators. The CPU profile shows 51.6% of time spent in this call:

holesky_subscribe_all_subnets_network.cpuprofile.zip
holesky_subscribe_all_subnets_main.cpuprofile.zip
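The two figures are consistent with each other; a quick sanity check on the arithmetic (an epoch is 32 slots × 12 s = 6.4 minutes):

```javascript
// ~3 minutes of aggregation per 6.4-minute epoch is roughly half of the
// main thread's time, in line with the 51.6% seen in the CPU profile.
const epochMinutes = 6.4; // 32 slots * 12 s
const aggregationMinutes = 3;
const share = aggregationMinutes / epochMinutes;
console.log((share * 100).toFixed(1) + "%"); // prints "46.9%"
```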
Expected behavior
Reduce this time, or consider offloading it to native code behind an async/await call.

cc @wemeetagain @matthewkeil
Steps to reproduce
No response
Additional context
No response
Operating system
Linux
Lodestar version or commit hash
unstable, v1.22.0