Purpose
closes #683
Notes
- `gpu_dyamond_target_nodiags` shortrun uses 0.25GB before atmos init, 15.83GB after atmos init, then exceeds the 15.895GB limit during bucket init (on P100) - see build
- `gpu_aquaplanet_dyamond` run uses ~10GB (on P100) - see build
- `gpu_dyamond_target` longrun without atmos diagnostics uses 0.43GB before atmos init, 62GB after atmos init - see details below (on H100) - see build
- `gpu_dyamond_target` longrun with atmos or coupler diagnostics uses 0.43GB before atmos init, 62GB after atmos init, 66GB right before the coupler loop, and 67GB right after the coupler loop (on H100) - see build

**I would expect 2 and 5 to show the same results, but the atmos run from the coupler allocates 15GB while from ClimaAtmos it allocates 10GB. Maybe I should run 5 on H100 to see how high the allocations reach in that case (but clima is offline this morning).**
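For reference, per-stage memory checkpoints like the ones above can be collected with CUDA.jl. This is only a minimal sketch of how such numbers might be gathered; the `report_gpu_memory` helper, its call sites, and the commented-out init call are hypothetical and not part of this PR:

```julia
using CUDA

# Hypothetical helper: log device memory usage at a named stage,
# mirroring the "before atmos init" / "after atmos init" checkpoints above.
# Uses whole-device free/total memory, so it includes any other processes on the GPU.
function report_gpu_memory(stage::AbstractString)
    free_GB = CUDA.available_memory() / 1e9   # bytes -> GB currently free on the device
    total_GB = CUDA.total_memory() / 1e9      # bytes -> GB physically on the device
    @info "GPU memory [$stage]" used_GB = round(total_GB - free_GB; digits = 2) total_GB = round(total_GB; digits = 2)
end

report_gpu_memory("before atmos init")
# atmos_sim = atmos_init(...)               # placeholder for the actual atmos init call
report_gpu_memory("after atmos init")
```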
Atmos allocations when running `gpu_dyamond_target` without diagnostics (same for the last two setups mentioned above):