# TACC Open Hackathon 2024
Some notes for organizing our efforts.

!!! We need at least three people present every day.
- Tues Oct 8, 10 AM – 11:30 AM (online)
  - Meet with mentor
- Tues Oct 15, 9 AM – 5 PM (online)
  - Cluster intro
  - Introductory team presentations
  - Work with mentor
- Tues Oct 22 – Thurs Oct 24, 9 AM – 5 PM (hybrid)
  - Work on code with mentor
## Ideas
- Use smaller fixed-size communication buffers that are greedily filled and sent repeatedly until all data is exchanged (see the sketch after this list)
- Use contiguous buffers large enough to accommodate all fields (not respecting sparsity)
- Others?
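As a concrete illustration of the first idea, here is a minimal sketch of a fixed-size staging buffer that is greedily filled and re-sent until the full payload is exchanged. The names `CHUNK_DOUBLES` and `SendInChunks` are hypothetical, not Parthenon API; the real buffer machinery lives in `boundary_communication.cpp`.

```cpp
#include <mpi.h>

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical fixed staging-buffer size (in doubles), chosen for illustration.
constexpr std::size_t CHUNK_DOUBLES = 1 << 20;

// Send an arbitrarily large flat array through a small reusable buffer.
void SendInChunks(const std::vector<double> &all_data, int dest, int tag,
                  MPI_Comm comm) {
  std::vector<double> staging(CHUNK_DOUBLES);
  std::size_t offset = 0;
  while (offset < all_data.size()) {
    // Greedily fill the staging buffer with as much remaining data as fits...
    const std::size_t n = std::min(CHUNK_DOUBLES, all_data.size() - offset);
    std::copy(all_data.begin() + offset, all_data.begin() + offset + n,
              staging.begin());
    // ...then send it; repeat until all data has been exchanged.
    MPI_Send(staging.data(), static_cast<int>(n), MPI_DOUBLE, dest, tag, comm);
    offset += n;
  }
}
```

The receiving side would loop over matching `MPI_Recv` calls until the full payload has arrived; pre-posting the next receive while unpacking the previous chunk is the natural way to hide latency.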
- Example problem: parthenon_vibe, advection, fine_advection
- Modify the example to vary the number of separately enrolled fields at runtime (a field-enrollment sketch follows the performance numbers below). Sample input:
```
<parthenon/job>
problem_id = advection
<parthenon/mesh>
refinement = none
nx1 = 256
x1min = -0.5
x1max = 0.5
ix1_bc = periodic
ox1_bc = periodic
nx2 = 256
x2min = -0.5
x2max = 0.5
ix2_bc = periodic
ox2_bc = periodic
nx3 = 256
x3min = -0.5
x3max = 0.5
ix3_bc = periodic
ox3_bc = periodic
<parthenon/meshblock>
nx1 = 128
nx2 = 128
nx3 = 128
<parthenon/time>
nlim = 25
tlim = 1.0
integrator = rk2
ncycle_out_mesh = -10000
<Advection>
cfl = 0.45
vx = 1.0
vy = 1.0
vz = 1.0
profile = hard_sphere
refine_tol = 0.3 # control the package specific refinement tagging function
derefine_tol = 0.03
compute_error = false
num_vars = 1 # number of variables
vec_size = 10 # size of each variable
fill_derived = false # whether to fill one-copy test vars
```
Sample performance on a single GH200 (the input above was run with meshblock sizes of 64, 128, and 256); note that the boundary-communication kernels take a growing fraction of runtime as block size increases:
```
nb64.out:|-> 6.62e-02 sec 3.6% 100.0% 0.0% ------ 51 boundary_communication.cpp::96::SendBoundBufs [for]
nb128.out:|-> 1.44e-01 sec 11.0% 100.0% 0.0% ------ 51 boundary_communication.cpp::96::SendBoundBufs [for]
nb256.out:|-> 5.45e-01 sec 25.9% 100.0% 0.0% ------ 51 boundary_communication.cpp::96::SendBoundBufs [for]
nb64.out:|-> 8.81e-02 sec 4.8% 100.0% 0.0% ------ 51 boundary_communication.cpp::274::SetBounds [for]
nb128.out:|-> 1.69e-01 sec 12.9% 100.0% 0.0% ------ 51 boundary_communication.cpp::274::SetBounds [for]
nb256.out:|-> 6.44e-01 sec 30.6% 100.0% 0.0% ------ 51 boundary_communication.cpp::274::SetBounds [for]
```
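Here is a hedged sketch of what "vary the number of separately enrolled fields at runtime" could look like, loosely modeled on `example/advection/advection_package.cpp`. The package name and field names are assumptions, and the exact `Metadata` flags may differ from what the example actually uses.

```cpp
#include <memory>
#include <string>
#include <vector>

#include <parthenon/package.hpp>

using namespace parthenon::package::prelude;

// Enroll num_vars separate fields, each of shape vec_size, from the
// <Advection> input block shown above (names here are illustrative).
std::shared_ptr<StateDescriptor> Initialize(ParameterInput *pin) {
  auto pkg = std::make_shared<StateDescriptor>("advection_package");
  const int num_vars = pin->GetOrAddInteger("Advection", "num_vars", 1);
  const int vec_size = pin->GetOrAddInteger("Advection", "vec_size", 1);
  // One Metadata object shared by all enrolled fields.
  Metadata m({Metadata::Cell, Metadata::Independent, Metadata::FillGhost},
             std::vector<int>{vec_size});
  for (int i = 0; i < num_vars; ++i) {
    // Enrolling each field separately makes the number of communication
    // buffers scale with num_vars, which is the behavior under study.
    pkg->AddField("advected_" + std::to_string(i), m);
  }
  return pkg;
}
```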
- Example problem: particles-example
- This would be a heavy lift to fully implement

## Secondary goal interests

- Particle scaling
- Multigrid parallel performance
- Improve buffer kernel performance for few (large) blocks
- Particles
- NCCL
- Single-meshblock bottlenecks
- Interface for downstreams to add CUDA async copies? (see the sketch after this list)
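For the async-copy interface item, a minimal sketch (compiled with nvcc) of the kind of hook a downstream might want: staging host data to the GPU on a user-provided stream so the copy can overlap other work. `AsyncStage` is an illustrative name, not an existing Parthenon interface, and Parthenon itself manages device data through Kokkos.

```cpp
#include <cuda_runtime.h>

#include <cstddef>

// Enqueue a host-to-device copy on `stream`; the call returns immediately
// and the copy can overlap with kernels launched on other streams.
void AsyncStage(const double *host_pinned, double *device, std::size_t n,
                cudaStream_t stream) {
  // Note: the host buffer must be pinned (cudaMallocHost/cudaHostRegister)
  // for the transfer to be truly asynchronous.
  cudaMemcpyAsync(device, host_pinned, n * sizeof(double),
                  cudaMemcpyHostToDevice, stream);
}
```

True overlap requires pinned host memory and a non-default stream; whether that fits cleanly with Parthenon's Kokkos-managed buffers is exactly the open question behind this item.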
## Vista

User guide: https://docs.tacc.utexas.edu/hpc/vista/

To log in:
```
ssh [tacc username]@vista.tacc.utexas.edu
[enter your TACC password]
[enter your TACC 2FA pin]
```
To get to your scratch space (the purge policy should be ignorable by us for this hackathon):
```
cd $SCRATCH
```
To get Parthenon (the submodule update must run inside the cloned repository):
```
git clone https://github.com/parthenon-hpc-lab/parthenon.git
cd parthenon
git submodule update --init --recursive
```
To set up Python for your user account:
```
module load phdf5
pip install numpy h5py
```
To load the environment:
```
module load nvidia/24.9
module load openmpi/5.0.5_nvc249
module load phdf5
```
Two-hour interactive job on a Grace Hopper node:
```
idev -p gh -N 1 -n 1 -m 120
```
Configure the code:
```
export NVCC_WRAPPER_DEFAULT_COMPILER=mpicxx
cmake -DKokkos_ENABLE_CUDA=ON \
      -DPARTHENON_DISABLE_HDF5_COMPRESSION=ON \
      -DKokkos_ARCH_HOPPER90=On \
      -DCMAKE_CXX_COMPILER=/path/to/source/parthenon/external/Kokkos/bin/nvcc_wrapper \
      -DCMAKE_C_COMPILER=mpicc \
      /path/to/source/parthenon
```
Run the code with UCX workarounds:
```
export UCX_CUDA_COPY_DMABUF=n
export UCX_TLS=^gdr_copy
ibrun /path/to/build/example/advection/advection-example -i /path/to/source/parthenon/example/advection/parthinput.advection
```