2024.02.29 Meeting Notes
Philipp Grete edited this page Mar 14, 2024
- Individual/group updates
- Higher order methods
- review non-WIP PRs
LR
- making progress on forest of octree approach (WIP PR is open)
- current status:
- old MeshBlockTree is currently still there and kept in parallel to the forest
- this way neighbor connections can easily be checked for debugging
- currently testing more complex (rectangular) grid setups
- thinking about how to handle restart wrt logical locations
- drop-in replacement expected to be ready for testing sometime next week
- replacement should behave identically for rectangular root grids
- future extension should also support more complex (non-rectangular) grids
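
The forest-of-octrees work above tracks blocks by logical location; as a minimal sketch (assuming hypothetical `(level, lx1, lx2, lx3)` tuples, not LR's actual implementation), the parent/child arithmetic on such locations is plain integer halving/doubling:

```python
def parent(loc):
    """Parent of a logical location in an octree: one level coarser,
    integer coordinates halved."""
    level, lx1, lx2, lx3 = loc
    return (level - 1, lx1 // 2, lx2 // 2, lx3 // 2)

def children(loc):
    """The eight refined children: one level finer, coordinates doubled
    plus a 0/1 offset per dimension."""
    level, lx1, lx2, lx3 = loc
    return [(level + 1, 2 * lx1 + i, 2 * lx2 + j, 2 * lx3 + k)
            for k in (0, 1) for j in (0, 1) for i in (0, 1)]
```

In a forest, each tree has its own root, so a full location would additionally carry a tree id; a stable mapping between locations is what makes restart nontrivial.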
JM
- worked on improving compile time. PR is open and already discussed
- also fixing leftover comments for cleanup of boundary callbacks
- thinking/talking (e.g., to SAMRAI developers) about issues related to connectivity (for complex grids)
PM
- encountered hiccup where `gid` was needed in a sparse pack - added this as a small open PR
BP
- watching compile times and testing changes by JM
- issue only comes up with nvhpc (which is the compiler that works on Delta for device-side MPI)
- KHARMA takes 300 min of user time to compile on ARM
BR
- worked on a new test code with cyl/sph coordinates
- will work with FG to upstream
- came across a use case for returning a vector of Reals for histograms
- working on a couple of fixes related to particles (boundary comm for different swarms)
- also swarm packs (with PM)
- swarm bvals (to not require relocatable device code) (also with PM). Will schedule a separate meeting.
FG
- curvilinear changes are going through code release process
- JM also has a fix to change coordinate system headers in downstream codes at compile time
- will push upstream as it'd also be useful for KHARMA
PG
- OpenPMD output
- data model behind OpenPMD standard (variables -> block) does not directly match our current model (block -> variables)
- stems from original design on uniform mesh for OpenPMD
- not an issue with the ADIOS2 backend (which allows for sparsity in the data -- this would also solve the Parthenon sparse variable output issue), but potentially an issue with the HDF5 backend
- but we already have a native HDF5 backend, so it should be fine to just support/recommend openPMD with the ADIOS2 backend (for AMR sims)
- going to implement tracer particles in AthenaPK following the openPMD output
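
The data-model mismatch above (block -> variables vs. variables -> block) amounts to a regrouping step in the writer. A toy sketch with hypothetical names (plain dicts standing in for mesh blocks and openPMD records, not the actual openPMD-api calls):

```python
# Hypothetical block-centric layout (Parthenon-style): each mesh block
# carries its own set of variables plus placement metadata.
blocks = [
    {"gid": 0, "offset": (0, 0), "rho": [[1.0]], "vel": [[0.1]]},
    {"gid": 1, "offset": (0, 8), "rho": [[2.0]], "vel": [[0.2]]},
]

def to_record_centric(blocks):
    """Regroup block->variables into variables->chunks, as an
    openPMD-style writer would: one record per variable name, with
    each block contributing a chunk at its offset."""
    records = {}
    for b in blocks:
        for name, data in b.items():
            if name in ("gid", "offset"):
                continue  # metadata, not a variable
            records.setdefault(name, []).append(
                {"gid": b["gid"], "offset": b["offset"], "data": data}
            )
    return records

records = to_record_centric(blocks)
# records["rho"] now lists one chunk per block
```

With ADIOS2 the per-block chunks can be sparse; a non-sparse HDF5 layout would instead need every block to contribute to every record.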
- fighting with Lumi software stack (that is now rather outdated as it's running the stable HPE/Cray stack)
- BP: (intermittent) issue on startup with MPI could also be related to latest Kokkos version 4.2 (as he also observed something similar on Delta with the latest Cray stack)
- downstream implementation of a 4th order finite volume method (similar to Felker & Stone) shows significant improvement wrt quality of solution and total wall time to solution
- most pieces could eventually go upstream
- also new entries for the Shu-Osher low-storage integrator tables should be upstreamed, as they allow for a net benefit in time to solution (due to CFL > 1) at increased temporal accuracy (compared to standard second order)
- Kernel auto tuning (and associated framework)