Performance issue in ROMIO collective buffer aggregation for Parallel HDF5 on sunspot #6984
Comments
Talked to @pkcoff offline. He's going to test without unsetting the collective tuning file envvars to see if there's any impact on the performance and report back.
@raffenet advised me to NOT unset the collective tuning json vars; I did so:
On Ken's advice I also unset all of these:
Hey Paul, this is way late, but can you try again with reasonable Lustre striping? (lfs setstripe -c -1 /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991806 or whatever the directory is)
No matter how many processes you run on a node, the default is for just one of them to be an I/O aggregator, so the 15 non-aggregator processes are going to increase their reported time. More stripes increase parallelism, as do more aggregators. So try setting these hints:
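(The specific hint settings did not survive in this copy of the thread. Below is a hedged sketch of what they might look like, using standard ROMIO hints fed in through a ROMIO_HINTS hints file; the values shown (8 aggregators, 8 stripes, 1 MiB stripe size) are assumptions based on the 16 ppn / 8 aggregator numbers discussed later in the thread, not the values from the original comment.)

# Hedged sketch: hint names are standard ROMIO hints; the values are assumptions.
cat > romio_hints.txt <<'EOF'
cb_nodes 8
romio_cb_write enable
striping_factor 8
striping_unit 1048576
EOF
# ROMIO reads this file at MPI_File_open time when ROMIO_HINTS points at it.
export ROMIO_HINTS=$PWD/romio_hints.txt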
@pkcoff another thing we should do is confirm this issue on actual Aurora vs. Sunspot. If the Aurora behavior is different we need to consider removing the …
@raffenet There is similar behavior on Aurora with DAOS; I only opened it on Sunspot and Gila because they were not available when I had time to open the issue. Once daos_user is stabilized I will retest at 16 ppn with 8 aggregators as @roblatham00 suggests and report back.
Using the mpich build 'mpich/20231026/icc-all-pmix-gpu' on Sunspot, with Darshan and VTune for performance analysis, I am seeing what appears to be very bad performance in the messaging layer of the ROMIO collective buffering aggregation. I am using the HDF5 h5bench exerciser benchmark, which uses collective MPI-IO as its backend. This is just on 1 node, so only intra-node communication. Looking at Darshan, for example with 2 ranks I see:
Time is in seconds. The total MPI-IO time is 0.79 sec, and within that the POSIX (Lustre I/O) time is only 0.27 sec to write and then 0.10 sec to read (if doing read-modify-write), so the remaining ~0.42 sec is most likely the messaging layer. With 16 ranks it gets much worse:
So for 16 ranks the share of MPI-IO time spent in the messaging layer is a lot higher. HDF5 is using collective MPI-IO aggregation, so the POSIX section has the times for the actual Lustre filesystem interaction, while the MPIIO section includes all the messaging plus the POSIX time; taking the delta between them roughly gives the messaging time for the aggregation. With VTune I can see that almost all the time for MPI-IO writing (MPI_File_write_at_all) is in OFI. So for 1 node and 16 ranks the question is: out of 37.61 seconds of MPIIO time, only 2.6 seconds are spent writing to Lustre, leaving over 35 seconds doing what I presume is MPI communication for the aggregation. To reproduce on Sunspot running against Lustre (Gila):
Start interactive job on 1 node:
qsub -lwalltime=60:00 -lselect=1 -A Aurora_deployment -q workq -I
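Then run the exerciser under Darshan. (The actual launch command was not captured in this copy of the issue; below is a hedged sketch only. The h5bench_exerciser binary name, its arguments, the run directory, and loading Darshan as a module are assumptions about the Sunspot setup, not the reporter's exact commands.)

# Hedged sketch: module name, run directory, and benchmark arguments are placeholders.
module load darshan
cd /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/<rundir>
mpiexec -n 16 -ppn 16 ./h5bench_exerciser <benchmark arguments>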
Then, to get the Darshan text file, run this:
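(The command itself is missing from this copy; a typical darshan-parser invocation, where the log path is a placeholder for wherever Darshan writes its logs on the system, would be:)

# Hedged sketch: the log path is a placeholder; darshan-parser converts the binary
# Darshan log into the text counters quoted above.
darshan-parser --all /path/to/darshan-logs/<username>_*.darshan > darshan_output.txt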