As discussed briefly today, it might be useful to add a job analyzer that, for jobs run in a particular mode, records the times at which a job changes status. This would make it possible to derive (at some pre-defined level of granularity) the job wall-time, queue time, transfer time, and serialization time. This could then be extended to a flow too.
This ties in with the idea of small jobs, as it will allow us to profile the cases where the workflow overhead is too large and needs optimization, as well as the cases where the small-job part is not worth optimizing away given the real HPC time required (e.g., making 2000 supercells might be 1000 times slower than running natively, but if you are going to run DFT on those 2000 structures anyway, the overhead will still be a negligible fraction of the total time).