new grompp/mdrun behaviour conflicts with .cpt file? #3
Hi @pgbarletta, my apologies for the late reply. Yesterday we met with the GROMACS developers and confirmed that at the moment there is no way to modify CPT files. Our solution is to chdir into the sandbox and use just file names, not absolute or relative paths. Regards,
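For context, a minimal sketch of the pattern being described here; the helper names and structure are illustrative, not the actual biobb_common code:

```python
import os
import shutil

def run_in_sandbox(sandbox_dir, input_files, command):
    """Illustrative sketch: stage inputs into the sandbox, chdir there,
    and invoke the tool with bare file names so no absolute path ends
    up recorded in the .cpt file."""
    orig_dir = os.getcwd()
    staged = []
    for path in input_files:
        name = os.path.basename(path)
        shutil.copy2(path, os.path.join(sandbox_dir, name))
        staged.append(name)
    try:
        os.chdir(sandbox_dir)
        # command receives only bare names, e.g. ["gmx", "mdrun", "-s", "topol.tpr"]
        command(staged)
    finally:
        os.chdir(orig_dir)
```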
Hi, thanks for the heads up. So, just to be clear: say the scheduler kills the process, the MD is abruptly interrupted, and the files are left in this sandbox dir. How would I go about restarting it? Another issue I have with this sandbox dir is that users may check how the MD is going, but they won't find the log file or the current trajectory file where they expect it to be; instead it's in a cryptic directory, which ends up, if I'm correct, under the directory from which my protocol was called. This means the library will create dirs and files wherever it chooses and I won't have any control over it. I'm sure you have good reasons for this new behaviour, but I'm not sure these have been properly weighed against the cons. Is the decision to change over to this behaviour final?
Hi @pgbarletta, if you check the last two commits to biobb_common you'll find:
I hope the second one, "disabling the sandbox", fulfils your needs. Of course, at the moment these new features are not fully tested. Regards,
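If I read the new property correctly, usage would look something like the following; the `disable_sandbox` key comes from later in this thread, but the module path, function arguments, and file names here are my assumption, not confirmed API:

```python
# Hypothetical usage sketch: pass disable_sandbox through the standard
# biobb properties dict so the step runs in place.
from biobb_gromacs.gromacs.mdrun import mdrun

prop = {
    "disable_sandbox": True,  # run in the target dir instead of a temporary sandbox
}

mdrun(input_tpr_path="gppmin.tpr",
      output_trr_path="min.trr",
      output_gro_path="min.gro",
      output_edr_path="min.edr",
      output_log_path="min.log",
      properties=prop)
```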
Thanks! I'll try it out as soon as I can.
I think this variable should be renamed. Also, I rewrote copy_to_host, but I didn't make a PR since it's a bit scrappy.
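The rewritten function itself was attached as a code suggestion that isn't reproduced here. As a rough idea of what a copy-back helper in this style might do, all names and logic below are illustrative, not the committed biobb_common code:

```python
import os
import shutil

def copy_to_host(sandbox_dir, output_paths):
    """Illustrative sketch only: move each expected output produced in
    the sandbox back to the path the user originally asked for."""
    for dest in output_paths:
        src = os.path.join(sandbox_dir, os.path.basename(dest))
        if os.path.exists(src):
            os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
            shutil.move(src, dest)
```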
Sorry, I don't get it: it's still creating a sandbox dir and running the MD in there instead of running everything in the folder I point to. So now I don't get an error from the cpi, but I do get a new temporary folder each time I restart my protocol, and a new MD is started instead of resuming from the previous step.
Thank you @pgbarletta, all your suggestions are correct and already committed. From my point of view, your rewrite of the copy_to_host function is as good as it can be; I just added a few comments to increase code readability. Pau
It shouldn't create the sandbox folder if the disable_sandbox property is set to True. Please let me check; I'm writing a bunch of new tests. Pau
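In other words, the intended behaviour seems to be a guard along these lines; this is my paraphrase of the logic, not the actual biobb_common source:

```python
import tempfile

def stage_execution_dir(properties, host_dir):
    """Paraphrase of the intended logic: only create a temporary
    sandbox when disable_sandbox is not set."""
    if properties.get("disable_sandbox", False):
        return host_dir  # run in place, keeping .cpt paths stable across restarts
    return tempfile.mkdtemp(dir=host_dir)  # fresh sandbox per execution
```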
Oh, I didn't set that option. I'll try again as soon as I can.
So, over here it gets the current working dir (by the way, this is the dir from which the script is launched, not the directory where the input files are located, which was probably what you intended). As a result, the current version of mdrun wipes away the whole dir from which it was launched. Not the best user experience.
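To spell out the distinction, an illustrative snippet (the path is hypothetical, and this is not the code being linked to):

```python
import os

input_tpr_path = "/data/runs/md1/gppmin.tpr"  # hypothetical input location

# What the code apparently does: resolves to wherever the script was
# launched from, e.g. /home/user, which is then at risk of being wiped.
launch_dir = os.getcwd()

# What was probably intended: the directory holding the input files.
input_dir = os.path.dirname(os.path.abspath(input_tpr_path))
```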
This issue is stale because it has been open for 30 days with no activity. If there is no more activity in the following 14 days it will be automatically closed. |
This issue was closed because it has been inactive for 14 days since being marked as stale. |
This issue will need major changes in biobb_common; opening an issue there pointing here.
This issue is stale because it has been open for 30 days with no activity. If there is no more activity in the following 14 days it will be automatically closed. |
Hi,
I've recently updated to 3.9, and it seems that now everything is run inside a temporary sandbox dir, while previously only the .zip topology file was decompressed inside a temporary folder.
So now, if I want to restart a run from a checkpoint file I get this error:
That is, since the temporary dir (faf79796-f8fe-4f8b-8647-2bd6f6d90226) always changes, I can't properly restart an MD. Am I getting this right or is there something else going on? Because this would be quite an issue for most workflows, I assume.
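For reference, the usual way a GROMACS run is resumed is by pointing mdrun at the previous checkpoint with -cpi, which only works if the companion files stay in a stable location; the file names in this sketch are hypothetical:

```python
import subprocess

# Resume a run from its checkpoint. gmx mdrun's -cpi flag reads the prior
# state, but the checkpoint records the names of the files it should append
# to, so a sandbox dir that changes on every invocation can break the restart.
subprocess.run(
    ["gmx", "mdrun", "-s", "topol.tpr", "-cpi", "state.cpt", "-deffnm", "md"],
    check=True,
)
```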