For context, one of the main clusters I have access to imposes an unfortunate 12-hour limit on each job submission, and another has a 16-hour limit. My log-likelihood (already written in C) takes a few seconds per evaluation, and as I have pushed N_active and N_effective into the 10,000s in search of converged results (even at this scale, pocomc seems considerably more efficient than many other samplers), jobs can now time out before reaching the next iteration and dumping a pickled output file (using save_every = 1).
Would it be possible to add an option to save the sampler state after every N likelihood evaluations, or based on some other more predictable metric?
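As a stopgap until something like this lands upstream, the trigger can be approximated by wrapping the log-likelihood itself, since the wrapper sees every evaluation regardless of how long an iteration takes. The sketch below is hypothetical and not part of the pocomc API: `CountingLikelihood`, `checkpoint_every`, and `on_checkpoint` are names invented for illustration, and the checkpoint callback is where one would pickle whatever state is reachable.

```python
class CountingLikelihood:
    """Wrap a log-likelihood callable and count its evaluations.

    Hypothetical helper illustrating the requested trigger: fire a
    callback every `checkpoint_every` evaluations, independent of how
    many evaluations a single sampler iteration happens to need.
    """

    def __init__(self, log_like, checkpoint_every, on_checkpoint):
        self.log_like = log_like
        self.checkpoint_every = checkpoint_every
        self.on_checkpoint = on_checkpoint  # e.g. pickle state to disk
        self.n_calls = 0

    def __call__(self, x):
        value = self.log_like(x)
        self.n_calls += 1
        if self.n_calls % self.checkpoint_every == 0:
            # Predictable checkpoint point: every N evaluations.
            self.on_checkpoint(self.n_calls)
        return value


# Usage sketch: record a checkpoint event every 3 evaluations.
events = []
wrapped = CountingLikelihood(lambda x: -0.5 * x * x,
                             checkpoint_every=3,
                             on_checkpoint=events.append)
for x in range(7):
    wrapped(float(x))
# events == [3, 6]
```

The caveat is that a callback fired from inside the likelihood cannot safely pickle the sampler mid-iteration, which is why a first-class option in the sampler itself would still be preferable.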