
AutoML checkpointing? #610

Answered by sonichi
ZviBaratz asked this question in Q&A
Jun 23, 2022 · 1 comment · 3 replies

Currently, the way to do a warm start is via the starting_points argument. Are you thinking of a normal termination of an AutoML run followed by another AutoML run, or a forced termination followed by another run that recovers from the failure? For the former, warm start + logging works. For the latter, it also works when not using ray; but when using ray, the log is not written on forced termination, and that needs improvement.
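For reference, here is a minimal sketch of the warm-start + logging pattern described above, using FLAML's `AutoML.fit` with its `starting_points` and `log_file_name` arguments; the synthetic dataset, time budgets, and the file name `automl.log` are illustrative, not taken from this discussion.

```python
# Sketch: run AutoML once with logging, then warm-start a second run from
# the best configuration found per estimator in the first run.
from flaml import AutoML
from sklearn.datasets import make_classification

# Illustrative dataset; substitute your own X, y.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# First run: persist the search history to a log file.
automl = AutoML()
automl.fit(
    X, y,
    task="classification",
    time_budget=30,                 # seconds
    log_file_name="automl.log",
)

# Second run: warm-start the search via starting_points.
automl_resumed = AutoML()
automl_resumed.fit(
    X, y,
    task="classification",
    time_budget=30,
    log_file_name="automl.log",
    starting_points=automl.best_config_per_estimator,
)
```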

Answer selected by ZviBaratz