Is there a way to save the preprocessing objects for inference? (OneHotEncoder, Scaler) #76
Hi, thank you for developing this package! I want to be able to load an already saved model and then use it for inference, as in production. How can I have the inference dataset go through the same preprocessing steps, e.g. OneHotEncoding of categorical variables and scaling?

Comments
Hi @kkristacia, to preprocess the inference dataset, I would recommend running the same preprocessing steps again, fitting them on the training data before transforming the new data. Please let me know if you run into any other issues and I can help you solve it! :)
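A minimal sketch of that workaround, assuming scikit-learn-style preprocessing objects; the file names and columns ("train.csv", "inference.csv", "color", "size") are illustrative, not part of the toolkit:

```python
# Sketch of the workaround above: refit the preprocessing objects on the
# training data inside the inference environment, then transform new rows.
# Assumes scikit-learn; file and column names are illustrative.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, StandardScaler

train = pd.read_csv("train.csv")          # training data must ship with the model
encoder = OneHotEncoder(handle_unknown="ignore").fit(train[["color"]])
scaler = StandardScaler().fit(train[["size"]])

inference = pd.read_csv("inference.csv")
X_cat = encoder.transform(inference[["color"]])   # identical category mapping
X_num = scaler.transform(inference[["size"]])     # identical mean and scale
```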
Hi Akash, thanks for the clarification. Yeah, I was hoping for some way to not use the training data during inference. It would definitely be great if future versions could have this functionality!
Hi Akash. Just to second this: it would be great if the preprocessing objects were saved for making inferences in production. Loading my whole dataset into my production environment would take up space unnecessarily. Love the toolkit, and looking forward to seeing an update in the future!
Thanks @dsunart! I'm reopening this issue as a feature request. It should be added as part of our next release!
Hey @kkristacia and @dsunart, happy to note that this is now part of the toolkit. You can see this in action in this example.
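For reference, the general save-and-reload pattern behind such a feature looks like the sketch below, in plain scikit-learn and joblib terms. This illustrates the technique only; it is not necessarily the toolkit's actual API, and the names and columns are made up.

```python
# Sketch: persist the fitted preprocessing objects next to the model so that
# inference does not need the training data. Plain scikit-learn + joblib;
# names and columns are illustrative.
import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

train = pd.DataFrame({"color": ["red", "blue", "red"],
                      "size": [1.0, 2.5, 3.0]})

preprocessor = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), ["color"]),
    ("scale", StandardScaler(), ["size"]),
])
preprocessor.fit(train)                           # fit on training data once
joblib.dump(preprocessor, "preprocessor.joblib")  # save alongside the model

# Later, in production: load and apply without the training data.
preprocessor = joblib.load("preprocessor.joblib")
new_rows = pd.DataFrame({"color": ["blue"], "size": [2.0]})
X_new = preprocessor.transform(new_rows)          # same encoding and scaling
```

joblib is the usual choice for persisting scikit-learn objects, but the standard pickle module works the same way.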