To explore a practical application of ML by predicting poisonous mushrooms, paying attention to the trade-off between accuracy and safety.
We want to keep Catalonian mushroom foragers safe from poisonous mushrooms, and therefore our aim is to completely eliminate Type II errors (false negatives: a poisonous mushroom passed off as edible).
In general, fine-tuning and perfecting an algorithm is about pushing accuracy as close to perfect as possible. This time, however, the emphasis is on error types and the delicate dance between accuracy and safety.
- Are there any ML algorithms that by default err on the side of caution?
- Can we achieve 0 hospital cases by adjusting thresholds and exploring ROC curves?
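To make the error types concrete: treating "poisonous" as the positive class (an assumed convention, used in all the sketches below), a false negative is exactly the potential hospital case we want to eliminate. A minimal illustration with scikit-learn:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels: 1 = poisonous (positive class), 0 = edible -- an assumed encoding.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1])  # one poisonous mushroom was missed

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(f"False positives (edible flagged as poisonous, Type I): {fp}")
print(f"False negatives (poisonous passed as edible, Type II): {fn}")  # the "hospital cases"
```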
- Import mushroom database (see the loading sketch after this list)
- Explore and analyze features
- Experiment with several ML models
- Experiment with thresholds while keeping an eye on accuracy
- Explore ROC curve
- Test our algorithm on data it has never seen before
- Rinse and repeat
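A rough sketch of the first steps (reading, splitting and preprocessing the data); the file name, column names and label encoding are assumptions based on the public UCI mushroom dataset, not necessarily what the notebook uses:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer

# "mushrooms.csv", the "class" column and the "p"/"e" labels are assumptions.
df = pd.read_csv("mushrooms.csv")
X = df.drop(columns=["class"])           # all features are categorical
y = (df["class"] == "p").astype(int)     # 1 = poisonous, 0 = edible

# Hold out a test set the models never see during training or threshold tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# One-hot encode every categorical column; dense output keeps all later models happy.
preprocessor = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore", sparse_output=False), X.columns.tolist())]
)
```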
The Google Colab notebook for trying out the different ML algorithms can be found here, along with a supporting Medium article that outlines my thinking process and practical takeaways in more detail here.
- Data Reading & Cleaning
- Data Splitting
- Building a Preprocessor
- LazyPredict & Modelling
- Error Analysis
- Thresholds and ROC Curve analysis
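Continuing from the sketch above, the LazyPredict screening and the threshold/ROC analysis could look roughly like this; the model choice and the threshold-picking logic are illustrative, not the notebook's exact code:

```python
import numpy as np
from lazypredict.Supervised import LazyClassifier
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, accuracy_score

# Quick screening: LazyPredict fits a battery of classifiers with default settings
# (it applies its own basic encoding/imputation to the raw feature frame).
lazy = LazyClassifier(verbose=0, ignore_warnings=True)
models, _ = lazy.fit(X_train, X_test, y_train, y_test)
print(models.head(10))

# Threshold/ROC analysis on one candidate: instead of the default 0.5 cut-off,
# take the highest threshold at which the true positive rate reaches 1.0,
# i.e. no poisonous mushroom slips through on this test set.
clf = make_pipeline(preprocessor, RandomForestClassifier(random_state=42))
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]              # estimated P(poisonous)
fpr, tpr, thresholds = roc_curve(y_test, proba)
safe_threshold = thresholds[np.argmax(tpr >= 1.0)]   # first index where TPR hits 1
y_pred = (proba >= safe_threshold).astype(int)
print(f"threshold={safe_threshold:.3f}  accuracy={accuracy_score(y_test, y_pred):.3f}")
```

Lowering the decision threshold trades extra false positives (perfectly edible mushrooms thrown away) for the elimination of false negatives on this data, which is exactly the accuracy-versus-safety trade this project is about.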
My aim after running LazyPredict was to experiment with algorithms based on different mathematical models. RandomForest is a decision-tree-based ensemble classifier, Label Propagation is a semi-supervised learning model, LGBM is a gradient boosting method, KNN classifies a point based on the labels of its nearest "neighbors", while SVC searches for the optimal (maximum-margin) hyperplane that divides the data into classes. By exploring methods built on such different mathematical foundations, I was curious whether any one of them would be more or less prone to a certain error type.
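One way to check that curiosity, sketched under the same assumptions and variable names as above, is to fit the five model families with identical preprocessing and compare their error counts on the test set (hyperparameters left at defaults):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import LabelPropagation
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix
from lightgbm import LGBMClassifier

candidates = {
    "RandomForest": RandomForestClassifier(random_state=42),
    "LabelPropagation": LabelPropagation(),
    "LGBM": LGBMClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "SVC": SVC(),
}

# Same preprocessing for every model; count each model's error types on the test set.
for name, model in candidates.items():
    pipe = make_pipeline(preprocessor, model)
    pipe.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, pipe.predict(X_test), labels=[0, 1]).ravel()
    print(f"{name:>16}: false positives={fp:3d}  false negatives={fn:3d}")
```

Whichever model ends up with zero false negatives at an acceptable false positive count is the more "cautious" candidate worth tuning further.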