# Models Implemented in Fictional Character Battle Outcome Prediction Project

This document provides details on the machine learning models implemented in the "Fictional Character Battle Outcome Prediction" project. A minimal training-and-evaluation sketch follows the model list below.

## Models and Their Accuracies

1. **Random Forest**:
- Accuracy: 75.27%
- A versatile ensemble learning method that operates by constructing multiple decision trees during training and outputs the mode of the classes for classification.

2. **Support Vector Classifier (SVC)**:
- Accuracy: 77.40%
- A supervised learning model that analyzes data for classification by finding the hyperplane that best separates the classes.

3. **Logistic Regression**:
- Accuracy: 76.33%
- A statistical model that in its basic form uses a logistic function to model a binary dependent variable.

4. **Decision Tree**:
- Accuracy: 71.00%
   - A classifier that recursively splits the feature space on feature values, forming a tree of decision rules whose leaves give the class predictions.

5. **K-Nearest Neighbors (KNN)**:
- Accuracy: 73.56%
- A simple, instance-based learning algorithm that assigns a class to a sample based on the majority class among its k-nearest neighbors.

6. **Gradient Boosting**:
- Accuracy: 77.40%
- An ensemble technique that builds models sequentially, each new model correcting errors made by previous models.

7. **AdaBoost**:
- Accuracy: 78.25%
   - A boosting algorithm that fits a sequence of weak learners, increasing the weights of misclassified instances at each round so later learners focus on the harder cases.

8. **CatBoost**:
- Accuracy: 76.12%
- A high-performance library for gradient boosting on decision trees, especially well-suited for categorical data.

9. **Extra Trees**:
- Accuracy: 73.35%
- An ensemble learning method similar to Random Forest but with more randomness in the splitting of nodes.

10. **XGBoost**:
- Accuracy: 72.71%
- An optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable.

11. **Bagging Classifier**:
- Accuracy: 73.99%
   - An ensemble meta-estimator that fits base classifiers, each on a random subset of the original dataset, and then aggregates their predictions.

12. **Stacking Classifier**:
- Accuracy: 75.27%
- An ensemble learning technique that combines multiple base classifiers via a meta-classifier. The base classifiers are trained on the training dataset, and the meta-classifier is trained on the outputs of the base classifiers.
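
For context, the comparison above can be approximated with a short scikit-learn script. The sketch below is illustrative only: it uses a synthetic `make_classification` dataset as a stand-in for the project's character features and default hyperparameters, so the accuracies it prints will not match the figures reported above. XGBoost and CatBoost are omitted because they live in separate packages, but they plug into the same loop.

```python
# Minimal sketch: train the scikit-learn models listed above on a stand-in
# dataset and compare test-set accuracies. The project's real pipeline uses
# its own character dataset; make_classification is used here only so the
# snippet runs on its own.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    BaggingClassifier,
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "SVC": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "Extra Trees": ExtraTreesClassifier(random_state=42),
    "Bagging": BaggingClassifier(random_state=42),
    # Stacking: base classifiers feed a logistic-regression meta-classifier.
    "Stacking": StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(random_state=42)),
            ("svc", SVC()),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
    # XGBClassifier (xgboost) and CatBoostClassifier (catboost) slot into
    # this dict the same way if those packages are installed.
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.4f}")
```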

![results-0](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_0.png?raw=true)
![results-1](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_1.png?raw=true)
![results-2](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_2.png?raw=true)
![results-3](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_3.png?raw=true)
![results-4](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_4.png?raw=true)
![results-5](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_5.png?raw=true)
![results-6](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_6.png?raw=true)
![results-7](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_7.png?raw=true)
![results-8](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_8.png?raw=true)
![results-9](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_9.png?raw=true)
![results-10](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_10.png?raw=true)
![results-11](https://github.com/adi271001/ML-Crate/blob/Fictional-Character-Battle/Fictional%20Character%20Battle%20Outcome%20Prediction/Images/__results___25_11.png?raw=true)

## Conclusion
Among the models evaluated, **AdaBoost** achieved the highest accuracy at 78.25%. Its edge is consistent with how the algorithm works: it adaptively increases the weights of misclassified instances, so each successive weak learner concentrates on the harder examples. AdaBoost’s strong results across precision, recall, and F1 score further support its suitability for this classification problem.
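
For reference, a minimal sketch of how such a per-class report could be produced with scikit-learn is shown below. As in the earlier sketch, the dataset is a synthetic `make_classification` stand-in rather than the project's character data, so the printed numbers are illustrative only and will not match the 78.25% reported above.

```python
# Sketch only: produce the kind of per-class report (precision, recall, F1)
# referenced above for AdaBoost, on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

ada = AdaBoostClassifier(n_estimators=100, random_state=42)
ada.fit(X_train, y_train)
print(classification_report(y_test, ada.predict(X_test)))
```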

## Signature

**Name:** Aditya D
**Github:** [https://www.github.com/adi271001](https://www.github.com/adi271001)
**LinkedIn:** [https://www.linkedin.com/in/aditya-d-23453a179/](https://www.linkedin.com/in/aditya-d-23453a179/)
**Topmate:** [https://topmate.io/aditya_d/](https://topmate.io/aditya_d/)
**Twitter:** [https://x.com/ADITYAD29257528](https://x.com/ADITYAD29257528)
