The Toxic Comment Classification Challenge is a Kaggle competition to build a model capable of detecting different types of toxicity, such as threats, obscenity, insults, and identity-based hate, using a dataset of comments from Wikipedia's talk page edits.
Link to the challenge: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge
The code includes all the approaches we tested.
Link to the datasets used for training and testing: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data
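To illustrate the shape of the task (this is a generic baseline sketch, not one of the approaches implemented in this repo), the snippet below trains a TF-IDF + logistic regression model independently for each of the six toxicity labels and reports the competition metric, mean column-wise ROC AUC. It assumes `train.csv` from the competition data, with its `comment_text` column and six label columns.

```python
# Baseline sketch: TF-IDF features + one logistic regression per label.
# Assumes train.csv from the competition data linked above.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

train = pd.read_csv("train.csv")
X_text, y = train["comment_text"], train[LABELS]
X_tr, X_val, y_tr, y_val = train_test_split(X_text, y, test_size=0.1, random_state=42)

# Word-level TF-IDF features shared across all six classifiers.
vec = TfidfVectorizer(max_features=50_000, sublinear_tf=True, stop_words="english")
X_tr_vec, X_val_vec = vec.fit_transform(X_tr), vec.transform(X_val)

# The task is multi-label: each comment can carry several toxicity types,
# so fit an independent binary classifier per label.
aucs = []
for label in LABELS:
    clf = LogisticRegression(C=4.0, max_iter=1000)
    clf.fit(X_tr_vec, y_tr[label])
    probs = clf.predict_proba(X_val_vec)[:, 1]
    aucs.append(roc_auc_score(y_val[label], probs))

# The competition score is the mean of the per-label ROC AUCs.
print(f"Mean ROC AUC: {sum(aucs) / len(aucs):.4f}")
```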
Our team, "Lets have Fun", finished with a final score of 0.9807 (mean column-wise ROC AUC).
Link to the challenge leaderboard: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/leaderboard