
A cross-lingual toxicity detection model that works for over 100 languages. Powered by the mighty XLM-R model, its performance is state of the art.


Cross Lingual Zero Shot Transfer

Cross-lingual NLP: using vector space alignment to obtain cross-lingual performance is the zero-shot cross-lingual transfer mechanism. Here we use it to train on an English dataset and get effective performance in 100 or more languages. Check out this clickable 🤗 for the languages it supports.

Repository admins (this repository is mainly the work of the following, but open-source contributors are welcome):

Jayveersinh Raj

Makar Shevchenko

Nikolay Pavlenko

Our pledge

We attempt to make the internet a better, safer, and healthier place by creating this. Moreover, our work also detects threatening language as toxic, and hence could be used by LLMs as well as robots to refrain from taking toxic or threatening actions, making them safer to use and interactions healthier.

Brief Description

This is a project for abuse reporting trained on the Jigsaw (Google) toxic-comments dataset of 150k+ English comments. The project aims to accomplish zero-shot transfer for abuse detection in an arbitrary language while being trained only on an English dataset. It attempts to achieve this by using the vector space alignment that is the core idea behind embedding models like XLM-RoBERTa, MUSE, etc. Different embeddings were tested on the dataset to find the best-performing embedder. Our project/model can be used by any platform or software engineer/enthusiast who has to deal with multiple languages, either to directly flag toxic behaviour or to identify a valid user report of toxic behaviour. The use case is application specific, but the idea is to make the model work with an arbitrary language while training on data from a single language.
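
A minimal fine-tuning sketch of this idea, using the Hugging Face Trainer on a binary-labelled English split. The file names and hyperparameters here are illustrative assumptions, not necessarily the exact recipe used for this model:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Illustrative file names: any English corpus with a "comment_text" column and a binary "label" works.
data = load_dataset("csv", data_files={"train": "train_binary.csv", "validation": "valid_binary.csv"})

# XLM-R provides the aligned multilingual vector space; the classification head is trained on English only.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["comment_text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="polyguard-xlmr", num_train_epochs=2,
                         per_device_train_batch_size=16, learning_rate=2e-5)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=data["train"], eval_dataset=data["validation"])
trainer.train()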

The architectural diagram


NOTE: The classifier architecture can have arbitrary parameters or hidden states; the diagram above is a general idea. Diagram credits: Samuel Leonardo Gracio

Similar work

Dailymotion (credits: Samuel Leonardo Gracio)

Dataset Description and link

jigsaw-toxic-comment-classification

We merged all the classes into one, since they all belong to a single super class of toxicity. Our hypothesis is that this flags severely toxic behaviour, severe enough to ban or block a user.
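
A minimal preprocessing sketch of that merge, assuming the standard column names from the Jigsaw train.csv:

import pandas as pd

# The six Jigsaw label columns all describe some form of toxicity,
# so they are collapsed into a single binary label.
label_cols = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

df = pd.read_csv("train.csv")
df["label"] = (df[label_cols].sum(axis=1) > 0).astype(int)
df = df[["comment_text", "label"]]
df.to_csv("train_binary.csv", index=False)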

Tech stack

PyTorch, Python, Jupyter Notebook, NumPy, pandas, Matplotlib, seaborn, scikit-learn

Key Results

  • Non-toxic sentences
    • 100% accuracy: our model generalizes well.
  • Toxic sentences
    • 75% accuracy: the misclassified sentences were not severely toxic, and their labels were subjective to the human annotator.
    • Proof of claim: GPT-4 translated them and they were not severe, yet it refused to generate toxic sentences itself.
  • Conclusion: after generating the translations, GPT-4 said they were toxic, which is self-contradictory; hence, our model is better at detecting abuse/toxicity/severity.

Importing from Hugging Face

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Jayveersinh-Raj/PolyGuard")
model = AutoModelForSequenceClassification.from_pretrained("Jayveersinh-Raj/PolyGuard")

Example use case

from transformers import XLMRobertaForSequenceClassification, AutoTokenizer
import torch

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub
model_name = "Jayveersinh-Raj/PolyGuard"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)

# Tokenize the input text and run it through the classifier
text = "Jayveer is a great NLP engineer, and a noob in CV"
inputs = tokenizer.encode(text, return_tensors="pt", max_length=512, truncation=True)
outputs = model(inputs)[0]

# Class 1 = toxic, class 0 = not toxic
probabilities = torch.softmax(outputs, dim=1)
predicted_class = torch.argmax(probabilities).item()
if predicted_class == 1:
  print("Toxic")
else:
  print("Not toxic")

NOTE

Make sure sentencepiece is already installed, and restart the runtime after installation:

pip install sentencepiece

Application of our model, and contribution

  • An example of a code of conduct that it helps to regulate using advanced NLP techniques: Code of conduct
  • On a higher level, our model can also be used as a decision-making last layer for LLMs, deciding whether a response is offensive, obscene, toxic, or threatening (see the sketch after this list).
  • It can also act as an ethical decision maker for robots that use LLMs for thought generation. Since our model also detects threatening language as toxic, blocking flagged output as a harmful act provides a safety factor, helping to make AI, models, and robots safer and more ethical.
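
A hedged sketch of that last-layer idea: a hypothetical wrapper that screens an LLM's draft response with the classifier before it is returned or acted upon. The generate_response callable and the is_toxic helper are illustrative stand-ins, not part of this repository; the tokenizer and model are the ones loaded in the example above.

def is_toxic(text, threshold=0.5):
    # Score a candidate response with the classifier loaded in the example above.
    inputs = tokenizer.encode(text, return_tensors="pt", max_length=512, truncation=True)
    probabilities = torch.softmax(model(inputs).logits, dim=1)
    return probabilities[0, 1].item() >= threshold

def guarded_reply(prompt, generate_response):
    # generate_response is a hypothetical stand-in for any LLM (or robot planner) call.
    draft = generate_response(prompt)
    if is_toxic(draft):
        return "Response withheld: flagged as toxic/threatening."  # refuse instead of acting on it
    return draft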