
Note

We are looking for volunteers to contribute rude words to the dataset.

Yinetal Model

This repository contains a dataset of rude Burmese words and a model that censors them, helping to provide a safe environment, particularly for children.

Purpose

The purpose of this project is to automatically detect and censor offensive Burmese words in text data. This helps create safer online environments by filtering out inappropriate content, which is crucial for platforms catering to children and sensitive audiences.

Features

  • Dataset: Includes a dataset of Burmese words labeled as rude or non-rude.
  • Model: Utilizes a logistic regression model trained on this dataset to predict whether a word is rude based on its context.
  • Functionality: Provides methods to load data, train the model, evaluate its performance, and censor text by replacing rude words with asterisks.

Usage

To use the Yinetal Model:

  1. Clone the repository:

    git clone https://github.com/happer64bit/yinetal-dataset-model.git
    cd yinetal-dataset-model
  2. Set up your environment:

    • Ensure Python and necessary dependencies are installed (pandas, scikit-learn, etc.).
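    • A minimal install, assuming pandas and scikit-learn are the only requirements (check the repository for a requirements file):
      pip install pandas scikit-learn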
  3. Load and preprocess data:

    • Use the provided dataset (words.txt) and make sure the file is in the location the loading code expects.
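    • To confirm the file is present and readable before training, a quick preview (this assumes words.txt sits in the working directory; the file's internal format is not specified here):
      # Print the first few lines of the dataset file
      with open("words.txt", encoding="utf-8") as f:
          for _ in range(5):
              print(f.readline().rstrip())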
  4. Train the model:

    • Use the train() method to train the logistic regression model on the dataset. Example:
      model = RudeWord()  # model wrapper provided by this repository
      df = model.load_data()  # load the labeled word dataset
      # train() fits the model and returns it along with the vectorizer and a held-out test split
      trained_model, vectorizer, X_test_vec, y_test = model.train(df)
  5. Evaluate model performance:

    • Use the evaluate() method to assess the accuracy and other metrics of the trained model. Example:
      # evaluate() returns the overall accuracy and a per-class classification report
      accuracy, report = model.evaluate(trained_model, X_test_vec, y_test)
      print(f"Accuracy: {accuracy:.2f}")
      print("Classification Report:")
      print(report)
  6. Detect rude words:

    • Use the predict_rude_word() method to check whether input text contains offensive Burmese words (a censoring sketch follows this list). Example:
      while True:
          text = input("> Enter some text (type 'exit' to quit): ")
          if text.lower() == 'exit':
              break

          # predict_rude_word() returns a truthy value when rude words are found
          rude_word_detected = model.predict_rude_word(text)
          if rude_word_detected:
              print("Rude words detected in the input text.")
          else:
              print("No rude words detected in the input text.")

Contributing

Contributions to improve the dataset, model performance, or documentation are welcome. Please fork the repository, make your changes, and submit a pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
