Note: We are looking for volunteers to help write rude words for the dataset.
This repository contains datasets of rude words in Burmese and a model for censoring these words to provide a safe environment, particularly for children.
The purpose of this project is to automatically detect and censor offensive Burmese words in text data. This helps create safer online environments by filtering out inappropriate content, which is crucial for platforms catering to children and sensitive audiences.
- Dataset: Burmese words labeled as rude or non-rude.
- Model: a logistic regression classifier trained on this dataset to predict whether a word is rude based on its context.
- Functionality: methods to load data, train the model, evaluate its performance, and censor text by replacing rude words with asterisks (a sketch of how these pieces fit together follows this list).
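For orientation, here is a minimal, self-contained sketch of that pipeline. It is illustrative only: the placeholder data, the character n-gram features, and the train/test split are assumptions made for this README, not the repository's actual `RudeWord` implementation.

```python
# Minimal sketch of the rude-word classification pipeline.
# Assumptions (not taken from the repository): a two-column dataset of
# (word, label) pairs and character n-gram features, which suit Burmese
# text since it is written without spaces between words.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data; the real dataset lives in words.txt.
df = pd.DataFrame({
    "word":  ["friendly1", "rude1", "friendly2", "rude2"],
    "label": [0, 1, 0, 1],  # 1 = rude, 0 = non-rude
})

vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 3))
X = vectorizer.fit_transform(df["word"])
X_train, X_test, y_train, y_test = train_test_split(
    X, df["label"], test_size=0.25, random_state=42
)

clf = LogisticRegression()
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```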
To use the Yinetal Model:
- Clone the repository:

  ```bash
  git clone https://github.com/happer64bit/yinetal-dataset-model.git
  cd yinetal-dataset-model
  ```
- Set up your environment:
  - Ensure Python and the necessary dependencies (`pandas`, `scikit-learn`, etc.) are installed.
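  If the repository does not ship a pinned `requirements.txt` (an assumption; prefer that file if it exists), installing the two named libraries directly should be enough:

  ```bash
  pip install pandas scikit-learn
  ```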
- Load and preprocess data:
  - Use the provided dataset (`words.txt`) and make sure the file is placed where the loading code expects it.
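  As a quick sanity check, you can peek at the file directly. The separator and column names below are assumptions about the file format, not the repository's documented schema:

  ```python
  import pandas as pd

  # Assumed layout: one word/label pair per line, tab-separated;
  # adjust sep and names to match the real words.txt.
  df = pd.read_csv("words.txt", sep="\t", names=["word", "label"])
  print(df.head())
  ```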
- Train the model:
  - Use the `train()` method to train the logistic regression model on your dataset. Example:

    ```python
    model = RudeWord()
    df = model.load_data()
    trained_model, vectorizer, X_test_vec, y_test = model.train(df)
    ```
- Evaluate model performance:
  - Use the `evaluate()` method to assess the accuracy and other metrics of the trained model. Example:

    ```python
    accuracy, report = model.evaluate(trained_model, X_test_vec, y_test)
    print(f"Accuracy: {accuracy:.2f}")
    print("Classification Report:")
    print(report)
    ```
- Censor text:
  - Use the `predict_rude_word()` method to detect offensive Burmese words so they can be censored with asterisks (`*`). Example (a replacement helper is sketched after this list):

    ```python
    while True:
        text = input("> Enter some text (type 'exit' to quit): ")
        if text.lower() == 'exit':
            break
        rude_word_detected = model.predict_rude_word(text)
        if rude_word_detected:
            print("Rude words detected in the input text.")
        else:
            print("No rude words detected in the input text.")
    ```
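The interactive loop above only reports whether rude words are present. If you want actual asterisk replacement, a helper along these lines could work. Note that `censor_text()` is hypothetical and not part of the repository's API, and the whitespace split is a simplification, since Burmese text is not whitespace-delimited:

```python
# Hypothetical helper, not part of the repository's API.
# Assumes model.predict_rude_word() can be called on a single token
# and returns a truthy value for rude words.
def censor_text(model, text):
    censored = []
    for token in text.split():  # naive split; real Burmese needs word segmentation
        if model.predict_rude_word(token):
            censored.append("*" * len(token))  # mask the rude token
        else:
            censored.append(token)
    return " ".join(censored)

print(censor_text(model, "some input text"))
```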
Contributions to improve the dataset, model performance, or documentation are welcome. Please fork the repository, make your changes, and submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.