
IndoNLG

Pull Requests Welcome GitHub license Contributor Covenant

Read this README in Indonesian (Bahasa Indonesia).

⚠️ Update 16/11/2024: We have updated the links to the datasets and fastText models in IndoNLG!

IndoNLG is a collection of Natural Language Generation (NLG) resources for Bahasa Indonesia covering 6 kinds of downstream tasks. We provide the code to reproduce the results, along with large pre-trained models (IndoBART and IndoGPT) trained on a corpus of around 4 billion words (Indo4B-Plus), roughly 25 GB of text data. This project was initially started as a joint collaboration between universities and industry, including Institut Teknologi Bandung, Universitas Multimedia Nusantara, The Hong Kong University of Science and Technology, Universitas Indonesia, DeepMind, Gojek, and Prosa.AI.

Research Paper

IndoNLG was accepted at EMNLP 2021; you can find the details in our paper at https://aclanthology.org/2021.emnlp-main.699. If you use any component of IndoNLG, including Indo4B-Plus, IndoBART, or IndoGPT, in your work, please cite the following paper:

@inproceedings{cahyawijaya-etal-2021-indonlg,
    title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
    author = "Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu and Purwarianti, Ayu and Fung, Pascale",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.699",
    pages = "8875--8898",
}

Example

  • We provide an example of loading the IndoBART model and fine-tuning it on a machine translation task; a minimal loading sketch follows this list.
  • Check our full example at the following Link
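
As a quick illustration, below is a minimal loading-and-generation sketch in Python. It assumes the released checkpoint is available on the Hugging Face Hub as indobenchmark/indobart-v2 and that the Indobenchmark toolkit (pip install indobenchmark-toolkit) provides the IndoNLGTokenizer; refer to the linked example for the actual fine-tuning code.

# Minimal sketch: load IndoBART and generate from an Indonesian input.
# Assumes the indobenchmark-toolkit package and the indobenchmark/indobart-v2
# checkpoint on the Hugging Face Hub; see the linked example for fine-tuning.
from indobenchmark import IndoNLGTokenizer
from transformers import MBartForConditionalGeneration

tokenizer = IndoNLGTokenizer.from_pretrained("indobenchmark/indobart-v2")
model = MBartForConditionalGeneration.from_pretrained("indobenchmark/indobart-v2")

# Encode an Indonesian sentence and let the model generate an output sequence.
inputs = tokenizer("aku ingin makan nasi goreng", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=40, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))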

How to contribute to IndoNLG?

Be sure to check the contributing guidelines and contact the maintainers or open an issue to collect feedback before starting your PR.

IndoNLG Downstream Tasks

Download and unzip the dataset from this [Link]

Indo4B-Plus Dataset

We provide access to our large pre-training dataset.

  • Indo4B-Plus Dataset Upscaled (~25 GB uncompressed, 9.4 GB compressed) [Link]

IndoBART and IndoGPT Models

We provide the IndoBART and IndoGPT pre-trained language models [Link]
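
For reference, here is a hedged sketch of loading IndoGPT for free-form generation. It assumes the checkpoint is hosted on the Hugging Face Hub as indobenchmark/indogpt (GPT-2 architecture) and reuses the IndoNLGTokenizer from the Indobenchmark toolkit.

# Minimal sketch: load IndoGPT (GPT-2 architecture) and sample a continuation.
# Assumes the indobenchmark/indogpt checkpoint on the Hugging Face Hub and the
# IndoNLGTokenizer from the indobenchmark-toolkit package.
from indobenchmark import IndoNLGTokenizer
from transformers import GPT2LMHeadModel

tokenizer = IndoNLGTokenizer.from_pretrained("indobenchmark/indogpt")
model = GPT2LMHeadModel.from_pretrained("indobenchmark/indogpt")

# Sample a short Indonesian continuation from a prompt.
inputs = tokenizer("aku adalah anak", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))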

Indobenchmark Toolkit

We provide a toolkit for using the IndoNLGTokenizer at [Link]
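
As a small usage sketch (assuming the package is installed via pip install indobenchmark-toolkit and the vocabulary is bundled with the indobenchmark/indobart-v2 checkpoint), the tokenizer can also be used on its own to inspect the subword segmentation:

# Minimal sketch: stand-alone use of IndoNLGTokenizer for encoding/decoding.
# Assumes `pip install indobenchmark-toolkit` and the vocabulary files from
# the indobenchmark/indobart-v2 checkpoint on the Hugging Face Hub.
from indobenchmark import IndoNLGTokenizer

tokenizer = IndoNLGTokenizer.from_pretrained("indobenchmark/indobart-v2")

ids = tokenizer("selamat pagi dunia")["input_ids"]
print(ids)                    # subword token ids
print(tokenizer.decode(ids))  # round-trip back to text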
