---
layout: default
title: Pipeline steps
nav_order: 3
has_children: true
---
The pipeline steps are based on the train-student recipe. They can be represented as a Directed Acyclic Graph (DAG) (SVG).
This diagram is slightly outdated, since it was generated with Snakemake, which we no longer maintain, but it still illustrates the dependencies between steps.
**Installation.** Installs dependencies and compiles Marian and other tools.
**Data downloading.** Downloads the datasets and samples an appropriate number of sentences from them. The time depends on dataset size; sampling huge monolingual datasets (100M+ sentences) is the most intensive operation.
It uses the datasets.train configuration section.
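The sampling itself is conceptually a single pass that keeps a fixed-size random subset of the sentences. A minimal sketch of that idea using reservoir sampling (the file name and sample size are made up for the example; this is not the pipeline's actual downloader):

```python
import gzip
import random

def sample_lines(path: str, max_lines: int, seed: int = 1) -> list[str]:
    """Keep a uniform random sample of at most `max_lines` lines
    from a (possibly huge) gzip-compressed text file in one pass."""
    rng = random.Random(seed)
    reservoir: list[str] = []
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i < max_lines:
                reservoir.append(line.rstrip("\n"))
            else:
                # Replace an existing element with decreasing probability.
                j = rng.randint(0, i)
                if j < max_lines:
                    reservoir[j] = line.rstrip("\n")
    return reservoir

if __name__ == "__main__":
    # Hypothetical file name; real datasets come from the importers.
    sample = sample_lines("news.2020.ru.gz", max_lines=1_000_000)
    print(len(sample))
```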
**Analyze data.** Runs data analysis on the downloaded datasets and outputs charts, for example the distribution of sentence lengths in a dataset.
**Data cleaning.** Basic preprocessing: dataset-specific, language-specific, rule-based and other attempts to clean noisy data in parallel and monolingual datasets. It parallelizes well across CPU cores. To make cleaning of a new language more efficient, add it to clean_parallel.py.
Alternatively, use OpusCleaner.
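The rules are simple heuristics applied to each sentence or sentence pair. The snippet below only illustrates the kind of rule-based filters involved (length limits, length ratio, copied sentences); it is not the actual clean_parallel.py logic:

```python
def keep_pair(src: str, trg: str,
              max_len: int = 200, max_ratio: float = 3.0) -> bool:
    """Illustrative rule-based filter for one sentence pair."""
    src, trg = src.strip(), trg.strip()
    if not src or not trg:
        return False                      # empty side
    s_len, t_len = len(src.split()), len(trg.split())
    if s_len > max_len or t_len > max_len:
        return False                      # overly long sentences
    if max(s_len, t_len) / min(s_len, t_len) > max_ratio:
        return False                      # suspicious length ratio
    if src.lower() == trg.lower():
        return False                      # untranslated (copied) sentence
    return True

pairs = [("Hello world", "Привет, мир"), ("foo", "foo")]
clean = [(s, t) for s, t in pairs if keep_pair(s, t)]
print(clean)  # the copied pair is dropped
```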
**Bicleaner.** Filters noisy sentence pairs in the parallel corpus using the bicleaner-ai classifier. Cleaning thresholds are configurable per dataset, see [Dataset cleaning](#dataset-cleaning).
See more details on the Bicleaner page.
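Conceptually, bicleaner-ai assigns every sentence pair a score between 0 and 1, and pairs below the dataset's threshold are dropped. A minimal sketch of that filtering step, assuming the scores have already been produced by the classifier (the threshold and scores are example values):

```python
def filter_by_score(pairs, scores, threshold=0.5):
    """Keep sentence pairs whose classifier score reaches the threshold.

    `pairs`  - list of (src, trg) tuples
    `scores` - bicleaner-ai scores in [0, 1], one per pair
    """
    return [pair for pair, score in zip(pairs, scores) if score >= threshold]

pairs = [("How are you?", "Как дела?"), ("Buy now!!!", "Слушай музыку")]
scores = [0.92, 0.11]  # example scores, not real classifier output
print(filter_by_score(pairs, scores, threshold=0.5))
```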
**Merge and dedupe.** Merges the clean datasets into one and applies deduplication.
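Deduplication amounts to keeping only the first occurrence of every distinct sentence pair. A tiny sketch of the idea, hashing pairs to keep memory bounded (not the pipeline's actual implementation):

```python
import hashlib

def dedupe(pairs):
    """Yield each (src, trg) pair once, keeping the first occurrence."""
    seen = set()
    for src, trg in pairs:
        key = hashlib.sha1(f"{src}\t{trg}".encode("utf-8")).digest()
        if key not in seen:
            seen.add(key)
            yield src, trg

merged = [("a", "b"), ("a", "b"), ("c", "d")]
print(list(dedupe(merged)))  # [('a', 'b'), ('c', 'd')]
```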
**Training vocabulary.** Trains a SentencePiece vocabulary/tokenizer model on the parallel corpus.
See more details on choosing the vocabulary size here.
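For reference, training such a vocabulary with the SentencePiece Python API looks roughly like this; the file names and the vocabulary size are placeholders rather than the pipeline's defaults:

```python
import sentencepiece as spm

# Train a single shared unigram vocabulary on both sides of the parallel corpus.
# File names and vocab_size are illustrative placeholders.
spm.SentencePieceTrainer.train(
    input=["corpus.src.txt", "corpus.trg.txt"],
    model_prefix="vocab",          # writes vocab.model and vocab.vocab
    vocab_size=32000,
    character_coverage=1.0,
    model_type="unigram",
)

# Load the trained model and tokenize a sentence.
sp = spm.SentencePieceProcessor(model_file="vocab.model")
print(sp.encode("Hello world", out_type=str))
```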
**Training backward model.** Trains a shallow sequence-to-sequence RNN model in the opposite (target-to-source) direction. It is useful for back-translations and cross-entropy filtering. It is based on a Marian example.
**Augmentation with back-translations.** Translates the monolingual corpus (combined from the monolingual datasets in the target language) using the backward model.
It is most useful for low-resource languages but is still recommended for high-resource ones as well.
**Training alignments.** Trains whitespace-tokenized alignments accepted by OpusTrainer using the eflomal library.
Alignments are trained separately for the original parallel, back-translated and student corpora. The back-translated and student steps use eflomal priors extracted from the alignments trained on the original parallel corpus, which can improve accuracy for smaller corpora as well as performance.
It works with uncompressed datasets, so it can be heavy on disk.
**Training teacher.** Trains several big transformer models on the augmented dataset. They are later used for decoding as an ensemble of models.
**Translation by teacher.** Translates the corpus and the monolingual data in the source language (configurable in the datasets.mono-src configuration section) using the ensemble of teacher models.
This is the heaviest part of the pipeline, but it is highly parallelizable.
**Cross-entropy filtering.** Scores the translated corpus with the backward model and removes the part of the corpus with the lowest scores to reduce noise.
At this point the datasets are huge, so this step can be very disk intensive.
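The idea is to rank the translated pairs by the backward model's per-sentence score and drop the worst fraction. A schematic version, assuming the cross-entropy scores have already been computed (the retention ratio and scores are example values only):

```python
def filter_by_ce(pairs, ce_scores, keep_ratio=0.95):
    """Drop the noisiest translations.

    `pairs`     - list of (src, translated_trg) tuples
    `ce_scores` - per-sentence cross-entropy from the backward model
                  (lower is better)
    Keeps the best `keep_ratio` fraction of the corpus.
    """
    ranked = sorted(zip(pairs, ce_scores), key=lambda x: x[1])
    cutoff = int(len(ranked) * keep_ratio)
    return [pair for pair, _ in ranked[:cutoff]]

pairs = [("hi", "привет"), ("gibberish", "asdf"), ("yes", "да")]
scores = [1.2, 9.7, 0.8]  # made-up cross-entropy values
print(filter_by_ce(pairs, scores, keep_ratio=0.67))
```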
**Training shortlist.** Trains SentencePiece-tokenized alignments using eflomal, similarly to the alignments step, and then extracts a lexical shortlist using the extract_lex tool.
Some tools require uncompressed datasets on disk, and the datasets are huge at this point. It parallelizes well across CPU cores.
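A lexical shortlist maps each source token to the small set of target tokens it most often aligns to, so that decoding can restrict the output vocabulary. The toy sketch below shows the idea of building one from aligned token pairs; it is not the extract_lex tool:

```python
from collections import Counter, defaultdict

def build_shortlist(aligned_token_pairs, top_k=3):
    """Map each source token to its `top_k` most frequently aligned target tokens."""
    counts = defaultdict(Counter)
    for src_tok, trg_tok in aligned_token_pairs:
        counts[src_tok][trg_tok] += 1
    return {src: [t for t, _ in c.most_common(top_k)] for src, c in counts.items()}

# Token pairs as they would come out of the word alignments (toy data).
pairs = [("house", "дом"), ("house", "дом"), ("house", "дома"), ("cat", "кот")]
print(build_shortlist(pairs))  # {'house': ['дом', 'дома'], 'cat': ['кот']}
```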
**Training student.** Trains a small transformer student model on the filtered data using the alignments. OpusTrainer remaps the alignments to SentencePiece tokenization. See more details on the OpusTrainer page.
**Fine-tuning student.** Fine-tunes the student model by emulating 8-bit GEMM during training. It converges very quickly and then degrades.
**Quantization.** Applies 8-bit quantization to the fine-tuned student model. Marian CPU threads must be set to 1 for this step.
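8-bit quantization stores each weight matrix as int8 values plus a scale factor, which is what enables fast integer GEMM on CPU. A generic illustration of the idea (not Marian's actual intgemm implementation):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print(np.abs(w - dequantize(q, scale)).max())  # small quantization error
```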
**Evaluation.** Calculates metrics (BLEU, chrF) for all models using SacreBLEU. It runs Marian decoding on GPU for all models except the quantized ones, which run on CPU.
It uses the datasets.test configuration section.
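As a reference point, computing the same metrics with the SacreBLEU Python API looks like this; the hypotheses and references below are toy data standing in for Marian's decoded output and the test sets:

```python
import sacrebleu

# Toy hypotheses and references; in the pipeline these come from Marian
# decoding of the datasets.test sets.
hypotheses = ["The cat sat on the mat.", "Hello there!"]
references = [["The cat sat on the mat.", "Hi there!"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.1f}")
```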
**Export.** Exports the trained model and the shortlist to the [bergamot-translator](https://github.com/mozilla/bergamot-translator) format.
Step | Bottleneck |
---|---|
Installation | CPU |
Data downloading | Network, Disk |
Analyze data | CPU, Disk |
Data cleaning | CPU |
Bicleaner | GPU |
Merge and dedupe | CPU, Disk |
Training vocabulary | CPU |
Training backward model | GPU |
Augmentation with back-translations | GPU |
Training alignments | CPU, Disk |
Training teacher | GPU |
Translation by teacher | GPU |
Cross-entropy filtering | GPU, CPU, Disk |
Training shortlist | CPU, Disk |
Training student | GPU |
Fine-tuning student | GPU |
Quantization | CPU |
Evaluation | GPU |
Export | |