# Iterative Error Decimation

arXiv · TensorFlow 2.4.0 · Keras 2.4.3 · Python 3.6 · License: MIT

This repository contains the code for the paper *Iterative Error Decimation for Syndrome-Based Neural Network Decoders*, accepted for publication in the Journal of Communication and Information Systems (JCIS).

In this project, we introduce a new syndrome-based decoder in which a deep neural network (DNN) estimates the error pattern from the reliability and the syndrome of the received vector. The proposed algorithm iteratively selects the most confident positions of the estimated error pattern as error bits, updating the received vector each time a new position is selected.
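
To make the loop concrete, below is a minimal sketch of the decimation idea, assuming a Keras model that maps the concatenated reliability and syndrome to per-bit error probabilities; the interface is illustrative, not the repository's actual API.

```python
import numpy as np

def ied_decode(model, H, y, T):
    """Illustrative IED loop (interfaces assumed, not the repo's API).

    model -- DNN mapping [reliability, syndrome] -> per-bit error probabilities
    H     -- binary parity-check matrix, shape (n - k, n)
    y     -- received real-valued BPSK vector, shape (n,)
    T     -- maximum number of decimation iterations
    """
    y = y.copy()
    for _ in range(T):
        hard = (y < 0).astype(int)              # hard decisions on the channel output
        syndrome = H.dot(hard) % 2
        if not syndrome.any():                  # all checks satisfied: stop early
            break
        # Re-estimate the error pattern from reliability and syndrome.
        x = np.concatenate([np.abs(y), syndrome]).astype(np.float32)[None, :]
        p = model.predict(x, verbose=0)[0]
        j = int(np.argmax(p))                   # most confident error position
        y[j] = -y[j]                            # decimate: flip that position
    return (y < 0).astype(int)                  # decoded hard decisions
```

The actual decoder may commit more than one confident position per iteration; this single-flip variant only illustrates the mechanism.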

If this code or the paper has been useful in your research, please cite our work:

```bibtex
@article{kamassury_ied,
  title={Iterative Error Decimation for Syndrome-Based Neural Network Decoders},
  author={Kamassury, Jorge K S and Silva, Danilo},
  journal={Journal of Communication and Information Systems},
  year={2021}
}
```

## Project overview

For an overview of the project, follow the steps in the `main_code` module, namely (a sketch tying the steps together follows the list):

- Get the parity-check matrix (`H`): `bch_par`
- Build the neural network: `models_nets`
- Train the model: `training_nn`
- Run model inference with the IED decoder: `BER_FER`
- Plot the inference results: `inference`
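
A hypothetical end-to-end run might look as follows; the function names imported from each module are assumptions based on the list above (only `get_training_model` is mentioned elsewhere in this README), so the real signatures may differ.

```python
# Hypothetical pipeline; check the modules for the actual signatures.
from bch_par import bch_par
from models_nets import get_training_model
from training_nn import training_nn
from BER_FER import BER_FER
from inference import inference

H = bch_par(63, 45)                        # parity-check matrix of BCH(63, 45)
model = get_training_model(H)              # syndrome-based DNN decoder
training_nn(model, H, loss='binary_crossentropy', lr=1e-3, batch_size=2048,
            spe=1000, epochs=300, EbN0_dB=4, tec='ReduceLROnPlateau')
results = BER_FER(model, H, T=3)           # IED inference over an SNR range
inference(results)                         # plot the BER/BLER curves
```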

The default configuration (via the `get_training_model` function) trains a model with binary cross-entropy as the loss function. The important training parameters are:

- `training_nn(model, H, loss, lr, batch_size, spe, epochs, EbN0_dB, tec)`, where:
  - `model`: neural network for a short-length BCH code
  - `H`: parity-check matrix
  - `loss`: loss function (by default, binary cross-entropy)
  - `lr`: learning rate
  - `batch_size`: batch size for training
  - `spe`: steps per epoch
  - `epochs`: number of training epochs
  - `EbN0_dB`: ratio of energy per bit to noise power spectral density
  - `tec`: technique for adapting the learning rate (`ReduceLROnPlateau` or `CyclicalLearningRate`; see the sketch after this list)
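
For reference, these are the two standard Keras / TensorFlow-Addons constructs that the `tec` option presumably selects between; the hyperparameters below are illustrative, not the values used in this repository.

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Option 1: shrink the learning rate when the monitored loss plateaus.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='loss', factor=0.5, patience=10, min_lr=1e-6)

# Option 2: cycle the learning rate between two bounds.
cyclical_lr = tfa.optimizers.CyclicalLearningRate(
    initial_learning_rate=1e-4,
    maximal_learning_rate=1e-3,
    step_size=2000,              # half-cycle length in optimizer steps
    scale_fn=lambda x: 1.0)      # triangular policy with constant amplitude
```

Note that `ReduceLROnPlateau` is a training callback, while `CyclicalLearningRate` is a schedule passed to the optimizer, so the two hook into training in different places.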

Important routines can be found in the module `uteis`, especially:

- `training_generator`: simulates the transmission of codewords over the AWGN channel for model training
- `getfer`: computes metrics such as BLER and BER
- `biawgn`: simulates codewords for inference
- `custom_loss`: custom loss function combining binary cross-entropy and a syndrome loss
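
As a rough picture of what such a joint loss can look like, here is a sketch combining binary cross-entropy with a differentiable soft-syndrome penalty; the exact formulation in `uteis.custom_loss` may differ, and the weight `lam` is an assumption.

```python
import tensorflow as tf

def make_custom_loss(H, lam=1.0):
    """Sketch of a BCE + syndrome loss (form and weight are assumptions)."""
    Hf = tf.constant(H, dtype=tf.float32)            # (n - k, n)

    def loss(e_true, e_pred):
        bce = tf.keras.losses.binary_crossentropy(e_true, e_pred)
        # Soft disagreement between prediction and label (soft XOR in [0, 1]).
        d = e_true + e_pred - 2.0 * e_true * e_pred
        soft = 1.0 - 2.0 * d                         # map {0, 1} -> {+1, -1}
        # Soft parity of each check: product over the bits the check involves.
        terms = tf.where(Hf > 0.5,
                         soft[:, tf.newaxis, :],     # (batch, 1, n)
                         tf.ones_like(soft)[:, tf.newaxis, :])
        parity = tf.reduce_prod(terms, axis=-1)      # (batch, n - k)
        syndrome_loss = tf.reduce_mean((1.0 - parity) / 2.0, axis=-1)
        return bce + lam * syndrome_loss

    return loss
```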

## Pretrained models

All pre-trained models are in the folder `models`, where:

- `model_63_45`: trained model for the BCH(63, 45) code;
- `model_relu_63_36`: trained model for the BCH(63, 36) code using the ReLU activation function;
- `model_sigmoid_63_36`: trained model for the BCH(63, 36) code using the sigmoid activation function;
- `model_BN_sigmoid_63_36`: trained model for the BCH(63, 36) code using the sigmoid activation function and batch normalization layers.

## Model inference

To evaluate the BER and BLER metrics, use the module `ber_fer_result`, where (a sweep sketch follows this list):

- `max_nfe`: maximum number of block errors to accumulate at each SNR point
- `T`: number of IED iterations
- `p_initial`: initial `EbN0_dB` value for inference
- `p_end`: final `EbN0_dB` value for inference
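
Below is a Monte-Carlo sweep in the spirit of `ber_fer_result`, reusing the `ied_decode` sketch from above; everything here (the all-zero-codeword shortcut, the stopping rule, the step size) is an assumption about the script, not a copy of it.

```python
import numpy as np

def sweep(model, H, n=63, k=45, T=3, max_nfe=100,
          p_initial=1.0, p_end=7.0, step=1.0):
    """Illustrative BER/BLER sweep (assumed behaviour of ber_fer_result)."""
    rate = k / n
    for ebn0 in np.arange(p_initial, p_end + step / 2, step):
        # Noise std for BPSK over the bi-AWGN channel at this Eb/N0 and rate.
        sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (ebn0 / 10.0)))
        nfe = nbe = nblocks = 0
        while nfe < max_nfe:                      # stop after max_nfe block errors
            y = 1.0 + sigma * np.random.randn(n)  # all-zero codeword -> all +1
            c_hat = ied_decode(model, H, y, T)
            nbe += int(c_hat.sum())               # bit errors vs. all-zero word
            nfe += int(c_hat.any())               # block error indicator
            nblocks += 1
        print(f"Eb/N0 = {ebn0:.1f} dB | BER = {nbe / (nblocks * n):.3e} "
              f"| BLER = {nfe / nblocks:.3e}")
```

Transmitting only the all-zero codeword is the usual shortcut in such simulations, valid for a linear code over a symmetric channel.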

If you just want to load a pre-trained model, run inference, and plot the results, use the script `load_infer_plot`.
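
Loading a saved Keras model for this is a one-liner; the path below is hypothetical, and if the model was compiled with `custom_loss`, passing `compile=False` avoids having to re-register it:

```python
import tensorflow as tf

# Path is hypothetical; adjust to the folder layout described above.
model = tf.keras.models.load_model('models/model_63_45', compile=False)
```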


## Results

The performance of the BCH codes under the IED decoder is reported in the folder `results`, namely:

- BLER and BER for the BCH(63, 45) code, respectively (see the figures in `results`);
- BLER and BER for the BCH(63, 36) code, respectively (see the figures in `results`).