epishchik/OpenNN

Open Neural Networks library for image classification.

Table of contents

  1. Quick start
  2. Warnings
  3. Encoders
  4. Decoders
  5. Pretrained
  6. Pretrained config fixes
  7. Datasets
  8. Losses
  9. Metrics
  10. Optimizers
  11. Schedulers
  12. Examples
  13. Wandb

Quick start

1. Direct install.

1.1 Install torch with CUDA support.
pip install -U torch --extra-index-url https://download.pytorch.org/whl/cu113
1.2 Install opennn_pytorch.
pip install -U opennn_pytorch

2. Dockerfile.

cd docker/
docker build -t opennn:latest .
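
To use the image, start a container from it. A minimal sketch (the --gpus flag assumes the NVIDIA container toolkit is installed; adjust to your setup):

docker run --gpus all -it opennn:latest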

Warnings

  1. CUDA is supported only on NVIDIA graphics cards.
  2. The AlexNet decoder doesn't support the BCE loss family.
  3. Some dataset/encoder/decoder/loss combinations give poor results; try other combinations.
  4. The custom cross-entropy loss supports only the mode where predictions have shape (n, c) and labels have shape (n).
  5. Not all options in transform.yaml and config.yaml are required.
  6. The mean and std values from the Datasets section must be used in transform.yaml, for example mean=[0.2859], std=[0.3530] -> normalize: [[0.2859], [0.3530]] (see the fragment after this list).
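
A hypothetical transform.yaml fragment illustrating item 6 for FASHION-MNIST; the normalize key is taken from the example above, and any other keys in the real file are omitted:

# transform.yaml (fragment, hypothetical)
normalize: [[0.2859], [0.3530]]  # [mean, std] for FASHION-MNIST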

Encoders

Decoders

Pretrained

LeNet

| Encoder | Decoder | Dataset | Weights | Configs | Logs |
| ------- | ------- | ------- | ------- | ------- | ---- |
| LeNet | LeNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet | LeNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet | AlexNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| LeNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |

AlexNet

| Encoder | Decoder | Dataset | Weights | Configs | Logs |
| ------- | ------- | ------- | ------- | ------- | ---- |
| AlexNet | LeNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | LeNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | AlexNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| AlexNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |

GoogleNet

| Encoder | Decoder | Dataset | Weights | Configs | Logs |
| ------- | ------- | ------- | ------- | ------- | ---- |
| GoogleNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| GoogleNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |

ResNet

| Encoder | Decoder | Dataset | Weights | Configs | Logs |
| ------- | ------- | ------- | ------- | ------- | ---- |
| ResNet18 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet34 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet50 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet101 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| ResNet152 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |

MobileNet

| Encoder | Decoder | Dataset | Weights | Configs | Logs |
| ------- | ------- | ------- | ------- | ------- | ---- |
| MobileNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |

VGG

| Encoder | Decoder | Dataset | Weights | Configs | Logs |
| ------- | ------- | ------- | ------- | ------- | ---- |
| VGG-11 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| VGG-16 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
| VGG-19 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |

Pretrained config fixes

The config file format has changed, check the configs folder!

  1. The config must include a test_part value; (train_part + valid_part + test_part) may be < 1.0 (see the fragment after this list).
  2. You can add a wandb section to the config for logging to wandb.
  3. The repository was fully restructured into branches.
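
A hypothetical config.yaml fragment showing only the split keys discussed above; the values are the ones used in the Examples section, and the real schema may contain more keys, so check the configs folder:

# config.yaml (fragment, hypothetical)
train_part: 0.7
valid_part: 0.2
test_part: 0.05   # the parts may sum to less than 1.0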

Datasets

Dataset parameters:

  • MNIST [classes=10] [mean=[0.1307], std=[0.3081]]
  • FASHION-MNIST [classes=10] [mean=[0.2859], std=[0.3530]]
  • CIFAR-10 [classes=10] [mean=[0.491, 0.482, 0.446], std=[0.247, 0.243, 0.261]]
  • CIFAR-100 [classes=100] [mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]]
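
As a sanity check, the MNIST statistics above can be recomputed with torchvision; a minimal sketch (the 'data' download path is an assumption):

import torch
from torchvision import datasets, transforms

# Load the MNIST train split as tensors scaled to [0, 1].
ds = datasets.MNIST('data', train=True, download=True,
                    transform=transforms.ToTensor())
imgs = torch.stack([img for img, _ in ds])

# Should print roughly 0.1307 and 0.3081.
print(imgs.mean().item(), imgs.std().item())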

Losses

Metrics

Optimizers

Schedulers

Examples

  1. Run from yaml config.
from opennn_pytorch import run

config = 'path to yaml config'  # check configs folder
run(config)
  2. Get encoder and decoder.
from opennn_pytorch.encoder import get_encoder
from opennn_pytorch.decoder import get_decoder
  
encoder_name = 'ResNet18'
decoder_name = 'AlexNet'
decoder_mode = 'Single'
input_channels = 1
number_classes = 10
device = 'cuda:0'

encoder = get_encoder(encoder_name, 
                      input_channels).to(device)

model = get_decoder(decoder_name, 
                    encoder, 
                    number_classes, 
                    decoder_mode, 
                    device).to(device)
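
A quick way to sanity-check the assembled model is to push a dummy batch through it. A sketch assuming a 28x28 single-channel input (MNIST-sized) and that the decoder returns class scores of shape (batch, number_classes):

import torch

# Dummy batch of one grayscale 28x28 image.
x = torch.randn(1, input_channels, 28, 28, device=device)
print(model(x).shape)  # expected: torch.Size([1, 10])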

3.1 Get dataset.

import opennn_pytorch
from opennn_pytorch.dataset import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'MNIST'
datafiles = None
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name,
                                                train_part, 
                                                valid_part, 
                                                test_part, 
                                                transform, 
                                                datafiles)

3.2 Get custom dataset.

import opennn_pytorch
from opennn_pytorch.dataset import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'CUSTOM'
images = 'path to folder with images'
annotation = 'path to annotation yaml file with image: class structure'
datafiles = (images, annotation)
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name,
                                                train_part, 
                                                valid_part, 
                                                test_part, 
                                                transform, 
                                                datafiles)
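
For the custom dataset, the annotation file maps each image to its class. A hypothetical annotation.yaml following the "image: class" structure described above (file names and class values are made up; whether the class is an index or a name may differ):

# annotation.yaml (hypothetical)
img_0001.png: 0
img_0002.png: 3
img_0003.png: 7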
  4. Get optimizer.
from opennn_pytorch.optimizer import get_optimizer

optimizer_name = 'RAdam'
lr = 1e-3
weight_decay = 1e-5
optimizer_params = {'lr': lr,
                    'weight_decay': weight_decay}

optimizer = get_optimizer(optimizer_name, model, optimizer_params)
  5. Get scheduler.
from opennn_pytorch.scheduler import get_scheduler

scheduler_name = 'PolynomialLRDecay'
scheduler_type = 'custom'
scheduler_params = {'max_decay_steps': 20,
                    'end_learning_rate': 0.0005,
                    'power': 0.9}

scheduler = get_scheduler(scheduler_name,
                          optimizer,
                          scheduler_params,
                          scheduler_type)
  6. Get loss function.
from opennn_pytorch.loss import get_loss

loss_name = 'L1Loss'
loss_fn, one_hot = get_loss(loss_name)
  7. Get metrics functions.
from opennn_pytorch.metric import get_metric

metrics_names = ['accuracy', 'precision', 'recall', 'f1_score']
number_classes = 10
metrics_fn = get_metric(metrics_names, nc=number_classes)
  8. Train/Test.
from opennn_pytorch.algo import train, test, prediction
import os
import random

import torch

algorithm = 'train'
batch_size = 16
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_classes = 10
save_every = 5
epochs = 20
checkpoints = 'path to checkpoints folder'
logs = 'path to logs folder'
pred = True  # save predictions for 10 random test images
wandb_flag = True
wandb_metrics = ['accuracy', 'f1_score']

train_dataloader = torch.utils.data.DataLoader(train_data, 
                                               batch_size=batch_size, 
                                               shuffle=True)

valid_dataloader = torch.utils.data.DataLoader(valid_data, 
                                               batch_size=batch_size, 
                                               shuffle=False)

test_dataloader = torch.utils.data.DataLoader(test_data, 
                                              batch_size=1, 
                                              shuffle=False)

if algorithm == 'train':
  train(train_dataloader, 
        valid_dataloader, 
        model, 
        optimizer, 
        scheduler, 
        loss_fn, 
        metrics_fn, 
        epochs, 
        checkpoints, 
        logs, 
        device, 
        save_every, 
        one_hot, 
        number_classes,
        wandb_flag,
        wandb_metrics)
elif algorithm == 'test':
  test_logs = test(test_dataloader, 
                   model, 
                   loss_fn, 
                   metrics_fn, 
                   logs, 
                   device, 
                   one_hot, 
                   number_classes,
                   wandb_flag,
                   wandb_metrics)
  if pred:
    indices = random.sample(range(0, len(test_data)), 10)
    os.mkdir(test_logs + '/prediction', 0o777)
    # Map class indices to their display names once.
    class_dct = {j: class_names[j] for j in range(number_classes)}
    for i in range(10):
      prediction(test_data,
                 model,
                 device,
                 class_dct,
                 test_logs + f'/prediction/{i}',
                 indices[i])

Wandb

Wandb is a very powerful logging tool: you can log metrics, hyperparameters, model hooks, etc.

wandb login
<your api token from wandb.ai>
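
Alternatively, you can log in from Python; a sketch using the standard wandb API:

import wandb

# Reads the token from the WANDB_API_KEY env var or prompts for it.
wandb.login()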

[Screenshots: wandb Workspace and Table views]

Citation

Project citation.

License

The project is distributed under the MIT License.