- Quick start
- Warnings
- Encoders
- Decoders
- Pretrained
- Pretrained old configs fixes
- Datasets
- Losses
- Metrics
- Optimizers
- Schedulers
- Examples
- Wandb
pip install -U torch --extra-index-url https://download.pytorch.org/whl/cu113
pip install -U opennn_pytorch
cd docker/
docker build -t opennn:latest .
- CUDA is supported only on NVIDIA graphics cards.
- The AlexNet decoder doesn't support the BCE loss family.
- Some dataset/encoder/decoder/loss combinations give poor results; try other combinations.
- The custom cross-entropy loss only supports predictions of shape (n, c) with labels of shape (n).
- Not all options in transform.yaml and config.yaml are required.
- The mean and std values from the datasets section must be used in transform.yaml, for example mean=[0.2859], std=[0.3530] -> normalize: [[0.2859], [0.3530]]; see the sketch below.
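For reference, that normalize entry corresponds to torchvision's transforms.Normalize with the same statistics (a minimal sketch, using the FASHION-MNIST values above):

# Equivalent of `normalize: [[0.2859], [0.3530]]` in transform.yaml,
# expressed with torchvision (FASHION-MNIST statistics).
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.2859], std=[0.3530])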
- LeNet [paper] [code]
- AlexNet [paper] [code]
- GoogleNet [paper] [code]
- ResNet18 [paper] [code]
- ResNet34 [paper] [code]
- ResNet50 [paper] [code]
- ResNet101 [paper] [code]
- ResNet152 [paper] [code]
- MobileNet [paper] [code]
- VGG-11 [paper] [code]
- VGG-16 [paper] [code]
- VGG-19 [paper] [code]
LeNet
Encoder | Decoder | Dataset | Weights | Configs | Logs |
---|---|---|---|---|---|
LeNet | LeNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | LeNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | AlexNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet
Encoder | Decoder | Dataset | Weights | Configs | Logs |
---|---|---|---|---|---|
AlexNet | LeNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | LeNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | AlexNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
GoogleNet
ResNet
Encoder | Decoder | Dataset | Weights | Configs | Logs |
---|---|---|---|---|---|
ResNet18 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet34 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet50 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet101 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet152 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
MobileNet
VGG
The config file format has changed, check the configs folder!
- The config must include a test_part value; the sum (train_part + valid_part + test_part) may be < 1.0.
- You can add a wandb section to the config for logging to Weights & Biases.
- The project has been fully restructured into branches.
Dataset parameters:
- MNIST [classes=10] [mean=[0.1307], std=[0.3081]]
- FASHION-MNIST [classes=10] [mean=[0.2859], std=[0.3530]]
- CIFAR-10 [classes=10] [mean=[0.491, 0.482, 0.446], std=[0.247, 0.243, 0.261]]
- CIFAR-100 [classes=100] [mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]]
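If you need these statistics for a new dataset, or want to verify the values above, a minimal sketch using plain torchvision (not part of opennn_pytorch) is:

# Sketch: recompute per-channel mean/std for MNIST with torchvision.
# Expected output is close to mean=0.1307, std=0.3081.
import torch
from torchvision import datasets, transforms

data = datasets.MNIST(root='data', train=True, download=True,
                      transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(data, batch_size=1000)

total, sq_total, count = 0.0, 0.0, 0
for images, _ in loader:
    total += images.sum().item()
    sq_total += (images ** 2).sum().item()
    count += images.numel()

mean = total / count
std = (sq_total / count - mean ** 2) ** 0.5
print(f'mean={mean:.4f}, std={std:.4f}')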
- Run from yaml config.
from opennn_pytorch import run
config = 'path to yaml config' # check configs folder
run(config)
- Get encoder and decoder.
from opennn_pytorch.encoder import get_encoder
from opennn_pytorch.decoder import get_decoder

encoder_name = 'ResNet18'
decoder_name = 'AlexNet'
decoder_mode = 'Single'
input_channels = 1
number_classes = 10
device = 'cuda:0'

encoder = get_encoder(encoder_name,
                      input_channels).to(device)
model = get_decoder(decoder_name,
                    encoder,
                    number_classes,
                    decoder_mode,
                    device).to(device)
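A quick way to sanity-check the assembled model is a forward pass on a dummy batch (a sketch, assuming 28x28 single-channel inputs as for MNIST):

# Hypothetical smoke test: one dummy MNIST-sized batch through the model.
import torch

x = torch.randn(1, input_channels, 28, 28, device=device)
logits = model(x)
print(logits.shape)  # expected: torch.Size([1, number_classes])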
- Get dataset.
import opennn_pytorch
from opennn_pytorch.dataset import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'MNIST'
datafiles = None
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name,
                                                train_part,
                                                valid_part,
                                                test_part,
                                                transform,
                                                datafiles)
- Get custom dataset.
import opennn_pytorch
from opennn_pytorch.dataset import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'CUSTOM'
images = 'path to folder with images'
annotation = 'path to annotation yaml file with image: class structure'
datafiles = (images, annotation)
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name,
                                                train_part,
                                                valid_part,
                                                test_part,
                                                transform,
                                                datafiles)
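The annotation file maps each image name to its class. A minimal sketch of producing such a file with PyYAML (the filenames are hypothetical; verify the exact schema against the configs folder):

# Hypothetical generator for an annotation yaml with image: class structure.
import yaml

annotation = {'img_0001.png': 0,
              'img_0002.png': 3}
with open('annotation.yaml', 'w') as f:
    yaml.safe_dump(annotation, f)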
- Get optimizer.
from opennn_pytorch.optimizer import get_optimizer

optimizer_name = 'RAdam'
lr = 1e-3
weight_decay = 1e-5
optimizer_params = {'lr': lr,
                    'weight_decay': weight_decay}

optimizer = get_optimizer(optimizer_name, model, optimizer_params)
- Get scheduler.
from opennn_pytorch.scheduler import get_scheduler

scheduler_name = 'PolynomialLRDecay'
scheduler_type = 'custom'
scheduler_params = {'max_decay_steps': 20,
                    'end_learning_rate': 0.0005,
                    'power': 0.9}

scheduler = get_scheduler(scheduler_name,
                          optimizer,
                          scheduler_params,
                          scheduler_type)
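For intuition, polynomial LR decay typically follows the rule sketched below (the usual formula for these parameters, not necessarily the library's exact implementation):

# Assumed polynomial decay rule, for intuition only:
# lr(step) = (base_lr - end_lr) * (1 - step / max_steps) ** power + end_lr
def poly_lr(step, base_lr=1e-3, end_lr=0.0005, max_steps=20, power=0.9):
    if step >= max_steps:
        return end_lr
    return (base_lr - end_lr) * (1 - step / max_steps) ** power + end_lr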
- Get loss function.
from opennn_pytorch.loss import get_loss

loss_name = 'L1Loss'
loss_fn, one_hot = get_loss(loss_name)
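The one_hot flag reports whether the chosen loss expects one-hot targets rather than class indices (our reading of the flag; verify against the library). Encoding integer labels would look like:

# Sketch: one-hot encode integer labels when the loss requires it
# (assumed semantics of the one_hot flag).
import torch
import torch.nn.functional as F

labels = torch.tensor([3, 1, 4])
if one_hot:
    targets = F.one_hot(labels, num_classes=10).float()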
- Get metrics functions.
from opennn_pytorch.metric import get_metric
metrics_names = ['accuracy', 'precision', 'recall', 'f1_score']
number_classes = 10
metrics_fn = get_metric(metrics_names, nc=number_classes)
- Train/Test.
import os
import random

import torch

from opennn_pytorch.algo import train, test, prediction

algorithm = 'train'
batch_size = 16
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_classes = 10
save_every = 5
epochs = 20
wandb_flag = True
wandb_metrics = ['accuracy', 'f1_score']
checkpoints = 'path to checkpoints folder'
logs = 'path to logs folder'
pred = True  # also visualize predictions on random test samples after testing

train_dataloader = torch.utils.data.DataLoader(train_data,
                                               batch_size=batch_size,
                                               shuffle=True)
valid_dataloader = torch.utils.data.DataLoader(valid_data,
                                               batch_size=batch_size,
                                               shuffle=False)
test_dataloader = torch.utils.data.DataLoader(test_data,
                                              batch_size=1,
                                              shuffle=False)
if algorithm == 'train':
    train(train_dataloader,
          valid_dataloader,
          model,
          optimizer,
          scheduler,
          loss_fn,
          metrics_fn,
          epochs,
          checkpoints,
          logs,
          device,
          save_every,
          one_hot,
          number_classes,
          wandb_flag,
          wandb_metrics)
elif algorithm == 'test':
    test_logs = test(test_dataloader,
                     model,
                     loss_fn,
                     metrics_fn,
                     logs,
                     device,
                     one_hot,
                     number_classes,
                     wandb_flag,
                     wandb_metrics)
    if pred:
        # Visualize predictions for 10 random test samples.
        indices = random.sample(range(0, len(test_data)), 10)
        os.mkdir(test_logs + '/prediction', 0o777)
        class_dct = {j: class_names[j] for j in range(number_classes)}
        for i in range(10):
            prediction(test_data,
                       model,
                       device,
                       class_dct,
                       test_logs + f'/prediction/{i}',
                       indices[i])
Wandb is a very powerful logging tool: you can log metrics, hyperparameters, model hooks, and more.
wandb login
<your api token from wandb.ai>
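Independent of opennn_pytorch, basic wandb logging from Python looks like this (a minimal sketch; the project name is hypothetical):

# Minimal wandb usage sketch (hypothetical project name).
import wandb

wandb.init(project='opennn-experiments', config={'lr': 1e-3})
wandb.log({'accuracy': 0.97})
wandb.finish()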
Project citation.
The project is distributed under the MIT License.