ResNet-ResNeXt-Model-Builder-Tensorflow-Keras

Models supported: ResNet, ResNetV2, SE-ResNet, ResNeXt, SE-ResNeXt [layers: 18, 34, 50, 101, 152], in 1D and 2D versions, with DEMOs for Classification and Regression.

This repository contains One-Dimensional (1D) and Two-Dimensional (2D) versions of the original ResNet and of ResNeXt (Aggregated Residual Transformations on ResNet), implemented in Tensorflow-Keras. The models in this repository follow the original papers' implementation guidance as closely as possible.

Supported Models:

  1. ResNet18 [1] - ResNeXt18 [2]
  2. ResNet34 [1] - ResNeXt34 [2]
  3. ResNet50 [1] - ResNeXt50 [2]
  4. ResNet101 [1] - ResNeXt101 [2]
  5. ResNet152 [1] - ResNeXt152 [2]

Squeeze-and-Excite (SE) versions of the ResNet and ResNeXt models are also available.
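
For orientation, below is a minimal sketch of a 2D Squeeze-and-Excite block as described in the SE-Net paper; the function name, the reduction ratio of 16, and the use of the Keras functional API are illustrative assumptions, not necessarily how this repository implements it.

```python
from tensorflow.keras import layers

def se_block(inputs, reduction=16):
    # Hypothetical helper: squeeze-and-excite channel recalibration.
    channels = inputs.shape[-1]
    # Squeeze: collapse spatial dimensions into one descriptor per channel.
    se = layers.GlobalAveragePooling2D()(inputs)
    # Excite: a small bottleneck MLP learns per-channel scaling weights.
    se = layers.Dense(channels // reduction, activation='relu')(se)
    se = layers.Dense(channels, activation='sigmoid')(se)
    se = layers.Reshape((1, 1, channels))(se)
    # Recalibrate: scale each feature map by its learned weight.
    return layers.Multiply()([inputs, se])
```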

All the models place BatchNormalization (BN) blocks after the convolutional blocks and before the activation (ReLU), a deviation from the original implementation made to obtain better performance. Read more about BN in the paper [3].
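
As a concrete illustration of that ordering (Conv -> BN -> ReLU), here is a minimal 2D sketch assuming the Keras functional API; the helper name and default arguments are illustrative.

```python
from tensorflow.keras import layers

def conv_bn_relu(inputs, filters, kernel_size=3, strides=1):
    # Convolution without bias or activation; BN normalizes before ReLU fires.
    x = layers.Conv2D(filters, kernel_size, strides=strides,
                      padding='same', use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)
```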

ResNet Architectures

A table from the original paper, showing the architectures of the implemented ResNet models, is reproduced below:

[Figure: ResNet architecture parameters, from the original paper [1]]

The original implementation has a fixed output of 1000 classes, since it was trained and evaluated on the ImageNet dataset. The ResNet models developed here are flexible enough to accept any number of classes, according to the user's requirements.

Note that ResNet18 and ResNet34 use a lighter residual block than the other three, deeper models, as shown in the figure below; the deeper residual block with a bottleneck structure is used by ResNet50, ResNet101 and ResNet152, and is sketched after the figure.

[Figure: basic (ResNet18/34) vs. bottleneck (ResNet50/101/152) residual blocks]
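
Below is a minimal sketch of the bottleneck residual block used by the three deeper models, reusing the conv_bn_relu helper sketched earlier; the function name and signature are illustrative, while the 4x channel expansion and projection shortcut follow the original paper [1].

```python
from tensorflow.keras import layers

def bottleneck_block(inputs, filters, strides=1):
    # 1x1 reduce -> 3x3 -> 1x1 expand: the bottleneck of ResNet50/101/152.
    x = conv_bn_relu(inputs, filters, kernel_size=1, strides=strides)
    x = conv_bn_relu(x, filters, kernel_size=3)
    x = layers.Conv2D(4 * filters, 1, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    # Project the identity shortcut when the shape changes, then add and activate.
    shortcut = inputs
    if strides != 1 or inputs.shape[-1] != 4 * filters:
        shortcut = layers.Conv2D(4 * filters, 1, strides=strides,
                                 padding='same', use_bias=False)(inputs)
        shortcut = layers.BatchNormalization()(shortcut)
    return layers.ReLU()(layers.Add()([x, shortcut]))
```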

ResNeXt Architectures

The architecture of ResNeXt, sometimes referred to as ResNet_v3, is almost the same as that of the original ResNet, except for the residual block shown in the figure below. The aggregated residual block in ResNeXt divides the input tensor into multiple parallel paths, based on the cardinality factor set by the user. In general, the more paths there are, the better the performance and the lighter the network. The image from the paper shows three equivalent structures for the aggregated residual block; in this code, only form (b) has been implemented (so far), as sketched after the figure.

[Figure: three equivalent forms (a), (b), (c) of the aggregated residual block, from the original paper [2]]
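
A minimal sketch of form (b), again assuming the Keras functional API and reusing conv_bn_relu from above: each path applies a 1x1 reduction and a 3x3 convolution, the paths are concatenated, and a single 1x1 convolution fuses them. The function name, signature, and 2x output expansion are illustrative assumptions loosely following the paper's ResNeXt50 template.

```python
from tensorflow.keras import layers

def resnext_block_b(inputs, filters, cardinality=8, strides=1):
    # Form (b): cardinality parallel 1x1 -> 3x3 paths, concatenated,
    # then fused by a single 1x1 convolution.
    path_width = filters // cardinality
    paths = []
    for _ in range(cardinality):
        p = conv_bn_relu(inputs, path_width, kernel_size=1, strides=strides)
        p = conv_bn_relu(p, path_width, kernel_size=3)
        paths.append(p)
    # With cardinality = 1 there is a single path and the block collapses
    # to a plain ResNet-style bottleneck.
    x = layers.Concatenate()(paths) if cardinality > 1 else paths[0]
    x = layers.Conv2D(2 * filters, 1, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    shortcut = inputs
    if strides != 1 or inputs.shape[-1] != 2 * filters:
        shortcut = layers.Conv2D(2 * filters, 1, strides=strides,
                                 padding='same', use_bias=False)(inputs)
        shortcut = layers.BatchNormalization()(shortcut)
    return layers.ReLU()(layers.Add()([x, shortcut]))
```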

The following table compares ResNet50 and ResNeXt50. With Cardinality = 32 (the default in the paper), the ResNeXt model has around 500,000 fewer parameters than its equivalent ResNet counterpart.

[Table: ResNet50 vs. ResNeXt50 parameter comparison, from the original paper [2]]

Supported Features

A key feature of these models is their flexibility. The user has the option of:

  1. Choosing any of the 5 available ResNet or ResNeXt models, for either 1D or 2D tasks.
  2. Varying the number of input kernels/filters, commonly known as the width of the model.
  3. Varying the number of classes for classification tasks, or the number of extracted features for regression tasks.
  4. Varying the number of channels in the input dataset.
  5. Varying the cardinality in the ResNeXt architecture (the model default is 8; the paper default is 32). Note that when Cardinality = 1, ResNeXt reduces to ResNet.

Details of the process are available in the DEMO provided in the codes section; a toy example assembling the sketches above is also shown below. The datasets used in the DEMO are also available in the 'Documents' folder.
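
To make the configurable options concrete, here is a hypothetical 2D classification pipeline assembled from the sketches above; the input shape, stage depths, channel/class counts, and head choice are illustrative, and the repository's actual builder API is documented in the DEMO notebooks.

```python
from tensorflow.keras import Input, Model, layers

num_channels = 3   # channels in the input dataset (configurable)
num_classes = 10   # classification output size (configurable)

inputs = Input(shape=(224, 224, num_channels))
x = conv_bn_relu(inputs, 64, kernel_size=7, strides=2)   # stem; 64 = model width
x = layers.MaxPooling2D(3, strides=2, padding='same')(x)
x = bottleneck_block(x, 64)                              # ResNet-style stage
x = resnext_block_b(x, 128, cardinality=8, strides=2)    # ResNeXt-style stage
x = layers.GlobalAveragePooling2D()(x)
# Classification head; for regression, a Dense(num_features, activation='linear')
# head with an appropriate loss (e.g. MSE) would be used instead.
outputs = layers.Dense(num_classes, activation='softmax')(x)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```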

References

[1] He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv. https://arxiv.org/abs/1512.03385
[2] Xie, S., Girshick, R., Dollár, P., Tu, Z., & He, K. (2016). Aggregated Residual Transformations for Deep Neural Networks. arXiv. https://arxiv.org/abs/1611.05431
[3] Ioffe, S., & Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv. https://arxiv.org/abs/1502.03167v3
