
Study of the Emerging Properties of Self-Supervised Vision Transformers and Semantic Segmentation

Authors: Apavou Clément & Zucker Arthur

Python PyTorch PyTorch Lightning

Abstract

Self-supervised learning using transformers has shown interesting emerging properties and learns rich embeddings without annotations. Recently, Barlow Twins proposed an elegant self-supervised learning technique using a ResNet-50 backbone, which achieved competitive results when fine-tuned on downstream tasks. In this paper, we propose to study Vision Transformers trained using the Barlow Twins self-supervised method, and compare the results with DINO. We demonstrate the effectiveness of the Barlow Twins method by showing that networks pretrained on the small PASCAL VOC 2012 dataset generalize well while requiring less training and computing power than the DINO method. Finally, we propose to leverage self-supervised vision transformers and their semantically rich attention maps for semantic segmentation tasks.

DINO pretrained attention maps with a patch size of 8 (left) and a patch size of 16 (right). Credits to Apavou & Zucker

Effective receptive fields of 8 layers of the DeepLabV3 architecture. Credits to Apavou & Zucker

Cross-correlation matrix obtained by training a ViT with Barlow Twins, initialized with DINO weights. Refer to the brisk-valley-111 experiment. Credits to Apavou & Zucker

Predictions for a set of 4 images from the validation dataset using 4 different ResNet-50 backbones with a DeepLabV3 head (left), and using a ViT-S/8 backbone with 4 different heads (right). Credits to Apavou & Zucker

Project report

You can find the complete project report in the repository or click here. Our slides are also available here.

Experiments

Our experiments are available on wandb:

Setting up the environment

We exported the required packages to a requirements.txt file that can be used as follows:

pip install -r requirements.txt

Training

Refer to the Barlow Twins Wiki and the Semantic Segmentation Wiki for more details.

Contributions

We implemented the global structure and the Barlow Twins method from scratch in PyTorch Lightning; our visualization of the attention maps is inspired by the official DINO repository. Our trainer module takes care of initializing the lightning module and the datamodule, both of which can be chosen in our configuration file (config/hparams.py). The simple-parsing package extracts and parses the configuration file and allows us to switch between the two tasks: Barlow Twins training and Semantic Segmentation fine-tuning. We used the very practical Weights & Biases (wandb) library to log all of our experiments.
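To give a feel for how this works, here is a minimal sketch of parsing a configuration dataclass with simple-parsing; the field names below are hypothetical, and the actual options live in config/hparams.py.

# Minimal sketch of simple-parsing usage; the HParams fields are
# illustrative, not the repository's actual configuration.
from dataclasses import dataclass

from simple_parsing import ArgumentParser


@dataclass
class HParams:
    """Hypothetical hyper-parameters shared by both tasks."""
    task: str = "barlow_twins"   # or "semantic_segmentation"
    backbone: str = "vit_small"  # encoder architecture
    batch_size: int = 64
    lr: float = 1e-3


parser = ArgumentParser()
parser.add_arguments(HParams, dest="hparams")
args = parser.parse_args()
print(args.hparams)  # e.g. HParams(task='barlow_twins', ...)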

Visualizations

We implemented two very efficient and easy-to-use callbacks to visualize the effective receptive fields and the attention maps at training and validation time. Examples are shown above. Both rely on PyTorch hooks and make training more interpretable. Both were implemented from scratch, and the visualization of the effective receptive fields is based on the theory from Understanding the Effective Receptive Field in Deep Convolutional Neural Networks.
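As a rough illustration of the gradient-based idea behind that theory (not the repository's callback itself), the sketch below backpropagates a unit gradient from the spatial center of the output and reads the effective receptive field off the input-gradient magnitude, using torchvision's DeepLabV3 as a stand-in model.

# Gradient-based effective receptive field, following Luo et al.
import torch
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet50(weights=None)
model.eval()

x = torch.zeros(1, 3, 224, 224, requires_grad=True)
out = model(x)["out"]  # (1, C, H, W) segmentation logits

# Seed a unit gradient at the spatial center of the output:
grad_seed = torch.zeros_like(out)
grad_seed[0, :, out.shape[2] // 2, out.shape[3] // 2] = 1.0
out.backward(grad_seed)

# The effective receptive field is the input-gradient magnitude.
erf = x.grad.abs().sum(dim=1).squeeze()  # (224, 224) heatmap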

We also logged the evolution of the cross-correlation matrix, which is far more interpretable than the value of the loss: as various training runs showed, a decreasing loss can still correspond to a cross-correlation matrix far from the identity. We used a heatmap to represent the empirical cross-correlation matrix, where values close to 1 are red and values close to zero are cyan blue.
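For reference, the quantity being plotted is the empirical cross-correlation matrix between the batch-normalized projector outputs of the two augmented views, as in the standard Barlow Twins formulation; the function names below are illustrative.

# Empirical cross-correlation matrix and Barlow Twins loss (sketch).
import torch

def cross_correlation(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    # Normalize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    n = z_a.shape[0]
    return z_a.T @ z_b / n  # (D, D); the target is the identity matrix

def barlow_twins_loss(c: torch.Tensor, lambda_: float = 5e-3) -> torch.Tensor:
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_ * off_diag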

Acknowledgments

Our implementation relies on PyTorch Lightning, and thus requires its installation. We also use the rich library for nicer progress bars and the very handy wandb library to visualize our experiments.

We used the following implementations for the backbones and heads:

  • DeepLabV3 from torchvision
  • ViT from timm (PyTorch Image Models)
  • ViT from vit-pytorch
  • SETR adapted from setr-pytorch

In model/fix_tim/vision_transformer, the vision transformer returns every token in the forward pass (whereas usually only the [CLS] token is returned). We use this to obtain more features for the semantic segmentation task.
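To show why this is useful, here is a minimal sketch (with hypothetical names and shapes) of how the patch tokens can be reshaped into a 2D feature map that a convolutional segmentation head can consume.

# Turning all ViT tokens into a dense feature map (sketch).
import torch

def tokens_to_feature_map(tokens: torch.Tensor, patch_size: int,
                          img_size: int) -> torch.Tensor:
    """tokens: (B, 1 + N, D) output of a ViT that returns all tokens."""
    B, _, D = tokens.shape
    patch_tokens = tokens[:, 1:, :]  # drop the [CLS] token
    h = w = img_size // patch_size   # e.g. 224 // 8 = 28 for ViT-S/8
    # (B, N, D) -> (B, D, h, w): a 2D feature map for a conv head.
    return patch_tokens.transpose(1, 2).reshape(B, D, h, w)

feats = tokens_to_feature_map(torch.randn(2, 1 + 28 * 28, 384), 8, 224)
print(feats.shape)  # torch.Size([2, 384, 28, 28])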
