
Official PyTorch implementation of AAAI-22: Self-supervised Representation Learning Framework for Remote Physiological Measurement Using Spatiotemporal Augmentation Loss (SLF-RPM)


SLF-RPM: Self-supervised Representation Learning Framework for Remote Physiological Measurement Using Spatiotemporal Augmentation Loss

This repository hosts the PyTorch implementation of SLF-RPM.

The paper is accepted by AAAI-22 and is available on arXiv: abs/2107.07695.

(Overview figure)

Highlights

  • Simple and flexible training process: SLF-RPM scales easily to any RPM-related dataset and model, acting as an effective pre-training strategy.

  • RPM-specific data augmentation

    • Landmark-based spatial augmentation: Split and compare different facial parts to effectively capture the colour fluctuations on human skin.

    • Sparsity-based temporal augmentation: Characterise periodic colour variations using the Nyquist–Shannon sampling theorem to exploit rPPG signal features.

  • More stable contrastive learning process: A new loss function uses pseudo-labels derived from our augmentations to regulate the contrastive learning process and handle complicated noise.

  • Collections of benchmarks: Several SOTA supervised and self-supervised studies are evaluated and compared.
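To make the sparsity-based temporal augmentation concrete, here is a minimal sketch (not the authors' code): the same clip is sub-sampled at different temporal strides, producing views with different effective sampling rates, and the stride can serve as a pseudo-label for the loss above. All names and shapes are illustrative.

```python
import numpy as np

def temporal_augment(clip: np.ndarray, stride: int, n_frames: int) -> np.ndarray:
    """Sub-sample a (T, H, W, C) clip every `stride` frames, keeping `n_frames` frames."""
    span = stride * (n_frames - 1)
    assert clip.shape[0] > span, "clip too short for this stride"
    start = np.random.randint(0, clip.shape[0] - span)  # random temporal crop
    return clip[start : start + span + 1 : stride]

clip = np.zeros((120, 64, 64, 3))                        # 120-frame toy clip
view_a = temporal_augment(clip, stride=1, n_frames=30)   # dense view
view_b = temporal_augment(clip, stride=4, n_frames=30)   # sparse view
print(view_a.shape, view_b.shape)                        # both (30, 64, 64, 3)
```

Sparser views alias the periodic skin-colour signal differently, which is what the contrastive objective exploits.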

Dependencies and Installation

Install the required packages with pip:

pip install -r requirements.txt

After preparing the required environment, clone this repository to use SLF-RPM.

Data

Please refer to the official websites for license and terms of usage.

Links to each dataset are provided below:

Usage

To train and test SLF-RPM, you can run:

chmod u+x ./run.sh
bash ./run.sh

Note: make sure you have set up the dataset_dir path correctly.

Identified Issues

  1. If you encounter [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) on your machine, please check this PyTorch issue.
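The warning above is usually harmless. A commonly reported mitigation (an assumption based on community discussion of the linked issue, not an official fix) is to limit intra-op threads before DataLoader workers are forked:

```python
import torch

# Reduce the thread pool before creating DataLoader workers so forked
# workers do not inherit a populated Caffe2/OpenMP thread pool.
torch.set_num_threads(1)
```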

Models and Results

For your convenience, we provide trained model weights (before linear probing) and results on each dataset (after linear probing).

Dataset      Model      MAE     RMSE    SD      R
MAHNOB-HCI   Download   3.60    4.67    4.58    0.92
UBFC-rPPG    Download   8.39    9.70    9.60    0.70
VIPL-HR-V2   Download   12.56   16.59   16.60   0.32
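A hedged sketch of loading a downloaded checkpoint for linear probing. The encoder architecture, checkpoint file name, and state-dict layout below are assumptions for illustration; a save/load round-trip on a placeholder module stands in for the real backbone, and you should inspect the actual checkpoint keys before adapting this.

```python
import torch
import torch.nn as nn

# Placeholder encoder; the real SLF-RPM backbone differs.
encoder = nn.Sequential(nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1), nn.Flatten())

# Round-trip with a local path; substitute the downloaded checkpoint file.
torch.save({"state_dict": encoder.state_dict()}, "checkpoint.pth")
ckpt = torch.load("checkpoint.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)        # some checkpoints wrap the weights
# strict=False tolerates a missing/new linear head trained from scratch.
missing, unexpected = encoder.load_state_dict(state_dict, strict=False)
```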

Citation

If you find this repo useful in your work or research, please cite:

@article{Wang2021SelfSupervisedLF,
  title={Self-supervised Representation Learning Framework for Remote Physiological Measurement Using Spatiotemporal Augmentation Loss},
  author={Hao Wang and Euijoon Ahn and Jinman Kim},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.07695}
}
