
rsfMRI-VAE

This repository is the official PyTorch implementation of 'Representation Learning of Resting State fMRI with Variational Autoencoder'.

Environments

This code was developed and tested with:

Python 2.7.17
PyTorch 1.2.0

Training

To train the model described in the paper, run this command:

python fMRIVAE_Train.py --data-path path-to-your-data
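For example (the path below is only a placeholder; point --data-path at wherever your own prepared training data lives, since the expected layout depends on the script's data loader):

    python fMRIVAE_Train.py --data-path ./data/my_training_data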

Evaluation

If you want to get the latent variables from the trained model, change the path inside Example_Encoder.py and run:

python Example_Encoder.py

If you want to reconstruct images from the latent variables, change the path inside Example_Decoder.py and run:

python Example_Decoder.py
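Before running either script, it can help to confirm that the trained checkpoint loads. Below is a minimal sketch, assuming the pretrained checkpoint downloaded for the demo at demo/checkpoint/checkpoint.pth.tar; the dictionary keys it prints are whatever the repository stored when saving, so nothing about them is guaranteed here:

    import torch

    # Load the pretrained checkpoint on CPU; map_location avoids requiring a GPU.
    checkpoint = torch.load('demo/checkpoint/checkpoint.pth.tar', map_location='cpu')

    # Checkpoints saved as dictionaries often carry keys such as 'state_dict'
    # or 'epoch', but the exact keys depend on how this repository saved them.
    if isinstance(checkpoint, dict):
        for key in checkpoint:
            print(key)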

Demo

The demo directory contains the whole pipeline, from preprocessing the fMRI data to obtaining latent variables from the VAE. A brief illustration of the pipeline is shown in the figure below.

Figure: illustration of the whole demo pipeline.

The file in the data folder is a CIFTI file that can be passed to preprocess.m, which outputs a CIFTI file containing the preprocessed data. Either the original or the preprocessed file can then be passed to geometric_reformatting.m, which writes a MAT file called fMRI.mat and an HDF5 file called demo_data.h5 into the data folder.

The data loader in /demo/lib/utils.py feeds the reformatted data into the VAE when VAE_inference.py is run. This script uses the pretrained model to generate latent variables, which are saved as MAT files in /demo/result/demolatent.

VAE_inference.py also uses the latent variables to generate reconstructed images, saved as MAT files in /demo/result/recon. Finally, backward_reformatting.m converts the reconstructions back into a CIFTI file: /demo/data/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean_reconstruction.dtseries.nii.
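As an illustration, the saved outputs can be inspected from Python. This is a minimal sketch; the individual file names inside /demo/result/demolatent and the field names stored in the MAT files are assumptions, not documented behavior of VAE_inference.py:

    import glob
    from scipy.io import loadmat

    # List the latent-variable MAT files written by VAE_inference.py
    # (directory taken from the description above; file names are unknown here).
    latent_files = sorted(glob.glob('demo/result/demolatent/*.mat'))

    for path in latent_files[:3]:
        # loadmat assumes the files are not saved in MATLAB v7.3 (HDF5) format.
        contents = loadmat(path)
        # Print non-metadata fields and their shapes; the actual field names
        # depend on how VAE_inference.py saves the latent variables.
        for key, value in contents.items():
            if not key.startswith('__'):
                print(path, key, getattr(value, 'shape', None))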

Usage

Steps for running the VAE data preparation code:

  1. Clone the GitHub repository onto your local computer.

    • git clone https://github.com/libilab/rsfMRI-VAE.git
  2. Download the sample input data and trained model weights from here, and place the downloaded files into the matching directories under demo.

    • Download /checkpoint/checkpoint.pth.tar into /demo/checkpoint/.
    • Download /data/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii into /demo/data/.
  3. Run preprocess.m

  4. Run geometric_reformatting.m

  5. Run VAE_inference.py

  6. Run backward_reformatting.m (a command-line sketch of steps 3-6 is shown below)
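Assuming MATLAB (R2019a or newer, for the -batch flag) and Python are both on your PATH, and that the demo scripts sit directly in the demo directory, steps 3-6 could be launched from a terminal roughly as follows; this is a sketch, not a script shipped with the repository:

    cd rsfMRI-VAE/demo
    matlab -batch "preprocess"
    matlab -batch "geometric_reformatting"
    python VAE_inference.py
    matlab -batch "backward_reformatting"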

Output Layouts

The following intermediate MAT files are saved while running the demo code:

  • demo/data/fMRI.mat

    • A matrix that holds the normalized fMRI data for each time point.
    • Size: (number of voxels in the visual cortex) x (number of time points).
    • This file includes only voxels in valid regions (any NaN values are removed).
  • /result/MSE_Mask.mat

    • This file includes two fields: Regular_Grid_Right_Mask and Regular_Grid_Left_Mask.
    • These two fields hold im_size x im_size 2D masks indicating whether each voxel is valid or contains a NaN value (meaning that the data point must be excluded).
  • /result/Left_fMRI2Grid_192_by_192_NN.mat

    • This file includes two fields: grid_mapping_L, inverse_transformation_L
  • /result/Right_fMRI2Grid_192_by_192_NN.mat

    • This file includes two fields: grid_mapping_R, inverse_transformation_R
    • For each hemisphere (L/R), the grid mapping has size (im_size x im_size) x num_voxels (excluding NaN voxels) and is multiplied by the voxel data at each time point to map it onto the 2D grid. The inverse transformation (for each of L/R) maps data on the 2D grid back into voxel space (see the sketch after this list).
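To make the mapping concrete, here is a minimal NumPy sketch of how the left-hemisphere matrices could be applied to a single time point. The random voxel vector is a stand-in for real data, the assumed shape of inverse_transformation_L is inferred from the description above, and loading with scipy.io.loadmat assumes the MAT files are not saved in MATLAB v7.3 format:

    import numpy as np
    from scipy.io import loadmat

    im_size = 192  # grid resolution used by the demo reformatting

    # Load the left-hemisphere mapping matrices.
    maps = loadmat('demo/result/Left_fMRI2Grid_192_by_192_NN.mat')
    grid_mapping_L = maps['grid_mapping_L']                      # (im_size*im_size) x num_voxels
    inverse_transformation_L = maps['inverse_transformation_L']  # assumed: num_voxels x (im_size*im_size)

    # Stand-in voxel data for a single time point: one value per valid voxel.
    voxel_t = np.random.randn(grid_mapping_L.shape[1])

    # Map the voxel vector onto the regular 2D grid ...
    grid_t = (grid_mapping_L @ voxel_t).reshape(im_size, im_size)

    # ... and map the grid back into voxel space with the inverse transformation.
    voxel_back = inverse_transformation_L @ grid_t.reshape(-1)
    print(grid_t.shape, voxel_back.shape)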

License

Copyright 2021 Laboratory of Integrated Brain Imaging at the University of Michigan.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

The software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.

Reference

Kim, Jung-Hoon, et al. "Representation Learning of Resting State fMRI with Variational Autoencoder." NeuroImage (2021). https://doi.org/10.1016/j.neuroimage.2021.118423
