This repo includes the visual food localizer, along with the code and links to the data required to replicate the results of the paper "Selectivity for food in human ventral visual cortex".
The visual food localizer is built on the fLoc localizer published in Stigliani et al., 2015. To run this localizer:
- Download and install the fLoc localizer following the instructions provided. Make sure the localizer is running as expected.
- Copy the food image directory `food4thought/localizer/food` from this repository into the `fLoc/stimuli` directory of the fLoc repository, i.e., there needs to be a new directory named `food` alongside the other directories (`adult`, `body`, `car`, etc.).
- Add the line

  ```matlab
  stim_set3 = {'food', 'body', 'word', 'adult', 'house'};
  ```

  after line 24 of `fLoc/functions/fLocSequence.m`. Change the new line 27 to `stim_per_set = 80;` (this restricts the localizer to the first 80 images of each category). Add the following case to the `run_sets` function after line 107:

  ```matlab
  case 4
      run_sets = repmat(seq.stim_set3, seq.num_runs, 1);
  ```
- Change the while loop on line 42 of `fLoc/runme.m` to:

  ```matlab
  while ~ismember(stim_set, 1:4)
      stim_set = input('Which stimulus set? (1 = standard, 2 = alternate, 3 = both, 4 = food) : ');
  end
  ```

- When running the localizer, specify option 4. The run should take exactly 4 minutes.
Follow the fLoc instructions to analyse the data using vistasoft.
The preprocessed localizer data from our paper is available at this link. After downloading, it can be placed in the `food4thought/localizer/data` directory. Please contact the corresponding author to obtain the pycortex store for the subjects from the paper; once you have it, add those subjects to your own pycortex store. You can also run this analysis with your own data, after preprocessing it, obtaining the FreeSurfer surfaces for your subject, and adding them to pycortex (a minimal sketch of this last step follows).
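For reference, importing a FreeSurfer subject into the local pycortex store might look like the following minimal sketch; the subject ID below is a placeholder, not one of the paper's subjects:

```python
import cortex

# Minimal sketch: import a FreeSurfer subject into the local pycortex filestore.
# Assumes FreeSurfer's recon-all has already been run for this subject and that
# $SUBJECTS_DIR points at your FreeSurfer subjects directory.
# "sub-01" is a placeholder subject ID, not a subject from the paper.
cortex.freesurfer.import_subj("sub-01")

# Once imported, the subject appears in the pycortex store and can be used to
# build cortex.Volume / cortex.Vertex objects for visualizing localizer results.
```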
The localizer code is organized as follows, with each file being independently runnable:
- `analysis`
  - `run_all.ipynb`, used to run the localizer analysis for identifying food regions (a rough sketch of this kind of analysis follows the list)
  - A set of `.py` files containing the necessary analysis and utility functions
- `data`
  - Preprocessed localizer fMRI data for each subject, along with the stimulus log files
- `food`
  - The food images used in the food localizer
- `res`
  - Directory for storing the eventual results
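To give a sense of what this step involves, the snippet below sketches one common way of identifying food-selective voxels from localizer response estimates; it is illustrative only, not necessarily the exact procedure implemented in `run_all.ipynb`, and the array names, shapes, and file paths are assumptions:

```python
import numpy as np
from scipy import stats

# Illustrative sketch of a food-selectivity contrast on localizer betas.
# Assumed (placeholder) inputs:
#   betas:  (n_blocks, n_voxels) response estimates, one row per stimulus block
#   labels: (n_blocks,) category label for each block, e.g. 'food', 'body', ...
betas = np.load("betas.npy")
labels = np.load("labels.npy", allow_pickle=True)

food_resp = betas[labels == "food"]    # blocks from the food condition
other_resp = betas[labels != "food"]   # blocks from all other categories

# Per-voxel two-sample t-test: food vs. all other categories.
t_vals, p_vals = stats.ttest_ind(food_resp, other_resp, axis=0)

# Candidate food-selective voxels respond significantly more to food images.
food_selective = (t_vals > 0) & (p_vals < 0.05)
print(f"{food_selective.sum()} candidate food-selective voxels")
```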
The NSD data was collected by Allen et al., 2021 and is freely available at http://naturalscenesdataset.org/. The NSD code is organized as follows, with each file being independently runnable:
- Analysis
  - Encoding models
    - OLS encoding model
    - Ridge regression encoding model (a minimal sketch follows this list)
  - Decoding models
    - Note: any of these steps can be replaced with custom searchlights
    - Searchlight_get_indices: get the indices of each voxel's searchlight
    - Searchlight_inds_to_voxel_vals: using those indices, get the corresponding voxel values and filter the voxel responses down to only the relevant images (in our case, the shared images)
    - Searchlight_cvm: run SVM cross-validation on the searchlight data to decode food vs. non-food (see the decoding sketch below)
  - PCA
  - Clustering
    - Specify within this file what kind of input to cluster
- Data
  - Basic data that is needed and cannot be produced by the user through preprocessing
- Visualization
  - Example visualization tool
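As a rough illustration of the encoding-model step (not the exact script in this repository), a ridge regression encoding model predicts each voxel's response from stimulus features; the feature and response arrays and file names below are placeholders:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Minimal sketch of a ridge regression encoding model. Assumed (placeholder) inputs:
#   features:  (n_stimuli, n_features) stimulus representations
#   responses: (n_stimuli, n_voxels) voxel responses (stimuli x voxels, see note below)
features = np.load("features.npy")
responses = np.load("responses.npy")

X_train, X_test, y_train, y_test = train_test_split(
    features, responses, test_size=0.2, random_state=0
)

model = Ridge(alpha=1.0)  # in practice the regularization strength is usually tuned
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Per-voxel prediction accuracy: correlation between predicted and held-out responses.
corr = np.array([
    np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(y_test.shape[1])
])
print("median prediction correlation:", np.median(corr))
```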
Note: voxel data is assumed to be provided in a stimuli × voxels format, where the voxels correspond to points on the cortical surface.
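The decoding step can be sketched along the same lines. The snippet below assumes the stimuli × voxels format described in the note above and shows cross-validated SVM decoding of food vs. non-food within a single searchlight; it is illustrative only, not the exact implementation of the Searchlight_* files, and all names and paths are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Minimal sketch of decoding food vs. non-food within one searchlight.
# Assumed (placeholder) inputs:
#   responses:  (n_stimuli, n_voxels) voxel data, stimuli x voxels
#   is_food:    (n_stimuli,) boolean label, True for food images
#   sl_indices: column indices of the voxels in one voxel's searchlight
responses = np.load("responses.npy")
is_food = np.load("is_food.npy")
sl_indices = np.arange(100)  # placeholder searchlight indices

X = responses[:, sl_indices]  # restrict to this searchlight's voxels
y = is_food.astype(int)

# 5-fold cross-validated linear SVM accuracy for this searchlight.
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=5)
print("searchlight decoding accuracy:", scores.mean())

# In the full analysis this is repeated for every voxel's searchlight, yielding
# a decoding-accuracy map over the cortical surface.
```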