- What is PALS?
- Expected Data Structure
- Getting started: Installation Guide
- PALS Configuration Definition
- Outputs
PALS is a pipeline for reliably preprocessing images of subjects with stroke lesions. The pipeline is implemented using Nipype, with several modules:
- Reorientation to radiological convention (LAS: the subject's left appears on the right of the image)
- Lesion correction for healthy white matter
- Lesion load calculation by ROI
Here is a visualization of the workflow:
For additional information about the original implementation, see the publication in Frontiers in Neuroinformatics.
PALS expects its input data to be BIDS-compatible but does not expect any particular values for the BIDS entities. You will need to modify the configuration file to set "LesionEntities" and "T1Entities" to match your data. Outputs are provided in BIDS derivatives.
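As an illustration, the "LesionEntities" and "T1Entities" entries in config.json might look like the following for files registered to a space named MNIEx2009aEx. The nesting and field names here are a sketch based on the entity descriptions in this document, not an authoritative schema; generate the real file with the PALS Config Generator.

```json
{
  "LesionEntities": {
    "space": "MNIEx2009aEx",
    "label": "L",
    "suffix": "mask"
  },
  "T1Entities": {
    "space": "MNIEx2009aEx",
    "desc": "T1FinalResampledNorm"
  }
}
```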
The naming conventions of the input must be as follows:

- Main Image:
  - Unregistered: `sub-{subject}_ses-{session}_T1.nii.gz`
  - Registered: `sub-{subject}_ses-{session}_space-{space}_desc_T1.nii.gz`
- Lesion Mask: `sub-{subject}_ses-{session}_space-{space}_label-L_desc-T1lesion_mask.nii.gz`
- White Matter Segmentation File: `sub-{subject}_ses-{session}_space-{space}_desc-WhiteMatter_mask.nii.gz`

Here, `space` should be the name of the reference image, or `orig` if unregistered. For example: `sub-01_ses-R001_space-orig_desc_T1.nii.gz`
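To sanity-check your filenames against the lesion-mask convention above, a small helper like the following can be used. This is an illustrative script, not part of PALS; `parse_lesion_filename` is a hypothetical name, and the pattern encodes only the convention shown above.

```python
import re

# Matches the lesion mask convention:
# sub-{subject}_ses-{session}_space-{space}_label-L_desc-T1lesion_mask.nii.gz
LESION_PATTERN = re.compile(
    r"sub-(?P<subject>[^_]+)"
    r"_ses-(?P<session>[^_]+)"
    r"_space-(?P<space>[^_]+)"
    r"_label-L_desc-T1lesion_mask\.nii\.gz$"
)

def parse_lesion_filename(name):
    """Return the BIDS entities of a lesion mask filename,
    or None if it does not match the expected convention."""
    m = LESION_PATTERN.match(name)
    return m.groupdict() if m else None

print(parse_lesion_filename(
    "sub-01_ses-R001_space-orig_label-L_desc-T1lesion_mask.nii.gz"
))
```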
A walkthrough of the PALS installation is available on YouTube.
PALS is run directly via the `run_pals.py` script. Additionally, you will have to install the Python packages listed in `requirements.txt`.

1. PALS is implemented in Python 3.9.0; you will first need to install Python.
2. We recommend that you also install Virtualenv, a tool for creating Python virtual environments:

   ```
   python3.9 -m pip install --user virtualenv
   ```

3. Create a virtual environment in your workspace for PALS with

   ```
   python3.9 -m venv pals_venv
   ```

   and activate the environment with `source pals_venv/bin/activate`. You can deactivate the environment by typing `deactivate` in the command line when not using PALS. You will need to activate the environment every time before use.
4. Install PALS through your terminal using:

   ```
   python3.9 -m pip install -U git+https://github.com/npnl/PALS
   ```

5. Additionally, you will need to download the PALS code to your workspace:

   ```
   git clone https://github.com/npnl/PALS
   ```
You will also need to install the following software packages on your machine. This is the full list of required neuroimaging packages:

- FSL, for running FLIRT and FAST.
- The Python packages in `requirements.txt`. These can be installed in your virtual environment with the bash command

  ```
  python3.9 -m pip install -r requirements.txt
  ```

  Run this command after you have `cd`'d into the cloned PALS directory (e.g. `~/PALS`), and with your virtual environment activated as in step 3.

Note that if your intended pipeline won't use components that depend on a particular package, that package does not need to be installed. For example, if you plan to use FLIRT for registration, you don't need to install ANTs.
Once the configuration file is set, you can run PALS from the command line:
For direct use:

```
python3.9 run_pals.py
```

PALS will prompt for the required inputs through a GUI, apply the desired pipeline, then return the output in a BIDS directory specified by the 'Outputs' settings.
Docker must be installed to run PALS in a Docker container. You can follow instructions from here to install the Docker software on your system. Once Docker is installed, follow the instructions below to run PALS.
NOTE: this requires 12 GB of free space on your hard drive to run.
1. Gather all subjects on which you want to perform PALS operations into a single data directory. This directory should contain sub-directories with subject IDs for each subject and all reference files (see Expected Data Structure). For example, here we will call this directory `/subjects`.
2. Create another directory to contain the result files after running PALS on the input subjects. We will call this directory `/results` in our example.
3. Go to the PALS Config Generator, select all the options that apply, and download the config file. This step will download a file named `config.json`. Do not rename this file.
4. Store the config file in a separate directory. Here, we have moved our config file to our `/settings` directory.
A walkthrough of the PALS installation using Docker is available on YouTube.
1. Make sure that your Docker daemon is already running. You can check this by executing the following command in the terminal:

   ```
   docker run hello-world
   ```

   If you see output like the following, then you have a running instance of Docker on your machine and you are good to go.

   ```
   Unable to find image 'hello-world:latest' locally
   latest: Pulling from library/hello-world
   1b930d010525: Pull complete
   Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
   Status: Downloaded newer image for hello-world:latest

   Hello from Docker!
   This message shows that your installation appears to be working correctly.
   ...
   ```
2. To run PALS, run the following command, making sure to replace the file paths with your own:

   ```
   docker run -it -v <absolute_path_to_directory_containing_input_subjects>:/input/ -v <absolute_path_to_the_output_directory>:/output/ -v <absolute_path_to_directory_containing_config_file>:/config/ laharimuthyala/npnl-pals:v1 -d
   ```

   For example, with the configuration file created in the Preparation step, the command to run PALS would be:

   ```
   docker run -it -v /subjects:/input/ -v /results:/output/ -v /settings:/config/ laharimuthyala/npnl-pals:v1 -d
   ```

   Note: Make sure you do not change the `:/input/`, `:/output/`, or `:/config/` parts of the command!
3. That's it! You can find the outputs from PALS in the output directory you specified in Preparation step #2.
This module will check that all subject inputs are in the same orientation, flag subjects that have mismatched input orientations, and convert all remaining inputs to radiological convention. This is recommended for all datasets, and especially for multi-site data.
Orientation to standardize to. Options: L/R (left/right), A/P (anterior/posterior), I/S (inferior/superior). Default is LAS.
This module will perform registration to a common template.
Registration method. Example: FLIRT (default) or leave blank (no registration).
This module will perform brain extraction. Options: true, false.
Method to use for brain extraction. Options: BET (default) or leave blank (no brain extraction).
This module will perform white matter segmentation. Options: true, false. If false, and you want to perform LesionCorrection, LesionLoadCalculation, or LesionHeatmap, you must place a white matter segmentation file in the same location as the input files in the BIDS structure.
This module will perform lesion correction. Options: true, false. If true, requires white matter segmentation file.
This module will compute lesion load. Options: true, false. If true, requires white matter segmentation file.
This module will combine the lesions into a heatmap. Options: true, false. If true, requires white matter segmentation file.
Directory path to the BIDS root directory for the raw data.
ID of the subject to run. Runs all subjects if left blank. Ex: r001s001
ID of the session to run. Runs all sessions if left blank. Ex: 1
Path to the BIDS root directory for the lesion masks.
Path to the BIDS root directory for the white matter segmentation files.
Path to the directory containing ROI image files.
Number of threads to use for multiprocessing. Has no effect unless more than 1 subject is being processed.
Provide the space for your lesion file. For example, put 'MNIEx2009aEx' if your file is sub-r044s001_ses-1_space-MNIEx2009aEx_label-L_desc-T1lesion_mask.nii
Provide the label for your lesion file. For example, put 'L' if your file is sub-r044s001_ses-1_space-MNIEx2009aEx_label-L_desc-T1lesion_mask.nii
Provide the suffix for your lesion file. For example, put 'mask' if your file is sub-r044s001_ses-1_space-MNIEx2009aEx_label-L_desc-T1lesion_mask.nii
Provide the space for your T1 file. For example, put 'MNIEx2009aEx' if your file is sub-r044s001_ses-1_space-MNIEx2009aEx_desc-T1FinalResampledNorm.nii
Provide the desc for your T1 file. For example, put 'T1FinalResampledNorm' if your file is sub-r044s001_ses-1_space-MNIEx2009aEx_desc-T1FinalResampledNorm.nii
Path to directory where to place the output.
Value to use for 'space' entity in BIDS output filename.
Reserved for future use.
Path for saving registration transform.
Path for saving reoriented volume.
Path for saving the brain extracted volume.
Path for saving the white matter-corrected lesions.
Cost function for registration.
Path to the reference file.
Minimum value for the image.
Maximum value for the image.
The deviation of the white matter intensity as a fraction of the mean white matter intensity.
Overlays the heatmap on this image and creates a NIfTI file with the overlay and a NIfTI file with the mask only. Also produces 4 PNGs: 9 slices of the lesions from the sagittal, axial, and coronal orientations (3 images), plus an image with a cross-section of each orientation. If your images are pre-registered, you MUST use the reference image that was used for their registration.
Transparency to use when mixing the reference image and the heatmap. Smaller values produce a darker reference image and a lighter heatmap.
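Putting the module options above together, a config.json for a typical pipeline might look roughly like the sketch below. The key names and structure here are illustrative assumptions, not the authoritative schema; the PALS Config Generator produces the correct format for your version.

```json
{
  "Orientation": "LAS",
  "Registration": "FLIRT",
  "BrainExtraction": true,
  "WhiteMatterSegmentation": true,
  "LesionCorrection": true,
  "LesionLoadCalculation": true,
  "LesionHeatmap": false
}
```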
The precise output will depend on the flags you set, but here is a list of the output files you would typically expect:

- `graph.png`, `graph_detailed.png` - Visual representations of the pipeline used to generate the data.
- `sub-X_ses-Y_desc-LAS_T1w.nii.gz` - The input data reoriented to LAS. "LAS" will change to match your requested orientation.
- `sub-X_ses-Y_desc-LesionLoad.csv` - A .csv file containing the lesion load in each of the requested regions of interest. Units are in voxels. The `UncorrectedVolume` column contains the total number of voxels; `CorrectedVolume` subtracts the white matter voxels from `UncorrectedVolume` (if `LesionCorrection` is set to `true` in the config file).
- `sub-X_ses-Y_space-SPACE_desc-CorrectedLesion_mask.nii.gz` - The lesion mask after white matter correction; note that the quality of the mask depends on the quality of the white matter segmentation.
- `sub-X_ses-Y_space-SPACE_desc-transform.mat` - Affine matrix for the transformation used to register the subject images.
- `sub-X_ses-Y_space-SPACE_desc-WhiteMatter_mask.nii.gz` - White matter mask generated by the white matter segmentation.
Placeholder values:
- X: subject ID
- Y: session ID
- SPACE: String indicating the space the image is in (e.g. MNI152NLin2009aSym)
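Because lesion loads are reported in voxels, converting them to physical volume requires your images' voxel dimensions. The helper below is an illustrative sketch, not part of PALS: the `Subject` column name is an assumption, while `CorrectedVolume` is the column described above.

```python
import csv

def lesion_volumes_ml(csv_path, voxel_volume_mm3=1.0):
    """Read a LesionLoad-style CSV and convert corrected lesion loads
    (voxel counts) to volumes in mL, given the voxel volume in mm^3.
    Assumes a 'Subject' identifier column (hypothetical)."""
    volumes = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # "CorrectedVolume" is the white-matter-corrected voxel count
            voxels = float(row["CorrectedVolume"])
            volumes[row["Subject"]] = voxels * voxel_volume_mm3 / 1000.0
    return volumes
```

For anisotropic data, pass the product of the three voxel dimensions (in mm) as `voxel_volume_mm3`.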
If you use PALS for your paper, please cite the original PALS publication in Frontiers in Neuroinformatics.