The query tool is a React application, developed in TypeScript using a variety of tools including Vite, Cypress, and MUI.
Quickstart | Local Installation | Usage | Testing | License
Please refer to our official documentation for more detailed information on how to use the query tool.
The query tool is hosted at https://query.neurobagel.org/ and interfaces with the Neurobagel federation API.
To run the query tool locally, you have two options:
- Use our Docker image
- Do a manual install from the cloned git repo

Before proceeding with either option, you need to set the environment variables.
| Environment variable | Type | Required | Default value if not set | Example |
| --- | --- | --- | --- | --- |
| `NB_API_QUERY_URL` | string | Yes | - | https://federate.neurobagel.org/ |
| `NB_QUERY_APP_BASE_PATH` | string | No | / | /query/ |
| `NB_ENABLE_AUTH` | boolean | No | false | false |
| `NB_QUERY_CLIENT_ID` | string | Yes (if `NB_ENABLE_AUTH` is set to true) | - | 46923719231972-dhsahgasl3123.apps.googleusercontent.com |
You'll need to set the `NB_API_QUERY_URL` environment variable to run the query tool. `NB_API_QUERY_URL` is the URL of the Neurobagel API that the query tool sends its requests to for results.
If you are using a custom configuration where the query tool is accessible via a path other than the root (`/`), you need to set `NB_QUERY_APP_BASE_PATH` to your custom path. This ensures that the query tool is correctly rendered and accessible at the specified URL.
If the API you'd like to send queries to requires authentication, you need to set `NB_ENABLE_AUTH` to `true` (it is `false` by default). This enables the app's authentication flow.

If `NB_ENABLE_AUTH` is set to `true`, you also need to provide a valid client ID for authentication via `NB_QUERY_CLIENT_ID`. At the moment, the query tool uses Google for authentication, so you need to obtain a client ID from the Google developer console. See the documentation for more information.
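For orientation, client-side code in a Vite app could read these variables along the following lines. This is a hedged sketch: it assumes the project's Vite config exposes `NB_`-prefixed variables to the client (for example via the `envPrefix` option), which may not match the repo's actual wiring.

```ts
// env.d.ts-style declaration so the custom variables type-check (sketch only).
interface ImportMetaEnv {
  readonly NB_API_QUERY_URL: string;
  readonly NB_QUERY_APP_BASE_PATH?: string;
  readonly NB_ENABLE_AUTH?: string;
  readonly NB_QUERY_CLIENT_ID?: string;
}

// All import.meta.env values arrive as strings, so booleans must be compared explicitly.
const apiQueryURL = import.meta.env.NB_API_QUERY_URL;
const authEnabled = import.meta.env.NB_ENABLE_AUTH === "true";
// Only meaningful when authentication is enabled.
const clientID = import.meta.env.NB_QUERY_CLIENT_ID;
```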
To set environment variables, create a `.env` file in the root directory and add the environment variables there. If you're running a Neurobagel node API locally on your machine (following the instructions here), your `.env` file would look something like this:

NB_API_QUERY_URL=http://localhost:8000/

If you're using the remote API, your `.env` file would look something like this:

NB_API_QUERY_URL=https://federate.neurobagel.org/

If you're using a remote API with authentication, your `.env` file would look something like this:

NB_API_QUERY_URL=https://federate.neurobagel.org/
NB_ENABLE_AUTH=true
NB_QUERY_CLIENT_ID=46923719231972-dhsahgasl3123.apps.googleusercontent.com
Note that the remote `NB_API_QUERY_URL` uses `https` instead of `http`.
To obtain the query tool Docker image, simply run the following command in your terminal:
docker pull neurobagel/query_tool:latest
This Docker image includes the latest release of the query tool and a minimal HTTP server to serve the static tool.
To launch the query tool Docker container and pass in the `.env` file you have created, simply run:
docker run -p 5173:5173 --env-file=.env neurobagel/query_tool:latest
Then you can access the query tool at http://localhost:5173

Note: the query tool listens on port `5173` inside the Docker container. To expose it on a different port on the host, replace the host-side `5173` in the port mapping with the port you'd like to use. For example, if you'd like to run the tool on port `8000` of your machine, you can run the following command:
docker run -p 8000:5173 --env-file=.env neurobagel/query_tool:latest
To install the query tool directly, you'll need the node package manager (npm) and Node.js. You can find instructions for installing npm and Node.js in the official documentation.
Once you have npm and Node.js installed, you'll need to install the dependencies outlined in the `package.json` file. You can do so by running the following command:
npm install
To launch the tool in development mode, run the following command:
npm run dev
You can also build and then serve a (production) build of the application by running the following command:
npm run build && npm run preview
You can verify the tool is running by watching the terminal for the info messages from Vite regarding the environment, rendering, and the port the tool is running on.
Having installed the dependencies, run the following command to enable the husky `pre-commit` and `post-merge` hooks:
npx husky init
Since the query tool relies on other Neurobagel tools to function, their presence is often required during development. To facilitate this, a Docker Compose file containing a complete testing environment is provided. To use it, follow the steps below:
- Install the `recipes` and `neurobagel_examples` submodules:
git submodule init
git submodule update
- Pull the latest images and bring up the stack using the `test` profile:
docker compose --profile test pull && docker compose --profile test up -d
NOTE: Make sure the `.env` file in the root directory doesn't contain any of the environment variables used in the Docker Compose file, as Docker Compose reads `.env` by default and the values would conflict with the stack's configuration.
To define a cohort, set your inclusion criteria using the following:
- Age: Minimum and/or maximum age (in years) of participants that should be included in the results.
- Sex: Sex of participants that should be included in the results.
- Diagnosis: Diagnosis of participants that should be included in the results.
- Healthy control: Whether healthy participants should be included in the results. Once the healthy control checkbox is selected, the diagnosis field is disabled, since a participant cannot both be a healthy control and have a diagnosis.
- Minimum number of imaging sessions: Minimum number of imaging sessions a participant should have to be included in the results.
- Minimum number of phenotypic sessions: Minimum number of phenotypic sessions a participant should have to be included in the results.
- Assessment tool: Non-imaging assessment completed by participants that should be included in the results.
- Imaging modality: Imaging modality of participant scans that should be included in the results.
- Pipeline name: Name of the pipeline used to process subject scans.
- Pipeline version: Version of the pipeline used to process subject scans.
Once you've defined your criteria, submit them as a query and the query tool will display the results.
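To make the mapping from criteria to a query more concrete, the sketch below builds a query URL from a criteria object. The parameter names and the `query` path are illustrative placeholders, not necessarily the exact names used by the Neurobagel federation API; consult the API documentation for the real interface.

```ts
// Illustrative sketch only: parameter names and the "query" path are placeholders,
// not necessarily the exact names the Neurobagel federation API expects.
interface CohortCriteria {
  minAge?: number;
  maxAge?: number;
  sex?: string;
  diagnosis?: string;
  isControl?: boolean;
  minImagingSessions?: number;
  minPhenotypicSessions?: number;
  assessment?: string;
  imagingModality?: string;
  pipelineName?: string;
  pipelineVersion?: string;
}

function buildQueryURL(apiURL: string, criteria: CohortCriteria): string {
  const params = new URLSearchParams();
  const set = (key: string, value: string | number | undefined) => {
    if (value !== undefined && value !== "") params.set(key, String(value));
  };
  set("min_age", criteria.minAge);
  set("max_age", criteria.maxAge);
  set("sex", criteria.sex);
  // Healthy control and diagnosis are mutually exclusive (see the list above).
  if (criteria.isControl) {
    set("is_control", "true");
  } else {
    set("diagnosis", criteria.diagnosis);
  }
  set("min_num_imaging_sessions", criteria.minImagingSessions);
  set("min_num_phenotypic_sessions", criteria.minPhenotypicSessions);
  set("assessment", criteria.assessment);
  set("image_modal", criteria.imagingModality);
  set("pipeline_name", criteria.pipelineName);
  set("pipeline_version", criteria.pipelineVersion);
  return `${apiURL}query?${params.toString()}`;
}

// Example: all participants aged 20-40 with at least one imaging session.
const url = buildQueryURL("https://federate.neurobagel.org/", {
  minAge: 20,
  maxAge: 40,
  minImagingSessions: 1,
});
```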
The query tool offers two different TSV files for results:
- The cohort participant results TSV contains: dataset name, portal uri, number of matching subjects, subject id, session id, session file path, session type, age, sex, diagnosis, assessment, number of matching phenotypic sessions, number of matching imaging sessions, session imaging modality, session completed pipelines, dataset imaging modality, and dataset pipelines
- The cohort participant machine results TSV contains: dataset name, dataset portal uri, subject id, session id, session file path, session type, number of matching phenotypic sessions, number of matching imaging sessions, session imaging modality, session completed pipelines, dataset imaging modality, and dataset pipeline
You can refer to the neurobagel documentation to see what the outputs of the query tool look like and how they are structured. You can also download the raw example output files here.
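If you want to post-process a downloaded results file programmatically, a minimal TSV parser could look like the sketch below. The exact column header spellings come from the file itself and may differ from the descriptive field names listed above.

```ts
// Minimal TSV parsing sketch: turns a downloaded results file into row objects
// keyed by the column headers found in the file itself.
function parseTSV(tsv: string): Record<string, string>[] {
  const [headerLine, ...rows] = tsv.trim().split("\n");
  const headers = headerLine.split("\t");
  return rows.map((row) => {
    const values = row.split("\t");
    return Object.fromEntries(headers.map((header, i) => [header, values[i] ?? ""]));
  });
}
```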
The query tool uses the Cypress framework for testing.
To run the tests, execute the following command:
npx cypress open
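For context, a Cypress end-to-end spec for the tool might look roughly like the following. The file name, the visited URL, and the asserted text are placeholders for illustration, not taken from the repo's actual test suite.

```ts
// cypress/e2e/smoke.cy.ts -- illustrative placeholder spec, not part of the real suite.
describe("query tool smoke test", () => {
  it("loads the query form", () => {
    // Assumes the dev server is running locally on the default Vite port.
    cy.visit("http://localhost:5173");
    // Placeholder assertion: the Age field from the cohort criteria is visible.
    cy.contains("Age").should("be.visible");
  });
});
```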
The query tool is released under the terms of the MIT License.