Development Setup

Docker

Install docker and docker-compose.

Configuration

cp .env.example .env

Edit the values in .env to point to existing PostgreSQL databases. See nmdc_server/config.py for all configuration variables. Variable names in .env should be all uppercase and prefixed with NMDC_.
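
For example, a database connection setting might look like the following. The variable name here is illustrative only; confirm the exact setting names the application reads in nmdc_server/config.py.

# Illustrative example -- check nmdc_server/config.py for the real setting names
NMDC_DATABASE_URI=postgresql://postgres:postgres@localhost:5432/nmdc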

OAuth setup

  1. Create an ORCID account at orcid.org.

  2. Create an Application via the ORCID developer tools page.

    • Set the Redirect URIs (the first and only one) to http://127.0.0.1:8000
      • In case you run into validation errors, you may find this issue helpful.
      • Note: Our production Redirect URIs are listed here.
    • You will use the resulting Client ID and Client Secret in the next step.
  3. Set the following configuration in .env.

    NMDC_ORCID_CLIENT_ID=changeme
    NMDC_ORCID_CLIENT_SECRET=changeme
  4. Populate the fields below in .env. Generate a value for each by running openssl rand -hex 32 (run the command once per field; see the sketch below for one way to generate both), then restart the stack.

    NMDC_SESSION_SECRET_KEY=changeme
    NMDC_API_JWT_SECRET=changeme
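
For example, one way to generate and record both values in a single pass (a convenience sketch; it appends new lines to .env, so remove any existing placeholder lines for these two keys first):

echo "NMDC_SESSION_SECRET_KEY=$(openssl rand -hex 32)" >> .env
echo "NMDC_API_JWT_SECRET=$(openssl rand -hex 32)" >> .env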

Load production data

The nmdc-server CLI has a load-db subcommand which populates your local database using a nightly production backup. These backups are stored on NERSC. You must have NERSC credentials to use this subcommand.

First, use NERSC's sshproxy tool to generate an SSH key:

sshproxy.sh -u <nersc_username>

Then run the load-db subcommand from a backend container, mounting the SSH key:

docker compose run \
  --rm \
  -v ~/.ssh/nersc:/tmp/nersc \
  backend \
  nmdc-server load-db -u <nersc_username>

To see all CLI options, run:

nmdc-server load-db --help

Note: if you already have a local database set up, the first time you attempt to load from a production backup you may see an error about a missing nmdc_data_reader role. If you see this error, run the following command to remove existing docker volumes:

docker compose down -v

This should only need to be done once. When the db service starts up again (including via running the load-db command), the necessary roles and databases will be created automatically.

Running the server

docker-compose up -d

View the main application at http://127.0.0.1:8080/ and the Swagger page at http://127.0.0.1:8080/api/docs.

Outside Docker

# Start only the service dependencies.
docker-compose up -d db data redis

Using a Python virtual environment (Python 3.7+ is required):

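If you need to create and activate the virtual environment first, one common approach (the .venv directory name is just a convention) is:

python3 -m venv .venv
source .venv/bin/activate
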
pip install -e .
pip install uvicorn tox

uvicorn nmdc_server.asgi:app --reload

View the Swagger page at http://127.0.0.1:8000/api/docs.

Running ingest

You need an active SSH tunnel to NERSC attached to the compose network. After running docker-compose up, start the tunnel container shown below.

If you haven't already, set up MFA on your NERSC account (it's required for SSHing in).

export NERSC_USER=changeme
docker run --rm -it \
  -p 27017:27017 \
  --network nmdc-server_default \
  --name tunnel \
  kroniak/ssh-client \
  ssh -o StrictHostKeyChecking=no \
    -L 0.0.0.0:27017:mongo-loadbalancer.nmdc.production.svc.spin.nersc.org:27017 \
    $NERSC_USER@dtn01.nersc.gov \
    '/bin/bash -c "while [[ 1 ]]; do echo heartbeat; sleep 300; done"'
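
The tunnel stays alive by printing a heartbeat line every five minutes. To confirm it is still up before kicking off an ingest, you can follow the container's logs from another terminal (the container name tunnel comes from the command above):

docker logs -f tunnel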

You can connect to the Mongo instance manually by running:

docker run -d -p 3000:3000 --network nmdc-server_default mongoclient/mongoclient

In order to populate the database, you must create a .env file in the top-level directory containing Mongo credentials.

# .env
NMDC_MONGO_USER=changeme
NMDC_MONGO_PASSWORD=changeme

With that file in place, populate the Docker volume by running:

docker-compose run backend nmdc-server truncate # if necessary
docker-compose run backend nmdc-server migrate
docker-compose run backend nmdc-server ingest -vv --function-limit 100

Running the client

Run the client in development mode.

cd web/
yarn
yarn serve

View the main application at http://127.0.0.1:8081.

Why not localhost?

It is recommended to use 127.0.0.1 instead of localhost for local development because localhost is not allowed as a redirect URI for an ORCID client. The workaround is to register 127.0.0.1 as a redirect URI with ORCID and then visit 127.0.0.1 for local testing.

Testing

tox
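
tox accepts its standard flags here; for example, to list the configured environments or run just one of them (environment names are defined in the project's tox configuration):

# List available tox environments
tox -l
# Run a single environment by name
tox -e <envname>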

Generating new migrations

# Autogenerate a migration diff from the current HEAD
docker-compose run backend alembic -c nmdc_server/alembic.ini revision --autogenerate

To generate a migration, your database state must match HEAD. If you started the server from a completely empty database, the default behavior is to skip the migration scripts and create the schema directly from models.py, so you first need to reset your database so that it matches HEAD.

# Destroy everything.  You'll lose your data!
docker-compose down -v
docker-compose up -d db
# Create the database
docker-compose run backend psql -c "create database nmdc_a;" -d postgres
# Run migrations to HEAD
docker-compose run backend alembic -c nmdc_server/alembic.ini upgrade head
# Autogenerate a migration diff from the current HEAD
docker-compose run backend alembic -c nmdc_server/alembic.ini revision --autogenerate

Developing with the shell

A handy IPython shell is provided with some commonly used symbols automatically imported, and autoreload 2 enabled. To run it:

docker-compose run --rm backend nmdc-server shell

You can also pass --print-sql to output all SQL queries.
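
For example, to start the shell with SQL echoing enabled:

docker-compose run --rm backend nmdc-server shell --print-sql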