Deployment

The easiest way to develop, run and test the whole system is to use the manifest and commands below. Save the Docker stack manifest as record-import.yml, since the deployment commands later on this page refer to that filename.

Docker stack manifest

version: '3.7'
services:
  db:
    image: mongo:4
    volumes:
    - /etc/localtime:/etc/localtime:ro
    deploy:
      restart_policy:
        condition: any
  mq:
    image: rabbitmq:3
    volumes:
    - /etc/localtime:/etc/localtime:ro    
    networks:
    - mq
    - default
    environment:
      HOSTNAME: mq
    deploy:
      restart_policy:
        condition: any
  api:  
    image: quay.io/natlibfi/melinda-record-import-api:latest
    depends_on:
    - db
    volumes:
    - /etc/localtime:/etc/localtime:ro
    - /${USERS_FILE}:/users.json:ro
    environment:            
      API_URL: ${API_URL}
      MONGO_URI: mongodb://db/db
      PASSPORT_LOCAL_USERS: 'file:///users.json'
      TZ: Europe/Helsinki
      LOG_LEVEL: debug
    ports:
    - target: 8080
      published: 8080
      protocol: tcp
      mode: host
    deploy:
      restart_policy:
        condition: any
  controller:
    image: quay.io/natlibfi/melinda-record-import-controller:latest
    depends_on:
    - api
    - db
    - mq
    volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/run/docker.sock:/var/run/docker.sock
    environment:            
      API_URL: ${API_URL}
      API_USERNAME: foo
      API_PASSWORD: bar
      API_USERNAME_IMPORTER: faa
      API_PASSWORD_IMPORTER: fuu
      API_USERNAME_TRANSFORMER: fii
      API_PASSWORD_TRANSFORMER: faa
      AMQP_URL: amqp://mq
      MONGO_URI: mongodb://db/db
      CONTAINER_NETWORKS: "[\"record-import_mq\"]"
      JOB_FREQ_PRUNE_CONTAINERS: '1 day'
      BLOB_METADATA_TTL: '1 day'
      BLOB_CONTENT_TTL: '1 day'
      CONTAINER_CONCURRENCY: 5
      TZ: Europe/Helsinki
      LOG_LEVEL: debug
    deploy:
      restart_policy:
        condition: any
networks:
  mq:
    external: true
    name: record-import_mq

Environment variables

Name        Description
API_URL     Defaults to http://localhost:8080. Set it to http://<your-public-ip>:8080 so that the API can be accessed from the dispatched containers and from outside Docker.
USERS_FILE  File containing the user authentication and authorization "database". See below for the format of this file.

Local user configuration

The following shows the format of the user configuration file:

[
  {
    "id": "foo",
    "password": "bar",
    "name": {
      "givenName": "foo",
      "familyName": "bar"
    },
    "displayName": "DEV CREATOR",
    "emails": [{"value": "foo@fu.bar", "type": "work"}],
    "organization": [],
    "groups": [
      "creator",
      "dev"
    ]
  },
  {
    "id": "fii",
    "password": "faa",
    "name": {
      "givenName": "fii",
      "familyName": "faa"
    },
    "displayName": "DEV TRANSFORMER",
    "emails": [{"value": "foo@fu.bar", "type": "work"}],
    "organization": [],
    "groups": [
      "transformer",
      "dev"
    ]
  },
  {
    "id": "faa",
    "password": "fuu",
    "name": {
      "givenName": "faa",
      "familyName": "fuu"
    },
    "displayName": "DEV IMPORTER",
    "emails": [{"value": "foo@fu.bar", "type": "work"}],
    "organization": [],
    "groups": [
      "importer",
      "dev"
    ]
  }
]

Creator, transformer and importer are blob operation permission groups. Dev is a processing group that binds the source, transformer and importer together.

Operation    Permission groups
Query        all
Read         all
Create       creator
Update       importer, transformer
Abort        all
ReadContent  all

Deploy the stack

Switching Docker to swarm mode

docker swarm init
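
To confirm that swarm mode is active, the node state can be checked with standard Docker commands:

docker info --format '{{.Swarm.LocalNodeState}}'
docker node ls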

Deploying the network for the stack

docker network create --attachable -d overlay --scope swarm --internal record-import_mq
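
The network should now show up as an attachable overlay network (standard Docker commands):

docker network ls --filter name=record-import_mq
docker network inspect record-import_mq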

Deploying the stack

docker stack deploy -c record-import.yml record-import
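
Once deployed, the stack's services and their tasks can be checked with standard Docker commands:

docker stack services record-import
docker stack ps record-import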

Setting environment variables

API_URL=http://<public-ip>:8080 docker stack deploy -c record-import.yml record-import

Or from an env file (recommended):

set -a;source .env;set +a;docker stack deploy -c record-import.yml record-import
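
As an example, a minimal .env file for the variables described above could look like the following. The IP address and path are placeholders; note that the manifest prepends a leading slash to USERS_FILE, so the value is given here without one.

API_URL=http://192.0.2.10:8080
USERS_FILE=home/user/record-import/users.json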

Configuring the local stack

Creating an authentication profile for the API

Create a dev-group.json file containing the information below. This file specifies the processing group configuration, such as which images are used as the transformer and importer and which environment variables they are given.

{
  "import": {
    "image": "quay.io/natlibfi/melinda-record-import-importer-dummy:latest",
    "env": {
      "NOOP_MELINDA_IMPORT": "1",
      "LOG_LEVEL": "debug"
    }
  },
  "transformation": {
    "image": "quay.io/natlibfi/melinda-record-import-transformer-dummy:latest",
    "env": {
      "LOG_LEVEL": "debug"
    }
  },
  "auth": {
    "groups": [
      "dev"
    ]
  },
  "id": "dev"
}

Adding the authentication profile to the API

npx @natlibfi/melinda-record-import-cli profiles modify foo dev-group.json

Now you should be ready to develop in your local Docker environment.

When you input blobs into the system, use the creator account credentials defined in users.json and pass the appropriate content type:

npx @natlibfi/melinda-record-import-cli blobs create <file> -p <profile id e.g. foo> -t <contentType e.g. application/json>
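
For example, with the profile created above (foo) and a hypothetical input file records.json:

npx @natlibfi/melinda-record-import-cli blobs create records.json -p foo -t application/json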

Creating your local transformer or importer images

docker image build -t <image name> <path to the directory containing the Dockerfile>
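
For example, assuming a hypothetical directory my-transformer/ that contains a Dockerfile:

docker image build -t my-transformer:dev ./my-transformer

The resulting tag (my-transformer:dev) can then be referenced in the processing group file, e.g. in place of the dummy transformer image.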

Docker dev tips:

Melinda-record-import-cli command help.

npx @natlibfi/melinda-record-import-cli --help

Shutting down the Docker stack

docker stack rm record-import
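
If you also want to remove the overlay network and leave swarm mode afterwards (standard Docker commands):

docker network rm record-import_mq
docker swarm leave --force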

How to check logs

docker container ls -a

docker container logs <container id>

docker container logs -f <container id>
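
Since the stack runs in swarm mode, logs can also be followed per service; the names below assume the stack name record-import used above:

docker service logs -f record-import_api
docker service logs -f record-import_controller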

RabbitMQ status (blob queues)

docker exec <container id> rabbitmqctl status

docker exec <container id> rabbitmqctl list_queues
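
As a convenience sketch, the RabbitMQ container id can be resolved from its swarm-generated name prefix and the queues listed with message and consumer counts (the MQ_ID variable name is only an example):

MQ_ID=$(docker ps --filter "name=record-import_mq" --format "{{.ID}}" | head -n 1)
docker exec "$MQ_ID" rabbitmqctl list_queues name messages consumers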