Setting Up a Single‐Node DMOD Deployment
Install dependencies, starting with the Usage Dependencies. The exact process varies by platform, but on Linux environments most of them can be installed using the OS package manager (e.g., zypper in openSUSE):
# Also, go ahead (heh) and grab the Go and Make dependencies needed by the deployx plugin installed below
sudo zypper install bash docker docker-compose git-core openssl go make minio-client
Note
Package names may vary slightly across different OS/distributions.
Important
Depending on your system, you may need to add your user (and/or others) to the docker user group.
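As a hedged sketch for a typical systemd-based Linux system (the group name docker and the use of systemctl are assumptions that may differ on your platform):
# Add the current user to the 'docker' group; log out and back in for this to take effect
sudo usermod -aG docker "$USER"
# Ensure the Docker daemon is enabled and running before initializing the swarm (assumes systemd)
sudo systemctl enable --now docker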
Once Docker is installed, initialize a Docker Swarm:
docker swarm init
Important
Though these instructions are for a single-node deployment, in some circumstances it may also be necessary to open certain Docker Swarm ports on the host node's firewall; these ports are required if you later upgrade to a multi-node deployment.
The ports in question are 2377/tcp, 4789/udp, 7946/tcp, and 7946/udp. See Docker Swarm's Getting Started documentation for more details.
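As a hedged example, on a distribution using firewalld (common on openSUSE and similar systems), opening these ports might look like the following; adapt this to whatever firewall tooling your host actually uses:
# Open the Docker Swarm ports (assumes firewalld is the active firewall)
sudo firewall-cmd --permanent --add-port=2377/tcp --add-port=4789/udp --add-port=7946/tcp --add-port=7946/udp
sudo firewall-cmd --reload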
Note
Systems with multiple network addresses will need to specify the address on which to advertise by appending --advertise-addr <net_addr> to the command above.
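For example, with a placeholder address (substitute the address your node should actually advertise on):
# Hypothetical example address; replace with your host's address
docker swarm init --advertise-addr 192.168.1.10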
Also, after Docker is installed, install a required custom Docker plugin: deployx. That repo provides its own instructions, but the process will likely look something like this:
# Move to an appropriate development directory in which to clone the plugin repo
cd <dev_dir>
# Clone the repo and move to it
git clone https://github.com/aaraney/deployx.git && cd deployx
# Build and install the plugin for your user
make build && make install
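As a quick sanity check, assuming make install places the plugin in the standard per-user Docker CLI plugin directory (an assumption about that repo's Makefile), something like the following can confirm Docker sees the plugin:
# The plugin binary should appear in the per-user CLI plugin directory
ls ~/.docker/cli-plugins/
# If installed correctly, Docker should recognize the 'deployx' subcommand
docker deployx --help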
Then install the OS-level Development Dependencies (strictly required only if doing development). As with the Usage Dependencies, the exact packages vary by platform, but on a Linux machine they can usually be handled with a command similar to:
sudo zypper install python311 python311-devel python311-pip gcc-c++
Note
If you are not using a Python virtual environment, you could proceed with installing Python dependencies now in the global environment. We don't suggest this, though, so we'll wait for that step until after the repo directory is cloned and we have our venv/ directory.
Now we clone the DMOD repo to a local directory:
# Replace <repo_url> with that of either a fork or the main upstream repo (https://github.com/NOAA-OWP/DMOD.git).
# Likewise, replace <repo_local_dir> with the local destination directory, which does not need to already exist.
git clone <repo_url> <repo_local_dir>
# Then enter the repo dir (remaining examples assume running from this directory, unless otherwise noted)
cd <repo_local_dir>
While not strictly required, it is highly recommended that you create a Python virtual environment once the Python development dependencies are installed. The rest of this walkthrough assumes this was done. After the virtual environment is created, a helper script is available to install or update the needed Python packages within it.
Important
If necessary, replace the python command below with python3, python311, etc., depending on the version you need and how things are installed on your system.
# A reminder: this is assumed to be run from within the DMOD repo directory
python -m venv venv
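# Optionally activate the new virtual environment before continuing; this assumes a POSIX shell,
# and some DMOD helper scripts may be able to locate the 'venv/' directory on their own
source venv/bin/activate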
# Make sure to use '-D' to install just project requirements
./scripts/update_package.sh -D
As described in this section, we need to create a DMOD environment config. There is a helper script that can assist in setting this up based on the provided example. From the repo root directory, run:
./scripts/create_env_config.sh
Note
It is recommended you use the file name .env, as this is the default. More advanced usage of different file names and multiple files is supported by most DMOD tools and scripts.
Important
There is no default config guaranteed to work in all situations. Network subnet conflicts are a particularly common reason why our selected defaults may not all be suitable for your situation. In such cases, manual editing of your .env file will be necessary. See the config item descriptions (in example.env, if not also copied to the .env) for context on the different config items.
Next, perform the necessary setup for SSL certificates for a deployment. As discussed in that document, there is a script that can essentially do the bulk of this for you:
# Will create things under a 'ssl/' dir in working directory; you could add '-d <dir_name>' to specify elsewhere
./scripts/gen_cert.sh -init -email "yourEmail@email.com"
Important
Make sure DMOD_SSL_DIR in the deployment config is set to the full path of the top-level SSL directory.
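For instance, the relevant line in your .env might look something like the following; the path shown is a hypothetical example for a repo cloned under /home/youruser/DMOD using the default ssl/ location:
# Example value only; use the absolute path of your actual top-level SSL directory
DMOD_SSL_DIR=/home/youruser/DMOD/ssl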
Create a resources configuration YAML file, specifying the number of CPUs and amount of memory DMOD can utilize. This can be done using the helper script shown below, which will determine the appropriate file path from your environment configuration file. It is also always possible to create or edit the file manually.
# Note these are example values to give you 8 CPUs and 24GB (roughly) of memory
./scripts/create_resources_config.sh --cpus 8 --memory 24000000000
The following helper script will create the necessary Docker networks, based on the environment configuration:
./scripts/control_stack.sh main networks
Important
If this does not work, it generally means there is an IP address conflict between the configured Docker networks and existing networks on the system. Make sure to remove any Docker networks that were partially created, then review and adjust your environment config.
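Standard Docker commands can help diagnose and clean up in this situation; for example:
# List existing Docker networks and check their subnets for conflicts
docker network ls
docker network inspect <network_name>
# Remove any partially created DMOD networks before retrying
docker network rm <network_name>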
Next, start the local development registry stack. Configuration for this stack and the contained registry service is included within the DMOD sources.
./scripts/control_stack.sh -dc docker/dev_registry_stack/docker-registry.yml dev_registry_stack start
Note
Strictly speaking, running this registry is optional. However, some registry must be defined in the environment configuration discussed earlier, as one is needed for building images. By default, that configuration is set to correspond to the included development registry.
DMOD supports controlling the physical node on which certain DMOD services are run via Swarm labels. We need to add some labels to our node to allow the object store stack services to run on it.
# You could replace the '$(docker node ls -q)' with the swarm node's id or name, visible using 'docker node ls'
docker node update --label-add "minio1=true" --label-add "object_store_proxy=true" $(docker node ls -q)
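You can confirm the labels were applied with a standard Docker command, for example:
# Show the labels currently set on the (single) swarm node
docker node inspect --format '{{ json .Spec.Labels }}' $(docker node ls -q)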
Start the included, supplemental stack for our object store backing storage, which utilizes MinIO.
./scripts/control_stack.sh object_store start
We need to create some constructs within MinIO for DMOD's use. There is a script that will handle this:
./scripts/minio_init.sh --create-admin-alias
Important
Depending on your system, you may need to specify the command used to access the MinIO client program, which this helper script uses. Do this by using the --mc-command flag, followed by the appropriate command name.
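For example, on systems where the MinIO client binary is installed under a different name (e.g., mcli on some distributions, to avoid clashing with Midnight Commander's mc), the invocation might look like this; the client command name here is an assumption to adapt to your system:
# Hypothetical example: tell the helper script to invoke the MinIO client as 'mcli'
./scripts/minio_init.sh --create-admin-alias --mc-command mcli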
Now on to building our custom DMOD Docker images. First, one for Python dependencies:
./scripts/control_stack.sh py-sources build
Next, build the images for our main Docker stack, which includes the job worker images.
Important
Due to recent changes with Docker, several DMOD images need to be built in isolation and in a particular order. This is reflected below and will be remedied in a future release of DMOD.
./scripts/control_stack.sh --build-args "base" main build
./scripts/control_stack.sh --build-args "deps" main build
./scripts/control_stack.sh main build
./scripts/control_stack.sh main start
Note
You can use the command docker service ls to monitor if/when your services have started.
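For instance, a simple way to keep an eye on this (assuming the common watch utility is available on your system) is:
# Re-run 'docker service ls' every few seconds until all replicas show as running
watch -n 5 docker service ls
# To troubleshoot a specific service's tasks (service names are visible in 'docker service ls')
docker service ps <service_name>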
# Note that it is important that 'py-sources' stack images were built/rebuilt relatively recently
./scripts/control_stack.sh nwm_gui build
./scripts/control_stack.sh nwm_gui start