diff --git a/docs/source/_static/lasp_docker_registry_browse1.png b/docs/source/_static/lasp_docker_registry_browse1.png new file mode 100644 index 0000000..f77e40e Binary files /dev/null and b/docs/source/_static/lasp_docker_registry_browse1.png differ diff --git a/docs/source/_static/lasp_docker_registry_browse2.png b/docs/source/_static/lasp_docker_registry_browse2.png new file mode 100644 index 0000000..8fff8b2 Binary files /dev/null and b/docs/source/_static/lasp_docker_registry_browse2.png differ diff --git a/docs/source/_static/lasp_docker_registry_browse3.png b/docs/source/_static/lasp_docker_registry_browse3.png new file mode 100644 index 0000000..f58dc24 Binary files /dev/null and b/docs/source/_static/lasp_docker_registry_browse3.png differ diff --git a/docs/source/_static/lasp_docker_registry_delete_image1.png b/docs/source/_static/lasp_docker_registry_delete_image1.png new file mode 100644 index 0000000..358f128 Binary files /dev/null and b/docs/source/_static/lasp_docker_registry_delete_image1.png differ diff --git a/docs/source/_static/lasp_docker_registry_delete_image2.png b/docs/source/_static/lasp_docker_registry_delete_image2.png new file mode 100644 index 0000000..cf2beaf Binary files /dev/null and b/docs/source/_static/lasp_docker_registry_delete_image2.png differ diff --git a/docs/source/workflows/docker/beginner_guide_to_docker.md b/docs/source/workflows/docker/beginner_guide_to_docker.md index f102df7..3656db4 100644 --- a/docs/source/workflows/docker/beginner_guide_to_docker.md +++ b/docs/source/workflows/docker/beginner_guide_to_docker.md @@ -42,7 +42,7 @@ generally find yourself running the same commands for creating containers and wa **Docker Registry:** A registry or archive store is a place to store and retrieve docker images. This is one way to share already-built docker images. LASP has a private repository, in the form of the -[LASP docker registry](./lasp_docker_registry.md). +[LASP docker registry](lasp_docker_registry). 
So, you define a Docker *image* using a *Dockerfile* and/or a *Docker Compose* file. Running this image produces a
Docker *container*, which runs your code and environment. An image can be pushed up to a *registry*, where anyone with
@@ -59,9 +59,9 @@ the current directory for a file named `Dockerfile` by default, but you can spec
arguments or through your docker compose file.

Generally, each Docker image should be as small as possible. Each Dockerfile should only do one thing at a time. If you
-have a need for two extremely similar docker containers, you can also use [Multi-stage builds](./multi_stage_builds.md).
-You can orchestrate multiple docker containers that depend on each other using
-[Docker compose](./docker_compose_examples.md).
+have a need for two extremely similar docker containers, you can also use [Multi-stage builds](multi_stage_builds). You
+can orchestrate multiple docker containers that depend on each other using
+[Docker compose](docker_compose_examples).

To start, your Dockerfile should specify the base image using `FROM <base_image>`. Then, you can set up the environment
by using `RUN` commands to run shell commands. Finally, you can finish the container by using a `CMD` command. This is an
@@ -85,12 +85,11 @@ In the same folder, we run the `build` command to build our image:

docker build --platform linux/amd64 -f Dockerfile -t docker_tutorial:latest .
```
+The flag `--platform linux/amd64` is optional unless you are [running an M1 chip mac](running_docker_with_m1). The `-f`
+flag indicates the name of the Dockerfile -- in this case, it is also optional, since `Dockerfile` is the default value.
+The `-t` flag is a way to track the docker images and containers on our system by adding a name and a tag. `latest` is
+the tag used to indicate the latest version of a Docker image. Additional useful flags include `--no-cache` for a clean
+rebuild, and you can find a full list of flags [here](https://docs.docker.com/reference/cli/docker/buildx/build/).

Now that we have built the image, we can see all the Docker images that are built on our system by running the `docker
images` command:

@@ -146,8 +145,8 @@ intervention work. For an example of a system where that's operating, you can re
in Docker](https://confluence.lasp.colorado.edu/display/DS/Containerize+TIM+Processing+-+Base+Image).

Next steps, beyond going more in depth with the TIM dockerfiles, would be to learn about using the [LASP docker
-registry](./lasp_docker_registry.md). Other topics include [Docker compose](./docker_compose_examples.md), running
-Docker on [M1 chips](./running_docker_with_m1.md), and other pages under the [Docker Guidelines](./index.rst).
+registry](lasp_docker_registry). Other topics include [Docker compose](docker_compose_examples), running Docker on
+[M1 chips](running_docker_with_m1), and other pages under the [Docker Guidelines](index).
## Docker Cheat Sheet diff --git a/docs/source/workflows/docker/containerizing_idl_with_docker.md b/docs/source/workflows/docker/containerizing_idl_with_docker.md index ebdead3..2eb2ef8 100644 --- a/docs/source/workflows/docker/containerizing_idl_with_docker.md +++ b/docs/source/workflows/docker/containerizing_idl_with_docker.md @@ -1 +1,421 @@ -# Containerizing IDL with Docker \ No newline at end of file +# Containerizing IDL with Docker + +## Purpose for this guideline + +This document provides a preliminary implementation of IDL in a Docker container. + +## A Few Things to Note + +- Although the implementation described below is only valid for IDL version 8.6 and greater, the Dockerfile could + presumably be modified to accommodate older versions. The difference is that the network license access is different + for IDL 8.5 and earlier versions. +- This is not the same as Docker containers that access an external IDL on the host machine or via a network connection. +- The IDL image size is approximately 3GB, which is reasonable for an IDL installation (see + https://www.l3harrisgeospatial.com/docs/PlatformSupportTable.html). The base CentOS 8 image is only 210MB, so the IDL + installation accounts for more than 90% of the image. +- The IDLDE image size is approximately 3.5GB because a web browser (Firefox) has to be installed to export the display. +- For a simple container deployment, there is no Docker Compose file. The container is easily generated with simple + command-line arguments. +- Provided that the host machine has LASP VPN access (for licensing purposes), the containerized IDL should work + directly "out of the box" (i.e., no manual post-container creation steps are required). +- Although both the IDL and IDLDE images can be built locally using the Dockerfiles below, they are also available from + the [LASP Image Registry](lasp_docker_registry). 
+ +## Dockerfile + +The Dockerfile directives consist of the IDL installation, requisite libraries, licensing configuration and the default +runtime command: + +```Dockerfile +# Build from parent image https://hub.docker.com/_/ubuntu +FROM ubuntu:22.04 + +WORKDIR /tmp + +# Install Ubuntu packages, clean up the apt cache to reduce image size +# libxpm-dev and libxmu-dev are dependencies of IDL +# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ +RUN apt-get update && apt-get install -y \ + curl \ + libxpm-dev \ + libxmu-dev \ + && rm -rf /var/lib/apt/lists/* + +# Download IDL package, unarchive it, remove package, perform silent install using answer file +RUN curl -O https://artifacts.pdmz.lasp.colorado.edu/repository/datasystems/idl/installers/idl87.tar.gz \ + && tar -xzf ./idl87.tar.gz && rm -f ./idl87.tar.gz \ + && sh ./install.sh -s < ./silent/idl_answer_file + +# Install packages useful for development (not required by IDL, only for inspection and debugging) +RUN apt-get update && apt-get install -y \ + libncurses5-dev \ + libncursesw5-dev \ + wget \ + && rm -rf /var/lib/apt/lists/* + +# Set user-friendly aliases +RUN echo 'alias idl="/usr/local/harris/idl87/bin/idl"' >> ~/.bashrc \ + && echo 'alias idl="/usr/local/harris/idl87/bin/idl"' >> ~/.bash_profile + +# Necessary for network-based licensing +RUN echo 'http://idl-lic.lasp.colorado.edu:7070/fne/bin/capability' >> /usr/local/harris/license/o_licenseserverurl.txt + +CMD ["/usr/local/harris/idl87/bin/idl"] +``` + +## Stand-alone vs Multi-functional Container + +If the objective is merely to include IDL in a container with other services, such as Jenkins, gcc, git, Python, etc., +simply add the Docker commands from this example into the relevant Dockerfile. Do not include the `FROM` or `CMD` +statements (assuming that it is not a multi-stage build) and exclude any redundant instructions (e.g., `yum install +wget`). 
Note that the inclusion of IDL will increase the image size by about 2.5GB.
+
+Provided that all necessary tools and services reside in the same container, simply use IDL as with any standard
+installation. **The remainder of this guideline describes the implementation of IDL in a stand-alone IDL container.**
+
+The following issues are associated with using IDL in a stand-alone container:
+
+- Cross-container access
+- File sharing between containers
+- Access from a Jenkins service
+- Integration of git
+- Integration of `mgunit`
+
+## Build the Image
+
+To build the image, first copy the Dockerfile example above into a file named `Dockerfile`. If, for any reason, it is
+necessary to maintain a running background container, replace the `CMD` argument with:
+
+```Dockerfile
+CMD ["tail", "-f", "/dev/null"]
+```
+
+Execute the following command from within the directory that contains the Dockerfile to build the image:
+
+```bash
+docker image build -t <image_name> .
+```
+
+The image name (`-t` argument) can be as simple as `idl`. For clarity, the name `idl_image` (to distinguish from
+`idl_container`) is used in the examples below.
+
+## Start the Container
+
+To start the container:
+
+```bash
+docker container run -it --name=<container_name> <image_name>
+```
+
+The container name is optional and can be as simple as `idl`. If successful, an `IDL>` prompt will appear. Note that
+this container `run` command is identical on all operating systems (Linux, MacOS, Windows). If the container already
+exists, access will differ, and the specifics depend on whether or not the container is running. The status of all
+containers can be displayed with:
+
+```bash
+docker container ls -a
+```
+
+If the container is stopped, it must be restarted before access is possible:
+
+```bash
+docker restart <container_name>
+```
+
+Because the `run` command in the example above creates a new container, it must be replaced with the `exec` command if
+the container already exists.
In this case, the IDL executable (with full path) must be specified as well:
+
+```bash
+docker container exec -it <container_name> /usr/local/harris/idl87/bin/idl
+```
+
+For shell access to the container (possibly necessary for debugging), add `bash` to the end of the docker container
+`run` command, or replace the `/usr/local/harris/idl87/bin/idl` command with `bash` in the docker container `exec`
+command.
+
+## Host Machine Access
+
+The following demonstrates how to utilize IDL by directly interacting with a running container. In this example, the
+(optional) container name is `idl_container` and the name of the (previously created) image is `idl_image`:
+
+```
+(base) MacL3947:idl stmu4541$ docker container run -it --name=idl_container idl_image
+IDL 8.7.3 (linux x86_64 m64).
+(c) 2020, Harris Geospatial Solutions, Inc.
+
+Licensed for use by: University of Colorado - Boulder (MAIN)
+License: 100
+A new version is available: IDL 8.8
+https://harrisgeospatial.flexnetoperations.com
+IDL> print, ((cos(45.0d*(!PI/180.0d)))^2 + (sin(45.0d*(!PI/180.0d)))^2).tostring()
+1.0000000000000000
+IDL> exit
+(base) stmu4541@MacL3947:~/projects/docker/idl$
+```
+
+## Cross-container Access
+
+For web applications (e.g., Nginx, MySQL, etc.), accessing one containerized service from within another container
+running on the same host is trivial. Simply create a Docker network, connect both containers to that network, and the
+applications will find one another via default ports with no additional configuration.
+
+Unfortunately, this does not work for non-web applications, such as IDL (at least it hasn't yet been determined how to
+make it work). The solution, however, is not too difficult. Simply bind mount the Docker socket and the Docker
+executable of the host machine to the non-IDL container. These bind mounts will provide access to the Docker service
+running on the host machine from within the non-IDL container.
+
+### Example Implementation
+
+In this example, a container is created from the official `CentOS` image:
+
+```bash
+docker container run -it --name=centos_container --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker centos bash
+```
+
+This `docker container run` command generates a CentOS container and initiates an interactive connection to that
+container with a `bash` shell.
+
+A `/ #` prompt will appear, which allows the user to execute commands from within the container. The `docker container
+run` flags are:
+
+| Flag | Purpose |
+|----------|-------------------------------------------------------------------------------------------------------------------|
+| --rm | Remove the container upon exit (optional) |
+| -it | Run the container in interactive mode (required) |
+| --name | Assign a user-defined name to the container (optional) |
+| -v | Bind mount the host directory or file (in this case, the Docker socket and executable) to the container (required) |
+
+If the `idl_container` container is not already running, start the separately containerized IDL from within the
+`centos_container` (non-IDL) container as follows (note that the `idl_image` image must already exist, see above):
+
+```bash
+docker container run -it --rm idl_image /usr/local/harris/idl87/bin/idl
+```
+
+This spins up an unnamed IDL container from the `idl_image` image that exists on the host machine (see above). Because
+the Docker socket is shared between the host and the CentOS (non-IDL) container, the IDL container is accessible from
+within the `centos_container`.
+
+If the `idl_container` container is already running, access IDL from within the `centos_container` container by
+replacing `run` with `exec`:
+
+```bash
+docker container exec -it idl_container /usr/local/harris/idl87/bin/idl
+```
+
+In this case, because the IDL container already exists, it is necessary to include the name of the IDL container in the
+docker container `exec` command.
An IDL prompt will appear.
+
+Note that this still results in an interactive session with the IDL container, and falls short of the ultimate objective
+of remote non-interactive use of the containerized IDL deployment in, for example, a Jenkins build. To achieve this, an
+additional necessary step is the implementation of file sharing between containers.
+
+## Inter-container File Sharing
+
+This section describes the manner in which the IDL container can access files (e.g., IDL procedures or functions) that
+reside in the non-IDL container. Basically, the objective is to share files between containers. This is necessary
+because the non-IDL container will be responsible for the service (e.g., Jenkins) that clones or checks out the IDL
+files from a source code repository.
+
+There are two similar solutions. The preferred approach depends on whether or not it is necessary to access the files
+from the host machine.
+
+### Solution 1 -- Shared Named Volume
+
+If there is no need to access the shared files from the host machine, the simplest solution is to create a named Docker
+volume that is shared between the IDL and non-IDL containers.
+
+Although, technically, a named volume exists on the host, it is managed by the Docker daemon and is not intended to be
+accessed from the host.
+
+#### On the Host Machine
+
+First, create a named Docker volume:
+
+```bash
+docker volume create --name SharedData
+```
+
+Now, as in the previous example, spin up a non-IDL container based on the official `CentOS` image, but include an
+additional bind mount for the newly-created data volume:
+
+```bash
+docker container run --rm -it --name=centos_container -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker -v SharedData:/src centos bash
+```
+
+#### In the Interactive Shell Inside the `CentOS` Container
+
+From the interactive shell, create a simple test file in the `/src` directory of the `centos_container` (non-IDL)
+container.
For example:
+
+```idl
+; hello_world.pro
+pro hello_world
+  print, 'Hello World'
+end
+```
+
+Spin up an unnamed IDL container based on the previously created `idl_image` image, mount the named volume
+(`SharedData`) using the `-v` option, and append the `/src/hello_world.pro` file (or `/src/hello_world`, without the
+`.pro` extension, works equally well) as a command-line argument.
+
+```bash
+docker container run --rm -v SharedData:/src idl_image /usr/local/harris/idl87/bin/idl /src/hello_world.pro 2> err.txt
+```
+
+This example also includes error output redirection.
+
+The result should be:
+
+```bash
+docker container run --rm -v SharedData:/src idl_image /usr/local/harris/idl87/bin/idl /src/hello_world.pro 2> err.txt
+Hello World
+```
+
+### Solution 2 -- Host Directory Bind Mount
+
+Here, file systems in both containers are bind mounted to a common directory on the host machine. This allows file
+access directly from the host machine when necessary.
+
+In the following example, two containers are created: an IDL container and a non-IDL container. The former will
+essentially consist only of the IDL application, whereas the latter could include multiple components, such as a Jenkins
+service, Python, Java, etc. In this example, however, the non-IDL container consists, as before, of a simple `CentOS`
+OS.
+
+Create an IDL container named `idl_container` from the `idl_image` image generated by the Dockerfile included above, and
+include a bind mount to a directory on the host machine. In this example, the host directory is
+`/Users/stmu4541/projects/docker/src`, and is bind mounted to a directory named `/src` in the IDL container:
+
+```bash
+docker container run -d -v /Users/stmu4541/projects/docker/src:/src --name=idl_container idl_image
+```
+
+The container is named `idl_container`, and the `/Users/stmu4541/projects/docker/src` directory on the host is bind
+mounted to the `/src` directory in the container.
+
+Now create the non-IDL container named `centos_container`:
+
+```bash
+docker container run --rm -it --name=centos_container -v /Users/stmu4541/projects/docker/src:/src -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker centos bash
+```
+
+Here, `centos`, which immediately precedes the `bash` command, specifies the official `centos` image.
+
+The three bind mounts are identical to those described in Solution 1 except that the named volume mount (`SharedData`)
+has been replaced by a bind mount to a directory (`/Users/stmu4541/projects/docker/src`) on the host machine.
+
+The `/Users/stmu4541/projects/docker/src` directory on the host machine, and the `/src` directories on both containers
+refer to the same file system. In other words, any file that resides in the `/Users/stmu4541/projects/docker/src` host
+directory or the `/src` directory in the non-IDL container, is visible to the `/src` directory of the IDL container.
+
+## Jenkins as the Non-IDL Container
+
+Using a Jenkins-based principal container is not significantly different from the example above, except that the
+`CentOS` image is replaced with a Jenkins image -- in this case, `jenkins/jenkins:lts-centos`, which is based on the
+latest `CentOS` OS.
+
+```bash
+docker container run --name=jenkins -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker -p 8080:8080 jenkins/jenkins:lts-centos
+```
+
+Note that a Dockerfile is not required for this simple example since the container is created from the official `CentOS`
+Jenkins image (`jenkins/jenkins:lts-centos`). A Dockerfile is required only when it is necessary to install additional
+tools, such as Ant, Gradle, various compilers, etc.
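The `docker container run` command above grows unwieldy as bind mounts accumulate; the same Jenkins (non-IDL) container can also be described in a small Docker Compose file. This is only a sketch under the same assumptions as above (image tag, socket and CLI bind mounts); the `jenkins_home` named volume is an addition commonly used to persist Jenkins state and is not part of the original command:

```yaml
# docker-compose.yml -- sketch of the Jenkins (non-IDL) container above
services:
  jenkins:
    image: jenkins/jenkins:lts-centos
    ports:
      - "8080:8080"                                   # Jenkins web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock     # host Docker socket
      - /usr/bin/docker:/usr/bin/docker               # host Docker CLI
      - jenkins_home:/var/jenkins_home                # persist Jenkins state
volumes:
  jenkins_home:
```

Starting the container then reduces to `docker compose up -d`.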
+ + +## Run a Containerized IDLDE and Export the Display to the Host Machine + +Also, see [Export Display from Docker Container - Proof of Concept Confluence +page](https://confluence.lasp.colorado.edu/display/DS/Export+Display+from+Docker+Container+-+Proof+of+Concept) for a +general discussion of exporting a display from a Docker container. + +The following steps are necessary to run a containerized IDLDE and export the display to the host machine: + +1) A web browser must be installed in the container (or, ideally, as below, the image from which the container is + generated). +2) A very short IDLDE Dockerfile is necessary to add a web browser to the IDL Image. This will create a new IDLDE image. +3) Source the `idl_setup` script during image creation. +4) The IDLDE container must be run with the `DISPLAY` environment parameter set to that of the host machine. +5) For Linux, the IDLDE container must be run with the host machine X socket bind mounted to the container. + +Create a very short IDLDE Dockerfile that sources the IDL setup script and adds a Firefox web browser to the base IDL +image. Note that the base image (named `idl_image` in the following Dockerfile) must exist before this Dockerfile can be +processed (see above). + +```Dockerfile +FROM idl_image +RUN source /usr/local/harris/idl/bin/idl_setup.bash \ + && /usr/bin/yum update -y \ + && /usr/bin/yum install -y firefox +CMD [ "/usr/local/harris/idl/bin/idlde" ] +``` + +Build the IDLDE image as follows: + +```bash +docker image build -t idlde_image . +``` + +As always, this image build command must be run in the same directory as the relevant Dockerfile. + +### Run the Containerized IDLDE + +#### Pull the IDLDE Image from the LASP Image Registry + +The image can be built locally using the above Dockerfiles and the `docker image build` command, or it can be obtained +from the [LASP Image Registry](lasp_docker_registry). 
To pull the image, log into the LASP Image Registry and use the
+`pull` command:
+
+```bash
+docker pull docker-registry.pdmz.lasp.colorado.edu/tsis/idlde_centos7:latest
+```
+
+I recommend re-tagging the image for convenience (this will not create an additional copy of the image; it acts more
+like a symbolic link to the same image):
+
+```bash
+docker image tag docker-registry.pdmz.lasp.colorado.edu/tsis/idlde_centos7 idlde_image
+```
+
+#### Run the Containerized IDLDE on Linux
+
+Run the IDLDE container with the container `DISPLAY` environment parameter set to that of the host machine and bind
+mount the host machine X socket to the container:
+
+```bash
+docker container run -it --rm -e DISPLAY=${DISPLAY} -v /tmp/.X11-unix:/tmp/.X11-unix idlde_image
+```
+
+#### Run the Containerized IDLDE on MacOS
+
+The instructions for MacOS adopt a different approach from that for Linux. While there may be a more consistent
+implementation to run the containerized IDLDE on both Linux and MacOS, the following steps will suffice. Note that
+XQuartz must be running on the Mac.
+
+First, `localhost` must allow X forwarding (`127.0.0.1` resolves to `localhost`):
+
+```bash
+xhost + 127.0.0.1
+```
+
+Then simply run the container with the `DISPLAY` environment parameter set to `host.docker.internal:0`:
+
+```bash
+docker container run -e DISPLAY=host.docker.internal:0 idlde_image
+```
+
+## Useful Links
+* [Official Docker documentation](https://docs.docker.com/)
+
+## Acronyms
+
+* **gcc** = GNU (GNU's Not Unix) Compiler Collection
+* **IDL** = Interactive Data Language
+* **IDLDE** = IDL Development Environment
+* **MySQL** = My Structured Query Language
+* **Nginx** = Engine-X
+* **OS** = Operating System
+* **VPN** = Virtual Private Network
+* **yum** = Yellowdog Updater, Modified
+
+*Credit: Content taken from a Confluence guide written by Steven Mueller*
\ No newline at end of file
diff --git a/docs/source/workflows/docker/lasp_docker_registry.md b/docs/source/workflows/docker/lasp_docker_registry.md
index ab6cbf8..a34c9f1 100644
--- a/docs/source/workflows/docker/lasp_docker_registry.md
+++ b/docs/source/workflows/docker/lasp_docker_registry.md
@@ -1 +1,172 @@
-# LASP Docker Registry
\ No newline at end of file
+# LASP Docker Registry
+
+## Purpose for this guideline
+
+This document provides guidelines on how to use the LASP Docker registry for publishing and accessing Docker images.
+
+## Overview
+
+The Web Team manages an on-premises Docker registry exclusively used by LASP. The purpose of this registry is to enable
+teams within LASP to publish and access Docker images. These Docker images can be created ad-hoc or in an automated
+fashion using a Dockerfile located in a corresponding Bitbucket repository. Additionally, the registry can be made
+available for access from the internet, behind WebIAM authentication, to be used by cloud resources such as AWS.
+
+The LASP Docker Registry is an instance of Sonatype Nexus Repository Pro. It runs in the DMZ and is behind WebIAM user
+authentication.
+
+## Accessing the Registry
+
+The Web UI for Nexus is located at [https://artifacts.pdmz.lasp.colorado.edu](https://artifacts.pdmz.lasp.colorado.edu).
+It is not necessary to log into the server to search and browse public repositories using the left-hand navigation menu.
+
+> **Warning**: The UI is only accessible from inside the LASP Network.
+
+The internal URL for the Docker repository when using Docker `push`/`pull` commands is
+`docker-registry.pdmz.lasp.colorado.edu`. The same repository can also be accessed externally at
+`lasp-registry.colorado.edu`.
+
+The difference in URLs is that the Nexus server is intended to be used for different types of artifacts that can be
+served and managed via HTTPS. The Docker registry is a special repository and is running on a different port and
+protocol. It cannot be accessed via HTTPS.
+
+The LASP Docker registry can be accessed outside the LASP Network using Docker CLI commands (i.e., `docker push` or
+`docker pull`) by the URL `lasp-registry.colorado.edu`. This allows users to access Docker images from AWS, for example.
+
+## Namespaces
+
+The LASP Docker registry is organized by Namespaces. This is just a sub-folder or path that is used to group all related
+images together. Namespaces can be organized by teams or missions. Once a Namespace has been identified, ACLs will be
+created in Nexus that allow only specific WebIAM groups to create or push images to the Registry as well as delete
+images. Images will be referred to in Docker as `<namespace>/<image_name>:<tag>` or, more precisely,
+`<registry>/<namespace>/<image_name>:<tag>`. See [Creating an Image](#creating-an-image) below for more information.
+
+## Browsing Images
+
+Access the Web UI via the URL above. Click on **Browse**:
+
+![Browse for images](../../_static/lasp_docker_registry_browse1.png)
+
+Click **"lasp-registry"**:
+
+![Browse for images](../../_static/lasp_docker_registry_browse2.png)
+
+Pick a team or project and expand it.
Here you can see the available images under the "web" Namespace:
+
+![Browse for images](../../_static/lasp_docker_registry_browse3.png)
+
+> **Info**: Each Layer of a Docker image is composed of "Blobs". These are kept outside of the Namespace, but are
+> referenced and used by the manifests.
+
+You can find each available tag and its relevant metadata here.
+
+## Creating an Image
+
+### Manually
+
+1. From the root directory where your Dockerfile lives, build a local image specifying an image name and tag (i.e.,
+   `image_name:tag_version`):
+
+```bash
+$ docker build --force-rm -t webtcad-landing:1.0.2 .
+Sending build context to Docker daemon 22.78 MB
+Step 1 : FROM nginx:1.12.2
+ ---> dfe062ee1dc8
+...
+...
+...
+Step 8 : RUN chown -R nginx:nginx /usr/share/nginx/html && chown root:root /etc/nginx/nginx.conf
+ ---> Running in 8497cc7f30ed
+ ---> 28cb8c0df12b
+Removing intermediate container 8497cc7f30ed
+Successfully built 28cb8c0df12b
+```
+
+2. Tag your new image with the format `<registry>/<namespace>/image_name:tag`:
+
+```bash
+$ docker tag 28cb8c0df12b docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
+```
+
+> **Info**: Note the "web" namespace in the URL above. This will change depending on your particular Namespace.
+
+3. Login to the remote registry using your username/password:
+
+```bash
+$ docker login docker-registry.pdmz.lasp.colorado.edu
+Username:
+```
+
+4. Push the image into the repository/registry:
+
+```bash
+$ docker push docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
+```
+
+5. Logout of the registry when complete. This removes credentials stored in a local file.
+
+```bash
+$ docker logout
+```
+
+> **Info**: Don't forget to delete your local images if you no longer need them.
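The tag format in step 2 is the piece most often gotten wrong; a small shell sketch that assembles the fully qualified reference from its parts (the names below are the examples from this page -- substitute your own namespace, image, and tag):

```shell
# Assemble <registry>/<namespace>/<image_name>:<tag> for push/pull
registry="docker-registry.pdmz.lasp.colorado.edu"
namespace="web"            # your team or mission namespace
image="webtcad-landing"
tag="1.0.2"
ref="${registry}/${namespace}/${image}:${tag}"
echo "${ref}"   # prints docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
# docker tag "${image}:${tag}" "${ref}"   # then: docker login / docker push "${ref}"
```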
+
+
+### Automated
+
+To script the process of creating an image, you can use something like Ansible with its
+["docker_image"](https://docs.ansible.com/ansible/latest/collections/community/docker/docker_image_module.html) module
+or something as simple as a build script in your Bitbucket repo or Jenkins Job with the above commands invoked in a
+Shell Builder. The Web Team utilizes all three methods when creating images.
+
+## Deleting
+
+Although deleting a Docker image can be done via API commands against the registry, it is best done via the Web UI. To
+do so, log in to the Registry and then Browse to the particular image:tag you wish to delete:
+
+![Delete an image](../../_static/lasp_docker_registry_delete_image1.png)
+
+Click the "Delete Asset" button.
+
+If you no longer need that particular image at all, you can delete the folder associated with it by selecting the folder
+and clicking "Delete Folder":
+
+![Delete an image](../../_static/lasp_docker_registry_delete_image2.png)
+
+
+## Pulling an Image
+
+1. Login to the remote registry using your username/password. Note that this is only necessary if you are accessing a
+docker image that is NOT in a public namespace:
+
+```bash
+$ docker login docker-registry.pdmz.lasp.colorado.edu
+Username:
+```
+
+2. Pull the image from the repository/registry:
+
+```bash
+$ docker image pull docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
+```
+
+3. Logout of the registry when complete. This removes credentials stored in a local file:
+
+```bash
+$ docker logout
+```
+
+## Requesting Access
+
+Write access and new Namespaces require the Web Team to create WebIAM groups and Nexus ACLs.
Please submit a Jira
+WEBSUPPORT ticket or send an email to `web.support@lasp.colorado.edu`.
+
+## Acronyms
+
+* **AWS** = Amazon Web Services
+* **CLI** = Command-Line Interface
+* **DMZ** = DeMilitarized Zone
+* **HTTPS** = HyperText Transfer Protocol Secure
+* **UI** = User Interface
+* **URL** = Uniform Resource Locator
+
+*Credit: Content taken from a Confluence guide written by Maxine Hartnett*
\ No newline at end of file
diff --git a/docs/source/workflows/docker/multi_stage_builds.md b/docs/source/workflows/docker/multi_stage_builds.md
index 71e2615..5334549 100644
--- a/docs/source/workflows/docker/multi_stage_builds.md
+++ b/docs/source/workflows/docker/multi_stage_builds.md
@@ -9,14 +9,14 @@ This documentation was inspired by the processes of containerizing the [Jenkins
TIM test suite](https://confluence.lasp.colorado.edu/display/DS/TIM+Containers+Archive), and providing multiple ways
of running the tests -- both through Intellij, and locally, plus the ability to add additional ways of testing if they
came up. Additionally, there are a few complications with the ways that [Red Hat VMs run on the Mac M1
-chips](./running_docker_with_m1.md) that was introduced in 2020. All of these requirements led the TIM developers to use
+chips](running_docker_with_m1) that was introduced in 2020. All of these requirements led the TIM developers to use
something that allows for a lot of flexibility and simplification in one Dockerfile: multi-stage builds and a docker
compose file. This guideline will go over the general use case of both technologies and the specific use case of using
them for local TIM tests.

## Multi-stage Build

-### Multi-stage build overview
+### Multi-stage Build Overview

A multi-stage build is a type of Dockerfile which allows you to split up steps into different stages. This is useful if
you want to have most of the Dockerfile be the same, but have specific ending steps be different for different
@@ -28,7 +28,7 @@ on Mac vs Linux, etc.
You can also use multi-stage builds to start from different base images. For additional info on multi-stage builds, you can check out the [Docker documentation](https://docs.docker.com/build/building/multi-stage/). -### Creating and using a multi-stage build +### Creating and Using a Multi-stage Build Creating a multi-stage build is simple. You can name the first stage, where you build off a base image, using
`FROM <base_image> AS <stage_name>`. After that, you can use it to build off of in later steps -- in this case, you can see @@ -54,7 +54,7 @@ RUN g++ -o /binary source.cpp To specify the stage, you can use the `--target <stage_name>` flag when building from the command line, or
`target: <stage_name>` when building from a docker compose file. -### TIM containerization multi-stage builds +### TIM Containerization Multi-stage Builds In the TIM test container, multi-stage builds are used to differentiate between running the tests locally in a terminal and running the commands through IntelliJ. In the local terminal, the code is copied in through an external shell @@ -75,7 +75,7 @@ Jenkins will most likely use the same base target as local testing -- but for pr production code into the container, this can be added as a separate target. Multi-stage builds allow us to put all these use cases into one Dockerfile, while the docker compose file allows us to simplify using that end file. -## Docker compose +## Docker Compose A [docker compose file](https://docs.docker.com/reference/compose-file/) is often used to create a network of different docker containers representing different services. However, it is also a handy way of automatically creating volumes, @@ -83,7 +83,7 @@ secrets, environment variables, and various other aspects of a Docker container. the same arguments in your `docker build` or `docker run` commands, that can often be automated into a docker compose file.
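That mapping is easy to see side by side. The sketch below is illustrative only -- the service name, image tag, volume path, and port are invented for the example, not taken from this guide -- and shows how a repeated `docker build`/`docker run` invocation folds into a compose file:

```yaml
# Hypothetical equivalent of typing, every time:
#   docker build --platform linux/amd64 -t my_image:latest .
#   docker run -v ./data:/data -p 8080:80 my_image:latest
services:
  my_service:
    build:
      context: .             # directory containing the Dockerfile
      dockerfile: Dockerfile
    platform: linux/amd64    # same effect as --platform on the CLI
    image: my_image:latest
    volumes:
      - ./data:/data         # same effect as -v
    ports:
      - "8080:80"            # same effect as -p
```

With this in place, `docker compose up --build` rebuilds and starts the container with all of those flags applied, so the options live in version control rather than in shell history.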
-### Docker compose overview +### Docker Compose Overview Using a docker compose file allows you to simplify the commands needed to start and run docker containers. For example, let's say you need to run a Docker container using a Dockerfile in a different directory with an attached volume and an @@ -137,7 +137,7 @@ the docker compose file built. Docker compose is a very powerful tool, and everything you can do when building on the command line can also be done in docker compose. For more info, you can check out the [docker compose documentation](https://docs.docker.com/compose/). -### TIM `docker-compose.yml` file explained +### TIM `docker-compose.yml` File Explained All the Docker functionality can easily be accessed using the docker compose file. Currently, this is set up for only a few different use cases, but it can be updated if needed. @@ -225,6 +225,10 @@ Here is what each piece of the service setting means: * `environment`: This can be used to pass environment variables into the docker container. Currently, this is only used for the `single_test` service, to set the test that you want to run. +## Useful Links +* [Official Docker documentation](https://docs.docker.com/) +* [Multi-stage Builds Documentation](https://docs.docker.com/build/building/multi-stage/) + ## Acronyms * **apk** = Alpine Package Keeper diff --git a/docs/source/workflows/docker/running_docker_with_m1.md b/docs/source/workflows/docker/running_docker_with_m1.md index 5dfdd11..c5051d0 100644 --- a/docs/source/workflows/docker/running_docker_with_m1.md +++ b/docs/source/workflows/docker/running_docker_with_m1.md @@ -29,7 +29,7 @@ Red Hat has some [compatibility issues](https://access.redhat.com/discussions/59 method used for virtualization. This is because (according to the RHEL discussion boards) Red Hat is built for a default page size of 64k, while the M1 CPU page size is 16k. One way to fix this is to rebuild for a different page size. 
-### Short-term solution +### Short-term Solution A solution is to use the [`platform` option](https://devblogs.microsoft.com/premier-developer/mixing-windows-and-linux-containers-with-docker-compose/) in [docker compose](https://docs.docker.com/compose/). It's an option that allows you to select the platform to build @@ -62,7 +62,7 @@ The `platform` keyword is also available on `docker build` and `docker run` comm `--platform linux/amd64` on `docker build` and `docker run` commands you can force the platform without needing to use `docker compose`. -## Docker container hanging on M1 +## Docker Container Hanging on M1 [This is a known issue with Docker](https://github.com/docker/for-mac/issues/5590). Docker has already released some patches to try and fix it, but it could still be encountered. Basically, the Docker container will hang permanently