Development Environment Setup
Note: Don't forget to pull the repository from GitHub before starting here :) The project is split into two development phases:
- Design and development
- Production
Here we will discuss and describe the Development environment and its setup and usage.
As Zitadel is one of the core security components of this project, it should always be taken into account whenever PIX development continues. You can access the development server for Zitadel at http://devzitadel.cloud.ut.ee. Keep in mind that the development server always communicates over HTTP, while the production server works only over HTTPS.
Zitadel requires three YAML configuration files to set up: a config file, a secrets file, and an init-steps file. Examples can be found at https://zitadel.com/docs/self-hosting/manage/configure. You shouldn't need anything more than this to get started locally. Since our Zitadel environment is hosted at http://devzitadel.cloud.ut.ee, we also need to configure a reverse proxy using Nginx.
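As a rough illustration, a minimal example-zitadel-config.yaml for this setup might look like the sketch below. The key names follow the Zitadel self-hosting documentation; the concrete values are assumptions matching our HTTP-only dev server behind the Nginx proxy, so treat this as a starting point rather than the authoritative config:

```yaml
# Minimal sketch of example-zitadel-config.yaml for the dev setup (not the real file).
Log:
  Level: info

ExternalDomain: devzitadel.cloud.ut.ee  # the domain Nginx serves
ExternalPort: 80                        # Nginx listens on plain HTTP
ExternalSecure: false                   # no HTTPS in development

TLS:
  Enabled: false                        # matches the --tlsMode disabled flag below
```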
Below are the docker-compose.yaml and the nginx.conf. The masterkey required by Zitadel must be exactly 32 characters long, and TLS is disabled because development traffic is served over plain HTTP.
```yaml
version: "3.8"

services:
  nginx:
    image: nginx
    networks:
      - "zitadel"
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./nginx/:/etc/nginx/conf.d/:ro
    depends_on:
      - zitadel
    command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''

  zitadel:
    restart: "always"
    container_name: zitadel
    networks:
      - "zitadel"
    image: "ghcr.io/zitadel/zitadel:stable"
    command: 'start-from-init --config /example-zitadel-config.yaml --config /example-zitadel-secrets.yaml --steps /example-zitadel-init-steps.yaml --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled'
    depends_on:
      certs:
        condition: "service_completed_successfully"
    ports:
      - "8080:8080"
    volumes:
      - "./example-zitadel-config.yaml:/example-zitadel-config.yaml:ro"
      - "./example-zitadel-secrets.yaml:/example-zitadel-secrets.yaml:ro"
      - "./example-zitadel-init-steps.yaml:/example-zitadel-init-steps.yaml:ro"
      - "zitadel-certs:/crdb-certs:ro"

  certs:
    image: "cockroachdb/cockroach:v22.2.2"
    entrypoint: ["/bin/bash", "-c"]
    command:
      [
        "cp /certs/* /zitadel-certs/ && cockroach cert create-client --overwrite --certs-dir /zitadel-certs/ --ca-key /zitadel-certs/ca.key zitadel_user && chown 1000:1000 /zitadel-certs/*",
      ]
    volumes:
      - "certs:/certs:ro"
      - "zitadel-certs:/zitadel-certs:rw"
    depends_on:
      my-cockroach-db:
        condition: "service_healthy"

  my-cockroach-db:
    restart: "always"
    networks:
      - "zitadel"
    image: "cockroachdb/cockroach:v22.2.2"
    command: "start-single-node --advertise-addr my-cockroach-db"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health?ready=1"]
      interval: "10s"
      timeout: "30s"
      retries: 5
      start_period: "20s"
    ports:
      - "9090:8080"
      - "26257:26257"
    volumes:
      - "certs:/cockroach/certs:rw"
      - "data:/cockroach/cockroach-data:rw"

networks:
  zitadel:

volumes:
  certs:
  zitadel-certs:
  data:
```
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name devzitadel.cloud.ut.ee; # or your own domain name

    location / {
        grpc_pass grpc://zitadel:8080; # "zitadel" = service name from docker-compose.yaml
        grpc_set_header Host $host;
    }
}
```
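With the compose file and nginx.conf in place, the stack can be started with the standard Docker Compose commands. This is a sketch of a typical bring-up; the console path (/ui/console) is Zitadel's default and assumed here:

```shell
# Run from the directory containing docker-compose.yaml.
docker compose up -d             # start nginx, zitadel, certs and cockroachdb in the background
docker compose ps                # wait until zitadel is "running" and the DB is "healthy"
docker compose logs -f zitadel   # optional: watch the first-time initialization

# Once initialization finishes, the console should answer over plain HTTP:
curl -I http://devzitadel.cloud.ut.ee/ui/console
```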
Once the docker containers are all up and running (keep in mind that Zitadel requires a few minutes to set up fully), we can go to our domain name and log in with the admin details.
Further steps to set up the project and service users are described in https://github.com/AutomatedProcessImprovement/process-improvement-explorer/wiki/Zitadel-:-How-to-setup-and-configure-the-PIX-‐-Identity-Management
Note: If you want to host Zitadel locally, some steps are slightly different. For this you can refer to the Zitadel documentation. However, I suggest you use the devzitadel.cloud.ut.ee environment for ease of use.
Ensure you are on the DEV_MAIN branch if you plan on making changes to the project; otherwise, you can stay on the main branch.
Starting the database is simple: take the docker-compose.yaml file from the root of the repository and start the DB container. Double-check that the container is running and you are good to go; it should be listening on port 5432.
```yaml
version: '3.8'

services:
  db:
    image: postgres
    restart: unless-stopped
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=pix
    ports:
      - '5432:5432'
    volumes:
      - db:/var/lib/postgresql/data

volumes:
  db:
    driver: local
```
To start the backend, execute the following command:

```shell
uvicorn main:app --reload  # will run on port 8000
```
Then run the init.py script, located at ./backend/init.py, to initialize the DB tables if you haven't done so yet.
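For orientation, an init script like this typically declares the ORM metadata and creates any missing tables. The sketch below is hypothetical: the Project model, the Base object, and the connection URL are illustrative assumptions, not the actual contents of ./backend/init.py.

```python
# Hypothetical sketch of a table-initialization script (not the real init.py).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Project(Base):  # illustrative model only; the real PIX schema differs
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

def init_db(url: str = "postgresql://postgres:postgres@localhost:5432/pix"):
    """Create all tables known to Base.metadata; existing tables are left untouched."""
    engine = create_engine(url)
    Base.metadata.create_all(engine)
    return engine
```

With the Postgres container from the compose file above running, calling init_db() with the default URL would create the tables using the postgres/postgres credentials and the pix database.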
Note: If you want the Prosimos API to be available for development as well, you will need to pull the prosimos-microservice repository from GitHub into the root of the project, in its own directory called ./prosimos-microservice.
To run the frontend, execute the following command in your terminal:

```shell
npm run dev
```
The application should now be hosted on http://localhost:80. You can change the port in vite.config.ts, but keep in mind that you will then have to update the port in multiple places.
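For reference, changing the dev-server port is typically done via the server.port option. The excerpt below is a sketch; the project's actual vite.config.ts contains more than this:

```typescript
// Hypothetical excerpt of vite.config.ts; only the server.port option is shown.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    port: 80, // matches http://localhost:80; update any hard-coded URLs if you change this
  },
});
```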
After these steps, you should have access to the entire PIX development environment, meaning you can:
- Register new users and log in.
- Create a new project and perform project-related actions.
- Upload files to a project and perform file-related actions.
- Open the Prosimos frontend when the right files are selected. (If you have also installed the prosimos-microservice and started its docker containers, you can run a simulation as well.)
You can now develop new components and reliably test them in a local environment before pushing to the production branch.