Backend repository for the infraohjelmointi API service of the City of Helsinki.
The instructions in this README.md assume that you know what Docker and docker-compose are, that you already have both installed locally, and that you understand what `docker-compose up -d` means. This keeps the README concise.
To make our commits more informative, they should be written in the Conventional Commits format, i.e. a suitable prefix should be added to the beginning of every commit message, e.g. `feat: built a notification` or `refactor: ...`. Conventional Commits could be properly configured for the project in the future.
Hotfixes should be done by creating a hotfix branch from `main` and then merging it to both `main` and `develop`, to avoid doing any rebases.
Branches are normally merged with regular merges, i.e. squash merging is not used unless there is a specific reason for it.
To create your own environment variables file, make a local copy of `.env.template`:

```
cp .env.template .env
```
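The real variable names come from `.env.template`; purely as an illustrative sketch (apart from `DATABASE_URL`, which this README references later, all names and values below are assumptions), a local `.env` for a Django service might look something like:

```
# Hypothetical example values; check .env.template for the actual variables
DEBUG=True
SECRET_KEY=local-development-secret
DATABASE_URL=postgres://postgres:postgres@localhost:5432/infraohjelmointi
```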
Then you can run the Docker image in detached mode with:

```
docker-compose up -d
```

- Access the development server at `localhost:8000`
- Log in to the admin interface with `admin` and 🥥 at `localhost:8000/admin`
- Done!
This list is a 'TL;DR'. The steps are described in more detail in this README under Populate database.
- Hierarchy and project data
- Import Location/Class hierarchy structure
- Import Planning (TS) and Budget (TAE) files in bulk together
- Populate database
Project data and finances can be imported into the infra tool using Excel files.

When importing, you need to run the scripts inside the container:

```
docker exec -it infraohjelmointi-api sh
```
Importing the Location/Class hierarchy structure and the Planning (TS) and Budget (TAE) files:

- Location/Class hierarchy structure:

  ```
  ./import-excels.sh -c path/to/hierarchy.xlsx
  ```

- Planning and Budget files (e.g. in an `Excels` folder):

  ```
  ./import-excels.sh -d path/to/Excels/
  ```
Import project location options:

```
python manage.py locationimporter --file path/to/locationdata.xlsx
```
Update projects' missing `projectDistrict_id` values with `infraohjelmointi_api_projectdistrict.id`:

```
psql $DATABASE_URL
\i update-districts.sql
```
Import new person information into the responsible persons list (the list can be found on the project form):

```
python manage.py responsiblepersons --file path/to/filename.xlsx
```
Add a new API token:

```
python manage.py generatetoken --name AppNameToken
```

This creates a new User whose name is the `--name` value.

Delete an API token:

```
python manage.py generatetoken --name ExistingAPITokenName --deletetoken
```
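As a sketch of how a generated token might be used, assuming the API uses standard token authentication (the header scheme and endpoint path below are assumptions, not confirmed by this README):

```
# Hypothetical request authenticated with a generated API token
curl -H "Authorization: Token <generated-token-value>" http://localhost:8000/projects/
```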
- We use `pip` to manage the Python packages we need.
- After adding a new package to the requirements.txt file, compile it and re-build the Docker image so that the container has access to the new package (see the sketch after this list):

  ```
  docker-compose up --build
  ```
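The "compile" step above suggests a pip-tools style workflow; purely as a rough sketch under that assumption (the input file name and package name are hypothetical, check the repository for the actual setup):

```
# Assumes pip-tools is used; file and package names are hypothetical examples
echo "some-new-package" >> requirements.in   # declare the new dependency
pip-compile requirements.in                  # regenerate the pinned requirements.txt
docker-compose up --build                    # rebuild the image so the container gets the package
```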
Tests are written for the Django management commands and the endpoints. They can be found in `infraohjelmointi_api/tests`.

Run the tests:

```
python manage.py test
```

An optional verbosity parameter (1, 2 or 3) can be added to get a more descriptive view of the tests:

```
python manage.py test -v 2
```
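To run only a subset, Django's test runner also accepts a dotted path to a test package or module, for example:

```
# Run only the tests under infraohjelmointi_api/tests with higher verbosity
python manage.py test infraohjelmointi_api.tests -v 2
```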
The codebase should always have a test coverage higher than 65%. Coverage is usually measured with SonarCloud in the PR pipeline, but if you need the percentage locally, a report can be created with pytest-cov.
- If not already installed inside the container, you need to install pytest-django and pytest-cov:

  ```
  pip install pytest-django pytest-cov
  ```

- Run the following to get the test coverage report for the whole project. You can also target specific folders or files by changing the value given to `--cov=` (additional report formats are sketched after this list):

  ```
  pytest --cov=infraohjelmointi_api/
  ```
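If a more detailed local report is useful, pytest-cov can also show uncovered lines in the terminal or write an HTML report (optional extras, not required by the workflow above):

```
# Print missed line numbers and write an HTML report to htmlcov/
pytest --cov=infraohjelmointi_api/ --cov-report=term-missing --cov-report=html
```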
Infra tool project data and financial data can be imported from external sources.

To synchronize project data with SAP in a local environment, the VPN service provided by platta should be running.

Populate the DB with SAP costs and commitments using the management command:

```
python manage.py sapsynchronizer
```

All projects in the DB are also synced with SAP at midnight to update SAP costs and commitments, through a CRON job and the script:

```
./sync-from-sap.sh
```

The CRON job is set up in both the prod and dev environments. More documentation on Confluence.
Sync all project data in the DB with ProjectWise:

```
python manage.py projectimporter --sync-projects-with-pw
```

Sync a single project in the DB with ProjectWise by its PW id:

```
python manage.py projectimporter --sync-project-from-pw pw_id
```

Projects are also synced to the PW service when a PATCH request is made to the projects endpoint.
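As an illustration of that trigger, a PATCH to a single project could look roughly like this, assuming token authentication and a `/projects/` URL (the field name, header format and exact path are assumptions, not taken from this README):

```
# Hypothetical PATCH request; the updated project is then synced to ProjectWise
curl -X PATCH http://localhost:8000/projects/<project-id>/ \
  -H "Authorization: Token <api-token>" \
  -H "Content-Type: application/json" \
  -d '{"description": "Updated description"}'
```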
These scripts were used when the dev and prod environments were set up for the first time. More documentation on Confluence.

While populating the database, the script file `import-excels.sh` was used to create the hierarchy (`./import-excels.sh -c path/to/hierarchy.xlsx`) and to import all the Excels (`./import-excels.sh -d path/to/Excels/`).

Import the Location/Class hierarchy structure. `import-excels.sh` uses this command:

```
python manage.py hierarchies --file path/to/hierarchy.xlsx
```

In some contexts, the hierarchy is known as "luokkajako".

Import only Planning project data (files with "TS"):

```
python manage.py projectimporter --import-from-plan path/to/planningFile.xlsx
```

Import only Budget project data (files with "TAE"):

```
python manage.py projectimporter --import-from-budget path/to/budgetFile.xlsx
```

`./import-excels.sh -d path/to/Excels` includes both the TS and TAE import commands.
- Create a release PR from `develop` to `main`
- Wait for the PR pipeline to run and check that all checks pass
- Merge the PR
- Trigger `build-infraohjelmointi-api-stageprod`
- Approve the pipeline run in Azure DevOps. Deploy pipelines are triggered by the build pipeline, but the prod deploy needs to be approved separately (= 2 approvals in total). To approve:
  - Open the pipeline run you want to approve (from the left menu, select Pipelines)
  - Select the correct pipeline
  - Select the run you need to approve
  - Wait and click the button to approve it (the pipeline run is paused until you approve)
- Steps on Confluence with pictures: Confluence
- Link to Azure DevOps services: Azure DevOps
Technical documentation can be found on Confluence.