# MTA deployment steps for Postgres #394

Changes to _guides/databases-postgres.md_ (98 additions):

5. Finally, package and deploy that, for example using [MTA-based deployment](deployment/to-cf#build-mta).

## Setup with Existing Projects

Here's a step-by-step guide to add PostgreSQL to an existing project and deploy it to SAP BTP. We assume that the following prerequisites are fulfilled:

1. An existing PostgreSQL instance is running; in this example, the instance name `my-postgres-db` is used (one way to create such an instance is sketched after this list).
2. Service definition(s) and data model are in place (content in the _/srv_ and _/db_ folders).
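
If the instance does not exist yet, one way to create it on SAP BTP (Cloud Foundry) is via the CF CLI. This is only a sketch: the service offering and plan names (`postgresql-db`, `development`) are assumptions and depend on your entitlements and landscape:

```sh
# Create a PostgreSQL instance named my-postgres-db
# (offering and plan names are assumptions – check `cf marketplace` for your landscape)
cf create-service postgresql-db development my-postgres-db

# Check the provisioning status
cf service my-postgres-db
```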

### Add Postgres Dependencies
```sh
npm install @cap-js/postgres
```
The plugin automatically hooks itself into CAP's production profile: once the CAP service is deployed to SAP BTP and the production profile is active, the Postgres adapter is used.
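
For reference, the effect is roughly equivalent to the following `cds` configuration in _package.json_. This is only a sketch of what the plugin wires up for the production profile; with the plugin installed you normally don't need to add it yourself:

```json
{
  "cds": {
    "requires": {
      "[production]": {
        "db": { "kind": "postgres" }
      }
    }
  }
}
```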

### Add Standard CAP Dependencies
```sh
cds add xsuaa,mta --for production
```

### Modify the mta.yaml

1. Add the Postgres instance as an existing service to the `resources` section:
::: code-group
```yaml [mta.yaml]
- name: my-postgres-db
  type: org.cloudfoundry.existing-service
```
:::

2. Add a deployer task/module to deploy the data model to the Postgres instance as part of the standard deployment:
::: code-group
```yaml [mta.yaml]
- name: pg-db-deployer
  type: hdb
  path: gen/pg
  parameters:
    buildpack: nodejs_buildpack
  requires:
    - name: my-postgres-db
```
:::

- Make sure to use the type `hdb` and NOT `nodejs`, as the `nodejs` type would try to restart the service over and over again.
- The deployer path points to a _gen/pg_ directory that we create as part of the build process (see the next step).
- The deployer also declares the dependency/binding to the Postgres instance, so the credentials are available at deploy time.

3. Add dependencies to your CAP service module:
::: code-group
```yaml [mta.yaml]
requires:
- name: my-postgres-db
- name: pg-db-deployer
```
:::

This configuration creates a binding to the Postgres instance and waits for the deployer to finish before deploying the service.
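
Put together, the `requires` entries live inside the module definition of your CAP service in the mta.yaml. The module name, type, and path below are placeholders for whatever `cds add mta` generated in your project:

```yaml
# Sketch only – adapt name and path to your generated mta.yaml
- name: my-project-srv
  type: nodejs
  path: gen/srv
  requires:
    - name: my-postgres-db   # binds the Postgres credentials to the service
    - name: pg-db-deployer   # ensures the schema deployment runs first
```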

4. To generate the content of the _gen/pg_ folder, we call a shell script from the `custom` builder section. The complete section should look like this:
::: code-group
```yaml [mta.yaml]
build-parameters:
  before-all:
    - builder: custom
      commands:
        - npx cds build --production
        - ./scripts/pgbuild.sh
```
:::

### Create the Shell Script
The shell script specified in the previous step is a simple combination of the commands outlined in the CAP documentation. It creates the necessary artifacts in the _gen/pg_ directory. Here are the steps:

1. Create a directory _/scripts_ in the root of the project
2. Create a file _pgbuild.sh_ in the _/scripts_ directory and change the permissions to make it executable:
```sh
chmod +x pgbuild.sh
```
3. Add the following content to the _pgbuild.sh_ file:
```bash
#!/usr/bin/env bash

echo "** Starting Postgres build **"

echo "- creating dir gen/pg/db -"
mkdir -p gen/pg/db

echo "- compiling model -"
cds compile '*' > gen/pg/db/csn.json

echo "- copying .csv files -"
cp -r db/data gen/pg/db/data

# write a minimal package.json for the deployer application
echo '{"dependencies": { "@sap/cds": "*", "@cap-js/postgres": "*" }, "scripts": { "start": "cds-deploy" }}' > gen/pg/package.json

```

### Deploy

Package and deploy your project, for example using [MTA-based deployment](deployment/to-cf#build-mta).
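
A typical command sequence for this, assuming the Cloud MTA Build Tool (`mbt`) and the CF CLI with the MultiApps plugin are installed:

```sh
# Build the MTA archive; this runs the custom build commands from mta.yaml,
# including cds build and scripts/pgbuild.sh
mbt build -t gen --mtar mta.tar

# Deploy the archive to the targeted Cloud Foundry org/space
cf deploy gen/mta.tar
```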


## Automatic Schema Evolution { #schema-evolution }

When you redeploy after changing your CDS models, for example after adding fields, automatic schema evolution is applied. Whenever you run `cds deploy` (or `cds-deploy`), it executes these steps: