Minor Refactoring and Documentation Update #2

Merged: 5 commits, Feb 29, 2024
16 changes: 16 additions & 0 deletions .github/workflows/lint.yml
@@ -0,0 +1,16 @@
name: Lint
on:
  push:
    branches:
      - master
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Lint
        run: make lint
1 change: 1 addition & 0 deletions Dockerfile
@@ -19,6 +19,7 @@ ENV S3_PATH 'backup'
ENV S3_ENDPOINT **None**
ENV S3_S3V4 no
ENV SCHEDULE **None**
ENV SUCCESS_WEBHOOK **None**

ADD entrypoint.sh .
ADD backup.sh .
4 changes: 4 additions & 0 deletions Makefile
@@ -0,0 +1,4 @@
SHELL_FILES := $(wildcard *.sh)

lint:
	@shellcheck --enable=require-variable-braces $(SHELL_FILES) && echo "ShellCheck passed"
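The `--enable=require-variable-braces` flag makes ShellCheck insist on `${VAR}` over `$VAR`. A minimal sketch (with a hypothetical variable name, not one from this repo) of the ambiguity the braces prevent:

```sh
#!/bin/sh
# PREFIX is a hypothetical variable for illustration.
PREFIX="backup"

# Without braces, "$PREFIX_name" parses as the single (unset) variable
# PREFIX_name, not as $PREFIX followed by the literal "_name".
unbraced="$PREFIX_name"

# With braces, the variable boundary is explicit.
braced="${PREFIX}_name"

echo "unbraced='${unbraced}' braced='${braced}'"
```

Running this prints `unbraced='' braced='backup_name'`, which is exactly the class of silent bug the lint rule catches.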
10 changes: 4 additions & 6 deletions README.md
@@ -9,7 +9,7 @@ This is a fork of [karser/postgres-backup-s3](https://github.com/karser/docker-i

Docker:
```sh
$ docker run -e S3_ACCESS_KEY_ID=key -e S3_SECRET_ACCESS_KEY=secret -e S3_BUCKET=my-bucket -e S3_PREFIX=backup -e POSTGRES_DATABASE=dbname -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_HOST=localhost f213/postgres-backup-s3
$ docker run -e S3_ACCESS_KEY_ID=key -e S3_SECRET_ACCESS_KEY=secret -e S3_BUCKET=my-bucket -e S3_PREFIX=backup -e POSTGRES_DATABASE=dbname -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_HOST=localhost -e SCHEDULE="@daily" f213/postgres-backup-s3
```

Docker Compose:
@@ -28,7 +28,7 @@ postgres-backup:
test: curl http://localhost:1880

environment:
SCHEDULE: 0 30 */2 * * * # every 2 hours at HH:30
SCHEDULE: 0 30 */2 * * * * # every 2 hours at HH:30
S3_REGION: region
S3_ACCESS_KEY_ID: key
S3_SECRET_ACCESS_KEY: secret
@@ -42,8 +42,6 @@ postgres-backup:
SUCCESS_WEBHOOK: https://sb-ping.ru/8pp9RGwDDPzTL2R8MRb8Ae
```

### Automatic Periodic Backups
### Crontab format

You can additionally set the `SCHEDULE` environment variable like `-e SCHEDULE="@daily"` to run the backup automatically.

More information about the scheduling can be found [here](http://godoc.org/github.com/robfig/cron#hdr-Predefined_schedules).
Schedule format with years support. More information about the scheduling can be found [here](https://github.com/aptible/supercronic/tree/master?tab=readme-ov-file#crontab-format)
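Supercronic's crontab format prepends a seconds field and allows an optional trailing year field, which is why the README example gains a seventh `*`. A sketch of how the fields of that schedule line up:

```sh
#!/bin/sh
# Field order in supercronic's format:
#   seconds  minutes  hours  day-of-month  month  day-of-week  year
# So "0 30 */2 * * * *" means: at second 0, minute 30, of every
# second hour, on any day, month, weekday, and year.
SCHEDULE="0 30 */2 * * * *"
echo "${SCHEDULE}"
```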
31 changes: 17 additions & 14 deletions backup.sh
@@ -1,5 +1,8 @@
#! /bin/sh

# shellcheck disable=SC3040 # the 'pipefail' option is expected to be available in this shell
# shellcheck disable=SC2086 # POSTGRES_HOST_OPTS and AWS_ARGS are intentionally split on spaces

set -e
set -o pipefail

@@ -25,8 +25,8 @@ fi

if [ "${POSTGRES_HOST}" = "**None**" ]; then
if [ -n "${POSTGRES_PORT_5432_TCP_ADDR}" ]; then
POSTGRES_HOST=$POSTGRES_PORT_5432_TCP_ADDR
POSTGRES_PORT=$POSTGRES_PORT_5432_TCP_PORT
POSTGRES_HOST="${POSTGRES_PORT_5432_TCP_ADDR}"
POSTGRES_PORT="${POSTGRES_PORT_5432_TCP_PORT}"
else
echo "You need to set the POSTGRES_HOST environment variable."
exit 1
@@ -43,33 +43,33 @@ if [ "${POSTGRES_PASSWORD}" = "**None**" ]; then
exit 1
fi

if [ "${S3_ENDPOINT}" == "**None**" ]; then
if [ "${S3_ENDPOINT}" = "**None**" ]; then
AWS_ARGS=""
else
AWS_ARGS="--endpoint-url ${S3_ENDPOINT}"
fi

# env vars needed for aws tools
export AWS_ACCESS_KEY_ID=$S3_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$S3_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$S3_REGION
export AWS_ACCESS_KEY_ID="${S3_ACCESS_KEY_ID}"
export AWS_SECRET_ACCESS_KEY="${S3_SECRET_ACCESS_KEY}"
export AWS_DEFAULT_REGION="${S3_REGION}"

export PGPASSWORD=$POSTGRES_PASSWORD
POSTGRES_HOST_OPTS="-h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER $POSTGRES_EXTRA_OPTS"
export PGPASSWORD="${POSTGRES_PASSWORD}"
POSTGRES_HOST_OPTS="-h ${POSTGRES_HOST} -p ${POSTGRES_PORT} -U ${POSTGRES_USER} ${POSTGRES_EXTRA_OPTS}"

echo "Creating dump of ${POSTGRES_DATABASE} database from ${POSTGRES_HOST}..."

pg_dump -Fc $POSTGRES_HOST_OPTS $POSTGRES_DATABASE > db.dump
pg_dump -Fc ${POSTGRES_HOST_OPTS} "${POSTGRES_DATABASE}" > db.dump

echo "Uploading dump to $S3_BUCKET"
echo "Uploading dump to ${S3_BUCKET}"

cat db.dump | aws $AWS_ARGS s3 cp - s3://$S3_BUCKET/$S3_PREFIX/${POSTGRES_DATABASE}_$(date +"%Y-%m-%dT%H:%M:%SZ").dump || exit 2
aws ${AWS_ARGS} s3 cp db.dump "s3://${S3_BUCKET}/${S3_PREFIX}/${POSTGRES_DATABASE}_$(date +"%Y-%m-%dT%H:%M:%SZ").dump" || exit 2

echo "DB backup uploaded successfully"

rm db.dump

if [ -n $SUCCESS_WEBHOOK ]; then
echo "Notifying $SUCCESS_WEBHOOK"
curl -m 10 --retry 5 $SUCCESS_WEBHOOK
if [ ! "${SUCCESS_WEBHOOK}" = "**None**" ]; then
echo "Notifying ${SUCCESS_WEBHOOK}"
curl -m 10 --retry 5 "${SUCCESS_WEBHOOK}"
fi
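Replacing `[ -n $SUCCESS_WEBHOOK ]` with an explicit `**None**` comparison also dodges a classic `test` pitfall: when the variable is unquoted and empty, the expansion vanishes and the command collapses to `[ -n ]`, a one-argument form that is always true. A small sketch (with a stand-in variable) reproducing the difference:

```sh
#!/bin/sh
WEBHOOK=""  # simulate an empty webhook variable

# Unquoted and empty, the expansion disappears: the test becomes [ -n ],
# which checks that the literal string "-n" is non-empty -- always true.
if [ -n $WEBHOOK ]; then unquoted="fires"; else unquoted="skips"; fi

# Quoted, the empty string is passed as an argument and -n behaves as intended.
if [ -n "${WEBHOOK}" ]; then quoted="fires"; else quoted="skips"; fi

echo "unquoted=${unquoted} quoted=${quoted}"
```

This prints `unquoted=fires quoted=skips`: the buggy form would have pinged the webhook even when none was configured.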