Backstroke is a GitHub bot that keeps repository forks up to date with their upstream. While it used to be a monolith, it's now a number of smaller microservices:
- `server`, the service that handles user auth and Link CRUD. In addition, this service adds link operations to a queue when a link is out of date.
- `worker`, a worker that consumes the queue of link operations, performs the operations, and stores the results.
- `legacy`, a service that maintains backwards compatibility for legacy features like Backstroke classic and forwards old API requests to the `server` service.
- `dashboard`, a React-based frontend to the API provided by the `server` service. Mainly handles logging in and Link CRUD.
- `www`, the Backstroke website found at https://backstroke.co.
Deployment is orchestrated with `docker-compose`, which is powered by Docker containers. The `docker-compose.yml` file in the root of this repository provides some default settings that hold true no matter the environment Backstroke is being deployed in. To deploy Backstroke, kick off docker-compose with a command similar to `docker-compose -f docker-compose.yml -f my-environment-docker-compose.yml up`. In a nutshell, this combines the two configuration files, which allows you to customize each service without modifying the main `docker-compose.yml`. Depending on the environment Backstroke is being deployed into, the contents of the second docker-compose file will vary.
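If you want to see what the combined configuration looks like before starting anything, `docker-compose` can print the merged result. This is just a sketch; `my-environment-docker-compose.yml` is a placeholder for whatever environment-specific file you're using:

```
# Print the merged configuration without starting any containers
$ docker-compose -f docker-compose.yml -f my-environment-docker-compose.yml config
```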
In development, we want to run all services locally and make it as easy as possible to reset the state of the entire application. You'll need:

- A GitHub personal access token for the user that will be making pull requests.
- A GitHub OAuth application for Backstroke to use to log in users. The callback url should be `http://localhost:8000/auth/github/callback`.
- A copy of this example `development-docker-compose.yml` file in your project, ensuring that you replace all `<user specific values>` with your GitHub details:
```yaml
version: "2.2"
services:
  # A database for the `server` service
  database:
    image: library/postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker

  worker:
    depends_on:
      - redis
    # Create mock pull requests.
    # Remove `--pr mock` if you want to make actual pull requests (make sure you know what
    # you're doing!).
    command: yarn start -- --pr mock
    environment:
      REDIS_URL: redis://redis:6379
      GITHUB_TOKEN: <insert github personal access token here>

  server:
    depends_on:
      - database
    environment:
      DEBUG: backstroke:*
      DATABASE_URL: postgres://docker:docker@database:5432/docker
      DATABASE_REQUIRE_SSL: 'false'
      GITHUB_TOKEN: <insert github personal access token here>
      GITHUB_CLIENT_ID: <insert github client id here>
      GITHUB_CLIENT_SECRET: <insert github client secret here>
      GITHUB_CALLBACK_URL: <insert github oauth callback here>
      SESSION_SECRET: "backstroke development session secret"
      CORS_ORIGIN_REGEXP: .*
      # The locations of a couple other services
      APP_URL: http://localhost:3000
      API_URL: http://localhost:8000
      ROOT_URL: https://backstroke.co

  legacy:
    environment:
      GITHUB_TOKEN: <insert github personal access token here>

  # Development version of the dashboard.
  dashboard:
    image: backstroke/dashboard
    ports:
      - '3000:3000'
    environment:
      NODE_ENV: development
      PORT: 3000

volumes:
  database:
```
- Run `docker-compose -f docker-compose.yml -f development-docker-compose.yml up`.
- Migrate the database. Here's how.
- Visit http://localhost:8000 for the `server` service, http://localhost:3000 for the `dashboard` service, or http://localhost:4000 for the `legacy` service. These ports were defined in the first docker-compose file.
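Since the development database lives in a named volume, resetting the state of the entire application comes down to tearing everything down along with its volumes. A sketch (note that this deletes your local development database):

```
# Stop all containers and remove them along with their named volumes
$ docker-compose -f docker-compose.yml -f development-docker-compose.yml down -v
```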
In the above configuration, changing server code locally won't restart the service. If you'd like the service to restart when you save your code, two things are required:

- Change the `command` of the service to use nodemon.
- Mount your code into the service as a volume.

Here's an example for the `server` service above:
```yaml
...
  server:
    # 1. Use nodemon.
    command: yarn start-dev
    # 2. Mount code into container.
    volumes:
      - "./path/to/my/code/from/this/repository:/app"
    environment:
      ...
...
```
All node-based services should have a `start-dev` npm task associated with them. If not, open an issue.
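One quick way to check whether a running service defines that task is to look at its `package.json`. This sketch assumes the service's code is mounted at `/app` inside the container, as in the volume example above:

```
# Look for a start-dev script in the service's package.json
$ docker exec -it <CONTAINERID> grep '"start-dev"' /app/package.json
```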
The worker will make real, live GitHub pull requests by default. Don't spam other people's repositories! However, there is a flag (`--pr mock`, shown in the configuration above) that causes the worker to make mock pull requests instead of real ones.
Sometimes, you'd like to test a service in isolation. Using docker-compose isn't really all that helpful in this case. If docker-compose isn't helpful, don't feel like you have to use it.
In production, we use a very similar configuration to development, but with a few key differences:

- The database isn't hosted in a container; it's external and linked into the project.
- All services have a `restart: always` section to ensure that they will be restarted if they crash.
- We also host another container used to ship logs to an instance of the ELK stack (currently through logz.io), though this isn't required.
Since production deployments can change based on many 3rd-party factors, there isn't an example in this section. Most likely you should be able to start with the development configuration and tailor it to your needs.
When I deploy the production version of Backstroke, I run `./up.sh`, which does a couple of things:

- Provisions a droplet
- Assigns a floating IP
- Copies over the Docker Compose files and configurations
- Restarts all containers with docker-compose
There's a bit of downtime (~10s), but it works for the small scale deployment that Backstroke requires.
When hacking on Backstroke, here are a few tasks that are handy when debugging a problem.
```
$ docker ps
# Note down the container id for redis
$ docker exec -it <CONTAINERID> redis-cli
> HGETALL rsmq:webhookQueue:Q
```

OR

```
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=redis") redis-cli
```
This output also contains queue statistics, so it's a bit of a pain to read. But all the items that are long, random ids are the items in the queue.
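To cut down the noise, you can list just the hash's field names instead of its full contents; anything that isn't one of rsmq's bookkeeping fields is a queued message id. A sketch, reusing the label filter from above:

```
# List only the field names of the queue hash
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=redis") redis-cli HKEYS rsmq:webhookQueue:Q
```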
```
$ docker ps
# Note down the container id for redis
$ docker exec -it <CONTAINERID> redis-cli
> KEYS webhook:status:*
> GET webhook:status:myoperationid
```

OR

```
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=redis") redis-cli
```
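On a busy redis instance, `KEYS` blocks the server while it scans. `redis-cli` has a non-blocking alternative that's handy here (a sketch):

```
# Iterate over matching keys without blocking redis
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=redis") redis-cli --scan --pattern 'webhook:status:*'
```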
```
$ docker ps
# Note down the container id for the service you want
$ docker logs -f <CONTAINERID>
```
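If you'd rather address a service by its compose name than by container id, docker-compose has an equivalent command. A sketch, using `server` as an example service name:

```
# Follow logs for a service by its docker-compose name
$ docker-compose logs -f server
```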
```
$ docker ps
# Note down the container id for the server
$ docker exec -it <CONTAINERID> yarn migrate
```

OR

```
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=server" --filter ancestor=backstroke/server) yarn migrate
```
Typically, this job runs about every 10 minutes and adds new webhook operations to the queue. If you'd like to run it on your own (for development reasons), try this:
```
$ docker ps
# Note down the container id for the operation-dispatcher
$ docker exec -it <CONTAINERID> yarn start:once
```

OR

```
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=server" --filter ancestor=backstroke/server) yarn manual-job
```
See here for a list of values present in this shell, but the TL;DR is that it's all database models and constructs for interacting with redis.
```
$ docker ps
# Note down the container id for the server
$ docker exec -it <CONTAINERID> yarn shell
```

OR

```
$ docker exec -it $(docker ps -q --filter "label=com.docker.compose.service=server" --filter ancestor=backstroke/server) yarn shell
```