- Deployed Production API - Mainnet Contract
- Deployed Rinkeby API - Rinkeby Contract (great for testing)
- Download Docker (stable version)
- Install the hasura CLI; a script is provided in `./graphql/install-hasura-cli.sh`

```
docker volume create --name redis_bounties
docker volume create --name psql_bounties
docker-compose up
```
Locally, you will now be syncing directly from the contract. You may access the API at:

The API will automatically restart if you make code changes. To turn off the services, run `docker-compose down`. Keep in mind, the volumes persist your DB and redis cache, so both will be in the same state when you start the services again with `docker-compose up`. If you would like to wipe out your databases and start again, run:
```
docker-compose down
docker volume rm redis_bounties
docker volume rm psql_bounties
docker volume create --name redis_bounties
docker volume create --name psql_bounties
docker-compose up
```
Then re-apply the hasura migrations:

```
cd graphql/hasura
hasura migrate apply
```
If you add additional packages to a package.json or to the requirements.txt file, you'll need to rebuild the individual service. To rebuild all services, you may run:
```
docker-compose down
docker-compose build
docker-compose up
```
By default, the sync connects to mainnet. To sync against Rinkeby (or another network), adjust the `eth_network` key in the environment file, e.g. `eth_network=rinkeby`.
Visit the production or Rinkeby endpoint; both default to the Swagger documentation UI. The local version also serves the documentation.
The frontend or client can be any third-party service or collaborator that integrates with the standard bounties Ethereum contract. This API works as a caching and storage layer for what is input into the standard bounties contract. Due to storage costs, the contract puts the majority of the data into IPFS. To understand further, read the documentation on the standard bounties contract.
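For example, the off-chain data the contract references by hash can be pulled from any IPFS gateway. A minimal sketch (the gateway URL is an assumption, and `fetch_ipfs_json` is a hypothetical helper, not part of this repo):

```python
import requests

def fetch_ipfs_json(ipfs_hash):
    """Fetch the off-chain bounty data that the contract references
    by hash. Any public IPFS gateway works; ipfs.io is used here."""
    resp = requests.get(f'https://ipfs.io/ipfs/{ipfs_hash}', timeout=10)
    resp.raise_for_status()
    return resp.json()
```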
The contract subscriber listens for events from the contract. If a resync is occurring, it will listen to all historical events, starting from the genesis block. So that the subscriber knows what it has already processed, the redis cache stores a currentBlock key, along with the hashes of all transactions that have already been evaluated and written to the DB. The contract subscriber ignores transactions that have already been written and does not search through blocks prior to the currentBlock key. When the subscriber picks up a new event, it looks up the original transaction via web3 and passes the event data, along with the original contract function inputs, to SQS. An SQS FIFO queue is used, which means keys are never duplicated and all events are handled in the order they come through.
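A minimal sketch of that dedupe-and-forward loop, assuming redis-py and boto3 clients. The `currentBlock` key and the set of seen transaction hashes follow the description above; the queue URL, key names, and message shape are assumptions:

```python
import json

import boto3
import redis

r = redis.Redis()
sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.../bounties.fifo'  # hypothetical

def handle_event(event):
    """Forward one web3 event log entry to SQS, skipping duplicates."""
    tx_hash = event['transactionHash'].hex()

    # The redis set of already-written tx hashes lets the subscriber
    # skip work that a previous (re)sync has already done.
    if r.sismember('tx_hashes', tx_hash):
        return

    # The real subscriber also looks up the original transaction via
    # web3 here, so the consumer receives the contract function inputs
    # alongside the event data.
    body = json.dumps({
        'event': event['event'],
        'tx_hash': tx_hash,
        'block': event['blockNumber'],
    })

    # FIFO queue: MessageDeduplicationId keyed on the tx hash prevents
    # duplicates, and MessageGroupId preserves ordering.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        MessageGroupId='bounty-events',
        MessageDeduplicationId=tx_hash,
    )

    r.sadd('tx_hashes', tx_hash)
    r.set('currentBlock', event['blockNumber'])
```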
The bounties subscriber listens for the events that the contract subscriber has passed into SQS. It uses the data from the event, the inputs to the original contract function, and the data stored in IPFS to write the appropriate values to the DB via Django.
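A sketch of the consuming side, again with boto3; the queue URL is hypothetical and the Django writes are elided to comments:

```python
import json

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.../bounties.fifo'  # hypothetical

def run():
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,  # long polling
        )
        for msg in resp.get('Messages', []):
            payload = json.loads(msg['Body'])
            # 1. combine the event data with the original function inputs,
            # 2. fetch the referenced JSON from IPFS,
            # 3. write the result to the DB via the Django ORM,
            # then delete the message so it is not redelivered.
            sqs.delete_message(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=msg['ReceiptHandle'],
            )
```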
The bounties API is a Django API that serves the data written by the bounties subscriber and the other running jobs.
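Once the stack is running locally, the data can be queried over plain HTTP. The port and `/bounty/` path below are assumptions, so check the Swagger UI for the authoritative routes:

```python
import requests

# Endpoint path and port are assumptions; consult the Swagger UI the
# API serves for the exact routes and query parameters.
resp = requests.get('http://localhost:8000/bounty/', params={'limit': 5})
resp.raise_for_status()
for bounty in resp.json().get('results', []):
    print(bounty.get('title'))
```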
sql_jobs are scheduled jobs to help enrich the data. category_ranks.sql produces a table that ranks the most commonly used categories and converges duplicated names.
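The exact query lives in category_ranks.sql; as a rough illustration of the idea in Python (the normalization rule is an assumption):

```python
from collections import Counter

def rank_categories(raw_names):
    """Rank categories by frequency, converging duplicates that differ
    only in case or whitespace (a sketch of what category_ranks.sql does)."""
    normalized = (name.strip().lower() for name in raw_names)
    return Counter(normalized).most_common()

# e.g. rank_categories(["Python", "python ", "Design"])
# -> [("python", 2), ("design", 1)]
```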
Other jobs include:

- `track_bounty_expirations.py`
- `get_token_values.py` - syncs with coinmarketcap every 5 minutes and updates the USD pricing of each bounty (see the sketch below)
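A rough sketch of the shape of that job; `fetch_usd_prices` is a hypothetical stand-in for the coinmarketcap call, and the bounty dicts stand in for the real Django models:

```python
from decimal import Decimal

def fetch_usd_prices(symbols):
    """Hypothetical helper standing in for the coinmarketcap request;
    returns {symbol: Decimal USD price}. The real call (endpoint, auth)
    lives in get_token_values.py."""
    raise NotImplementedError

def update_usd_prices(bounties):
    """Recompute each bounty's USD price from its token amount."""
    prices = fetch_usd_prices({b['token_symbol'] for b in bounties})
    for bounty in bounties:
        price = prices.get(bounty['token_symbol'])
        if price is not None:
            bounty['usd_price'] = Decimal(bounty['token_amount']) * price
    return bounties
```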
Example JWT bearer header for hasura (the token carries the hasura claims: allowed roles, default role, user id):

```
Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJodHRwczovL2hhc3VyYS5pby9qd3QvY2xhaW1zIjp7IngtaGFzdXJhLWFsbG93ZWQtcm9sZXMiOlsidXNlciJdLCJ4LWhhc3VyYS1kZWZhdWx0LXJvbGUiOiJ1c2VyIiwieC1oYXN1cmEtdXNlci1pZCI6IjEifX0.vmBpGhlIyPKyUvs6symo8BvViR--VDX_6EmGMoSoTgg
```

JWT / Django secret:

```
dnf5mn4NmYhqO8eaxU5RZy3yC9ogzHSKO4iTWuubqUnJyAzjEbRPzH9KCf2w3p5cu0exQ9TV1LVuXv-yOf8
```