Note: for Windows-specific tips, see this readme.
To start up Kafka, MongoDB, Elasticsearch and Kibana, follow the steps below:
- Create a directory called `exec` inside the `docker-compose-infra` (this) directory, and go to that directory. This `exec` directory is ignored by `.gitignore`, so it won't be pushed to GitHub.

```shell
mkdir exec
cd exec
```
- Create the following directories as children of the `docker-compose-infra/exec` directory: `certs`, `esdata01`, `kibanadata`, `logs`, `tigerbeetle_data`, `mongodb_data`, `grafana_data`, `prometheus_data`, `prometheus_etc` and `otel_data`.

```shell
mkdir {certs,esdata01,kibanadata,logs,tigerbeetle_data,mongodb_data,grafana_data,prometheus_data,prometheus_etc,otel_data}
```
Note: For Mac users, you might have to grant full access to these directories; to do so, execute the following in the `exec` directory:

```shell
sudo chmod -R 777 .
```
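If you prefer something narrower than world-writable permissions, taking ownership of the tree is an alternative (a sketch assuming the containers run as your user; `chmod 777` above is the documented path if this doesn't work):

```shell
# Make the current user the owner of everything under exec/
sudo chown -R "$(id -un)" .
```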
- Copy `.env.sample` to the `exec` dir as `.env`:

```shell
cp ../.env.sample ./.env
```
- Copy the Prometheus, Grafana and OpenTelemetry files to the corresponding data directories:

```shell
cp ../prometheus.yml ./prometheus_etc/prometheus.yml
cp ../grafana_datasources.yml ./grafana_data/datasource.yml
cp ../otel-collector-config.yaml ./otel_data/config.yaml
```
- Review the contents of the `.env` file. If using macOS, update `ROOT_VOLUME_DEVICE_PATH` to reflect the absolute path.
- Ensure `vm.max_map_count` is set to at least `262144`. Example of applying the setting on a live system:

```shell
sysctl -w vm.max_map_count=262144 # might require sudo
```
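Note that `sysctl -w` does not survive a reboot. A minimal sketch of checking the current value and persisting the setting on Linux (assuming a standard `/etc/sysctl.conf`; on macOS and Windows the setting applies inside the Docker VM instead):

```shell
# Check the current value
sysctl vm.max_map_count

# Persist the setting across reboots (requires sudo)
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```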
- Initialise the TigerBeetle data file:

```shell
docker run -v $(pwd)/tigerbeetle_data:/data ghcr.io/tigerbeetledb/tigerbeetle \
    format --cluster=0 --replica=0 --replica-count=1 /data/0_0.tigerbeetle
```

Note: on macOS, you might have to add `--cap-add IPC_LOCK`; see this page on the official TigerBeetle repo for more info.

```shell
docker run --cap-add IPC_LOCK -v $(pwd)/tigerbeetle_data:/data ghcr.io/tigerbeetledb/tigerbeetle \
    format --cluster=0 --replica=0 --replica-count=1 /data/0_0.tigerbeetle
```
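As a quick sanity check (not part of the original steps), confirm that the data file was created:

```shell
# The format command above should have produced this file
ls -lh ./tigerbeetle_data/0_0.tigerbeetle
```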
Start the docker containers using docker compose up (in the `exec` dir):

```shell
docker compose -f ../docker-compose-infra.yml --env-file ./.env up -d

# OR for older versions of docker
docker-compose -f ../docker-compose-infra.yml --env-file ./.env up -d
```
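To verify that the containers came up, the standard `ps` subcommand can be used (a sketch; the service names shown depend on `docker-compose-infra.yml`):

```shell
# Lists each service with its current state and exposed ports
docker compose -f ../docker-compose-infra.yml --env-file ./.env ps
```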
To view the logs of the infrastructure containers, run:

```shell
docker compose -f ../docker-compose-infra.yml --env-file ./.env logs -f

# OR for older versions of docker
docker-compose -f ../docker-compose-infra.yml --env-file ./.env logs -f
```
To stop the infrastructure containers, run:

```shell
docker compose -f ../docker-compose-infra.yml --env-file ./.env stop

# OR for older versions of docker
docker-compose -f ../docker-compose-infra.yml --env-file ./.env stop
```
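`stop` leaves the containers in place so they can be restarted quickly. To also remove the containers while keeping the bind-mounted data in `exec` (standard docker compose behaviour, not a step from the original list):

```shell
# Removes containers and networks; data directories under exec/ are untouched
docker compose -f ../docker-compose-infra.yml --env-file ./.env down
```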
Once started, the services will be available via localhost. Use the credentials set in the `.env` file.
- Elasticsearch API - https://localhost:9200/
- Kibana - http://localhost:5601
- Kafka Broker - localhost:9092
- Zookeeper - localhost:2181
- RedPanda Kafka Console - http://localhost:8080
- MongoDB - mongodb://localhost:27017
- Mongo Express Console - http://localhost:8081
- Prometheus (metrics collector) - http://localhost:9090
- Grafana (metrics dashboards) - http://localhost:3000
- Jaeger (tracing analysis) - http://localhost:16686
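A quick way to confirm that Elasticsearch and Kibana are responding (a sketch; the `elastic` password comes from the `.env` file, and `--insecure` is needed because the local Elasticsearch uses a self-signed certificate):

```shell
# Elasticsearch cluster health (prompts for the elastic user's password)
curl --insecure -u elastic "https://localhost:9200/_cluster/health?pretty"

# Kibana status endpoint
curl "http://localhost:5601/api/status"
```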
Once Elasticsearch has started, you should upload the data mappings for the logs and audits indexes using the following commands. This must be executed once after setting up a new Elasticsearch instance, or when the indexes are updated.

Execute this in the directory containing the files `es_mappings_logging.json` and `es_mappings_auditing.json`. When asked, enter the password for the `elastic` user set in the `.env` file.
```shell
# Create the logging index
curl -i --insecure -X PUT "https://localhost:9200/ml-logging/" -u "elastic" -H "Content-Type: application/json" --data-binary "@es_mappings_logging.json"

# Create the auditing index
curl -i --insecure -X PUT "https://localhost:9200/ml-auditing/" -u "elastic" -H "Content-Type: application/json" --data-binary "@es_mappings_auditing.json"
```
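To confirm the mappings were installed, they can be fetched back using Elasticsearch's standard `_mapping` API (not part of the original steps):

```shell
# Each call prints the explicit mapping stored for the index
curl --insecure -u elastic "https://localhost:9200/ml-logging/_mapping?pretty"
curl --insecure -u elastic "https://localhost:9200/ml-auditing/_mapping?pretty"
```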
NOTE: The master/source for each mappings file is its respective repository: logging-bc and auditing-bc.

References:
- https://www.elastic.co/guide/en/elasticsearch/reference/8.1/explicit-mapping.html
- https://www.elastic.co/guide/en/elasticsearch/reference/8.1/mapping-types.html
Once the mappings are installed, it is time to import the prebuilt Kibana objects for the DataView and the search.
- Open Kibana (login with credentials in .env file)
- Navigate to (top left burger icon) -> Management / Stack Management -> Kibana / Saved Objects
Or go directly to: http://localhost:5601/app/management/kibana/objects
- Use the Import button on the top right to import the file `kibana-objects.ndjson` located in the `docker-compose-infra` directory (this one).

Go to (top left burger icon) -> Analytics / Discover, and then use the Open option on the top right to open the imported "MojaloopDefaultLogView" view.
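Alternatively, the saved objects can be imported via Kibana's saved-objects import API rather than the UI (a sketch; the `kbn-xsrf` header is required by Kibana, and the password for the `elastic` user comes from the `.env` file):

```shell
# Run from the docker-compose-infra directory containing kibana-objects.ndjson
curl -u elastic -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
    -H "kbn-xsrf: true" \
    --form file=@kibana-objects.ndjson
```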
To access the Mongo Express web service, use http://localhost:8081/ with the default credentials:
- username: admin
- password: pass
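To connect to MongoDB directly instead of going through Mongo Express, `mongosh` can be used if installed locally (a sketch; the placeholders below must be substituted with the MongoDB credentials from the `.env` file):

```shell
# <MONGO_USER> and <MONGO_PASSWORD> are placeholders, not actual defaults
mongosh "mongodb://localhost:27017" -u "<MONGO_USER>" -p "<MONGO_PASSWORD>"
```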
Monitor Kafka Events (download the Kafka clients from https://kafka.apache.org/downloads.html):

```shell
./kafka-console-consumer.sh --topic nodejs-rdkafka-svc-integration-test-log-bc-topic --from-beginning --bootstrap-server localhost:9092
```
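To see which topics exist before attaching a consumer (same Kafka CLI download; the topic name above is just one example):

```shell
# Lists all topics known to the broker
./kafka-topics.sh --list --bootstrap-server localhost:9092
```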
To completely tear down the infrastructure containers and remove their volumes, run:

```shell
docker-compose down -v
```