MySQL InnoDB Cluster delivers an integrated, native, high-availability (HA) solution for your databases. It consists of:
- MySQL Servers with Group Replication to replicate data to all members of the cluster while providing fault tolerance, automated failover, and elasticity.
- MySQL Router to ensure client requests are load balanced and routed to the correct servers in case of any database failures.
- MySQL Shell to create and administer InnoDB Clusters using the built-in AdminAPI.
For more information, see the official product page and the official user guide.
You can use the example shell scripts (start_three_node_cluster.sh and cleanup_cluster.sh), Docker Compose with the provided sample docker-compose.yml file, or you can manage things manually.
A secure method of password generation and management is available: an auto-generated random password is stored in a file within each container. To use it, replace every instance of MYSQL_ROOT_PASSWORD=root with MYSQL_ROOT_PASSWORD=$(cat secretpassword.txt) in the example docker commands.
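For instance, assuming secretpassword.txt already exists in your working directory (the scripted method generates one), the bootstrap command from the manual steps below would become:
docker run --name=mysqlgr1 --hostname=mysqlgr1 --network=grnet -e MYSQL_ROOT_PASSWORD=$(cat secretpassword.txt) -e BOOTSTRAP=1 -itd mattalord/innodb-cluster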
The scripted method now uses the secure password management facilities by default.
Helper scripts are provided to create a cluster and to tear one down.
To create a three node cluster that includes MySQL Router and MySQL Shell, and connect to the cluster with MySQL Shell:
./start_three_node_cluster.sh
If you want to use a different image (for example when you have built a local variant of the image) you can run the following before invoking start_three_node_cluster.sh:
export INNODB_CLUSTER_IMG=your_username/your_image_name
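If you haven't built that local variant yet, a minimal sketch (assuming the repository's Dockerfile is in your current directory):
docker build -t your_username/your_image_name .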
To tear down the cluster and remove the containers:
./cleanup_cluster.sh
Docker Compose can be used to create a cluster or to shut it down using the provided sample docker-compose.yml file.
To create a three node cluster that includes MySQL Router and MySQL Shell, execute this command in any directory where the provided docker-compose.yml file exists:
docker-compose up
To shut down the cluster, execute this command in any directory where the provided docker-compose.yml file exists:
docker-compose down
The data for the containers is retained in the ./data subdirectory of the directory where docker-compose is executed, so this method may be preferred when you wish to retain the data between tests.
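If you'd rather start fresh on the next run, remove that directory after shutting the cluster down (a sketch, assuming the default ./data location):
docker-compose down
rm -rf ./data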
This manual process essentially documents what the start_three_node_cluster.sh helper script performs. The main difference is that these examples use a simple, non-secure password value of 'root'. You can use the built-in secure password management by substituting MYSQL_ROOT_PASSWORD=$(cat secretpassword.txt) instead.
- Create a private network for the containers
docker network create --driver bridge grnet
- Bootstrap the cluster
docker run --name=mysqlgr1 --hostname=mysqlgr1 --network=grnet -e MYSQL_ROOT_PASSWORD=root -e BOOTSTRAP=1 -itd mattalord/innodb-cluster && docker logs mysqlgr1 | grep GROUP_NAME
This will print the GROUP_NAME to use for the subsequent nodes. Note that GROUP_NAME can also be set manually, e.g. -e GROUP_NAME=0E46897B-8746-49A8-A660-92D6936CBDC4 (it must be a valid UUID); by default GROUP_NAME is an auto-generated UUIDv4. The output will contain something similar to:
You will need to specify GROUP_NAME=a94c5c6a-ecc6-4274-b6c1-70bd759ac27f if you want to add another node to this cluster
You will use this value when adding the additional nodes below. In other words, replace the example value a94c5c6a-ecc6-4274-b6c1-70bd759ac27f with yours.
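If you're scripting these steps, the generated value can be captured into a shell variable instead of copying it by hand; a sketch, assuming the UUID appears on the GROUP_NAME log line as shown above:
GROUP_NAME=$(docker logs mysqlgr1 | grep GROUP_NAME | grep -oE '[0-9a-fA-F-]{36}' | head -1)
echo "$GROUP_NAME"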
- Add a second node to the cluster via a seed node
docker run --name=mysqlgr2 --hostname=mysqlgr2 --network=grnet -e MYSQL_ROOT_PASSWORD=root -e GROUP_NAME="a94c5c6a-ecc6-4274-b6c1-70bd759ac27f" -e GROUP_SEEDS="mysqlgr1:6606" -itd mattalord/innodb-cluster
- Add a third node to the cluster via a seed node
docker run --name=mysqlgr3 --hostname=mysqlgr3 --network=grnet -e MYSQL_ROOT_PASSWORD=root -e GROUP_NAME="a94c5c6a-ecc6-4274-b6c1-70bd759ac27f" -e GROUP_SEEDS="mysqlgr1:6606" -itd mattalord/innodb-cluster
- Optionally add additional nodes via a seed node using the same process ...
- Add a router for the cluster
docker run --name=mysqlrouter1 --hostname=mysqlrouter1 --network=grnet -e NODE_TYPE=router -e MYSQL_HOST=mysqlgr1 -e MYSQL_ROOT_PASSWORD=root -itd mattalord/innodb-cluster
- Connect to the cluster via the mysql command-line client or MySQL Shell on one of the nodes
To use the classic mysql command-line client:
docker exec -it mysqlgr1 mysql -hmysqlgr1 -uroot -proot
There you can view the cluster membership status from the mysql console:
SELECT * from performance_schema.replication_group_members;
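You can also run the membership check non-interactively from the host; for example, using the same root/root credentials as above (member_host, member_port, and member_state are standard columns of that table):
docker exec -it mysqlgr1 mysql -hmysqlgr1 -uroot -proot -e 'SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;'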
To use the MySQL Shell:
docker exec -it mysqlgr1 mysqlsh --uri=root:root@mysqlgr1:3306
There you can view the cluster status with:
dba.getCluster().status()
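The same status check can be scripted from the host; a sketch using mysqlsh's --execute (-e) option, assuming the shell starts in its default JavaScript mode:
docker exec -it mysqlgr1 mysqlsh --uri=root:root@mysqlgr1:3306 -e 'print(dba.getCluster().status())'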
- Test the routing behavior from the router container
docker exec -it mysqlrouter1 bash
To test the RW port, which always goes to the PRIMARY node:
mysql -h localhost --protocol=tcp -P6446 -e 'SELECT @@global.server_uuid'
To test the RO port, which is round-robin load balanced to the SECONDARY nodes:
mysql -h localhost --protocol=tcp -P6447 -e 'SELECT @@global.server_uuid'
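To actually observe the round-robin behavior, run the RO query a few times in a row and note that the returned server_uuid rotates; for example, still inside the router container:
for i in 1 2 3 4; do mysql -h localhost --protocol=tcp -P6447 -e 'SELECT @@global.server_uuid'; done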
If you'd also like to test out my example arbitrator, you can do so easily once you have a working cluster. For example, if you're using ./start_three_node_cluster.sh, then you can run a myarbitratord container this way:
docker run --rm --name=myarbitratord --network=grnet --hostname=myarbitratord --entrypoint=/bin/bash -itd golang -c "go get github.com/mattlord/myarbitratord && /go/bin/myarbitratord -mysql-password '$(cat secretpassword.txt)' -seed-host myinnodbcluster"
You can then see the runtime stats with:
docker exec -it myarbitratord curl localhost:8099/stats
Docker 1.12 added support for custom health check commands, which can be defined in the Dockerfile or on the command line with docker run. For Group Replication nodes you could use a healthcheck command like this so that the container is marked unhealthy when the node is in the ERROR or OFFLINE state:
docker run --name=mysqlgr3 --hostname=mysqlgr3 --network=grnet -e MYSQL_ROOT_PASSWORD=root -e GROUP_NAME="a94c5c6a-ecc6-4274-b6c1-70bd759ac27f" -e GROUP_SEEDS="mysqlgr1:6606" --health-cmd='mysql -nsLNE -e "select member_state from performance_schema.replication_group_members where member_id=@@server_uuid;" 2>/dev/null | grep -v "*" | egrep -v "ERROR|OFFLINE"' --health-start-period=60s --health-timeout=10s --health-interval=5s --health-retries=1 -itd mattalord/innodb-cluster
The start period of 60 seconds gives the container plenty of time to initialize and reach a normal state. The timeout of 10 seconds is more than ample to ensure that we're able to connect to mysql, execute the query, and get the results. The interval of 5 seconds means that the healthcheck command runs every 5 seconds, and the retries value of 1 means that the container is considered unhealthy after a single failed healthcheck.
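You can then watch the health state that Docker reports for the node with standard docker inspect usage; for example:
docker inspect --format='{{.State.Health.Status}}' mysqlgr3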
If you're like me and you use Docker on macOS, it's helpful to know that Docker actually executes the containers inside an Alpine Linux VM, which in turn runs under the native xhyve hypervisor. You can access the console for that VM using:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
From there you can see the docker networking, volumes (/var/lib/docker), etc. Knowing how this all works "under the hood" will certainly come in handy sooner or later. Whenever you want to detach and close your console session just use:
CTRL-A-\
Docker on Windows -- assuming you're using Linux containers (the latest Windows 10 and Windows Server 2016 builds can also utilize NT kernel features to run containerized Windows processes) -- works in a similar way, but uses Hyper-V as the native hypervisor.
If you want to dig further on how Docker provides Linux containers on non-Linux hosts, you can read up on the LinuxKit project.