Running MySQL Cluster 7.5 in Docker, part 2

After the heady excitement of getting my first MySQL Cluster 7.5.4 setup running nicely in Docker, I quickly discovered that I wanted to refactor most of it, implement the bits I'd left out, and extend it further to meet some of my other testing needs, such as being able to run multiple deployments of similar shapes in parallel for simple CI.

I’ve now released this as v1.0.

The output is a little different to before, but it's now possible to set up multiple clusters, of different shapes if you like, on different Docker networks. You simply provide unique values for the new --base-network and --name parameters when using the build and start commands.

Let's say you want to create a couple of clusters, like these:

[Image: Simple MySQL Cluster]

[Image: four_node_cluster]

To start the first smaller cluster, you would issue:

$ cluster.py build --base-network 172.18 --management-nodes 1 --data-nodes 2 --sql-nodes 1
2016-10-28T10:06:23.308000: Running: docker build -t markleith/mysql-cluster-mgmd:7.5 -f management-node/Dockerfile management-node
2016-10-28T10:06:32.208000: Running: docker build -t markleith/mysql-cluster-ndbmtd:7.5 -f data-node/Dockerfile data-node
2016-10-28T10:06:32.539000: Running: docker build -t markleith/mysql-cluster-sql:7.5 -f sql-node/Dockerfile sql-node

$ cluster.py start --name myc1 --base-network 172.18 --management-nodes 1 --data-nodes 2 --sql-nodes 1
2016-10-28T10:06:46.656000: Running: docker network ls
2016-10-28T10:06:46.712000: Info: myc1 network not found, creating
2016-10-28T10:06:46.714000: Running: docker network create --subnet=172.18.0.0/16 myc1
2016-10-28T10:06:47.132000: Running: docker ps -q -a --filter "name=myc1-mgmd49"
2016-10-28T10:06:47.202000: Running: docker run -d -P --net myc1 --name myc1-mgmd49 --ip 172.18.0.149 -e NODE_ID=49 -e NOWAIT=50 -e CONNECTSTRING= markleith/mysql-cluster-mgmd:7.5
2016-10-28T10:06:48.550000: Running: docker port myc1-mgmd49 1186/tcp
2016-10-28T10:06:48.619000: Running: docker ps -q -a --filter "name=myc1-ndbmtd1"
2016-10-28T10:06:48.670000: Running: docker run -d -P --net myc1 --name myc1-ndbmtd1 --ip 172.18.0.11 -e NODE_ID=1 -e CONNECTSTRING=myc1-mgmd49:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:06:50.211000: Running: docker port myc1-ndbmtd1 11860/tcp
2016-10-28T10:06:50.298000: Running: docker ps -q -a --filter "name=myc1-ndbmtd2"
2016-10-28T10:06:50.359000: Running: docker run -d -P --net myc1 --name myc1-ndbmtd2 --ip 172.18.0.12 -e NODE_ID=2 -e CONNECTSTRING=myc1-mgmd49:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:06:51.838000: Running: docker port myc1-ndbmtd2 11860/tcp
2016-10-28T10:06:51.889000: Running: docker ps -q -a --filter "name=myc1-sql51"
2016-10-28T10:06:51.945000: Running: docker run -d -P --net myc1 --name myc1-sql51 --ip 172.18.0.151 -e NODE_ID=51 -e CONNECTSTRING=myc1-mgmd49:1186 markleith/mysql-cluster-sql:7.5
2016-10-28T10:06:53.389000: Running: docker port myc1-sql51 3306/tcp
2016-10-28T10:06:53.448000: Info: Started: [ "node" : { "name" : "myc1-mgmd49", "bound_port" : 33052, "node_type" : "mgmd" } ,  "node" : { "name" : "myc1-ndbmtd1", "bound_port" : 33053, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc1-ndbmtd2", "bound_port" : 33054, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc1-sql51", "bound_port" : 33055, "node_type" : "sql" } ]

The build command creates the initial cluster config.ini, and start then creates the "myc1" network, uses that name as the prefix for the container names, and sets everything up on the 172.18.0.0/16 IP range.
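Looking at the log output, each container's fixed IP appears to be derived from the base network plus an offset on the node ID (data nodes land at .0.11, .0.12, ..., while management and SQL nodes land at .0.149, .0.151, ...). A rough sketch of that derivation; the offsets here are my inference from the logs, not necessarily the script's actual logic:

```python
def node_ip(base_network, node_type, node_id):
    """Derive a container IP from the base network and node ID.

    Inferred from the logged `docker run --ip` values: data nodes
    (ndbmtd) get 10 + node_id in the last octet, management and SQL
    nodes get 100 + node_id.
    """
    offset = 10 if node_type == "ndbmtd" else 100
    return "%s.0.%d" % (base_network, offset + node_id)

print(node_ip("172.18", "mgmd", 49))    # 172.18.0.149
print(node_ip("172.18", "ndbmtd", 1))   # 172.18.0.11
print(node_ip("172.19", "sql", 52))     # 172.19.0.152
```

Keeping the last octet a function of the node ID is what lets two clusters coexist: only the base network differs between myc1 and myc2.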

To create the second cluster side by side, we now just use a different --name and --base-network:

$ cluster.py build --base-network 172.19 --management-nodes 2 --data-nodes 4 --sql-nodes 2
2016-10-28T10:07:23.486000: Running: docker build -t markleith/mysql-cluster-mgmd:7.5 -f management-node/Dockerfile management-node
2016-10-28T10:07:42.201000: Running: docker build -t markleith/mysql-cluster-ndbmtd:7.5 -f data-node/Dockerfile data-node
2016-10-28T10:07:42.482000: Running: docker build -t markleith/mysql-cluster-sql:7.5 -f sql-node/Dockerfile sql-node

$ cluster.py start --name myc2 --base-network 172.19 --management-nodes 2 --data-nodes 4 --sql-nodes 2
2016-10-28T10:07:56.739000: Running: docker network ls
2016-10-28T10:07:56.798000: Info: myc2 network not found, creating
2016-10-28T10:07:56.800000: Running: docker network create --subnet=172.19.0.0/16 myc2
2016-10-28T10:07:57.432000: Running: docker ps -q -a --filter "name=myc2-mgmd49"
2016-10-28T10:07:57.592000: Running: docker run -d -P --net myc2 --name myc2-mgmd49 --ip 172.19.0.149 -e NODE_ID=49 -e NOWAIT=50 -e CONNECTSTRING= markleith/mysql-cluster-mgmd:7.5
2016-10-28T10:07:59.850000: Running: docker port myc2-mgmd49 1186/tcp
2016-10-28T10:07:59.903000: Running: docker ps -q -a --filter "name=myc2-mgmd50"
2016-10-28T10:07:59.954000: Running: docker run -d -P --net myc2 --name myc2-mgmd50 --ip 172.19.0.150 -e NODE_ID=50 -e NOWAIT=49 -e CONNECTSTRING=myc2-mgmd49:1186 markleith/mysql-cluster-mgmd:7.5
2016-10-28T10:08:02.066000: Running: docker port myc2-mgmd50 1186/tcp
2016-10-28T10:08:02.120000: Running: docker ps -q -a --filter "name=myc2-ndbmtd1"
2016-10-28T10:08:02.187000: Running: docker run -d -P --net myc2 --name myc2-ndbmtd1 --ip 172.19.0.11 -e NODE_ID=1 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:08:04.644000: Running: docker port myc2-ndbmtd1 11860/tcp
2016-10-28T10:08:04.700000: Running: docker ps -q -a --filter "name=myc2-ndbmtd2"
2016-10-28T10:08:04.758000: Running: docker run -d -P --net myc2 --name myc2-ndbmtd2 --ip 172.19.0.12 -e NODE_ID=2 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:08:08.152000: Running: docker port myc2-ndbmtd2 11860/tcp
2016-10-28T10:08:08.232000: Running: docker ps -q -a --filter "name=myc2-sql51"
2016-10-28T10:08:08.281000: Running: docker run -d -P --net myc2 --name myc2-sql51 --ip 172.19.0.151 -e NODE_ID=51 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-sql:7.5
2016-10-28T10:08:17.201000: Running: docker port myc2-sql51 3306/tcp
2016-10-28T10:08:17.283000: Running: docker ps -q -a --filter "name=myc2-sql52"
2016-10-28T10:08:17.348000: Running: docker run -d -P --net myc2 --name myc2-sql52 --ip 172.19.0.152 -e NODE_ID=52 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-sql:7.5
2016-10-28T10:08:29.808000: Running: docker port myc2-sql52 3306/tcp
2016-10-28T10:08:30.127000: Info: Started: [ "node" : { "name" : "myc2-mgmd49", "bound_port" : 33056, "node_type" : "mgmd" } ,  "node" : { "name" : "myc2-mgmd50", "bound_port" : 33057, "node_type" : "mgmd" } ,  "node" : { "name" : "myc2-ndbmtd1", "bound_port" : 33058, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc2-ndbmtd2", "bound_port" : 33059, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc2-sql51", "bound_port" : 33060, "node_type" : "sql" } ,  "node" : { "name" : "myc2-sql52", "bound_port" : 33061, "node_type" : "sql" } ]

Both start up and run just fine side by side:

$ docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED              STATUS              PORTS                      NAMES
e32d4ae024fc        markleith/mysql-cluster-sql:7.5      "/home/mysql/run-mysq"   21 seconds ago       Up 9 seconds        0.0.0.0:33061->3306/tcp    myc2-sql52
038ce476e860        markleith/mysql-cluster-sql:7.5      "/home/mysql/run-mysq"   30 seconds ago       Up 20 seconds       0.0.0.0:33060->3306/tcp    myc2-sql51
32d202bd5d2d        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   34 seconds ago       Up 29 seconds       0.0.0.0:33059->11860/tcp   myc2-ndbmtd2
0b8f06de740a        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   36 seconds ago       Up 32 seconds       0.0.0.0:33058->11860/tcp   myc2-ndbmtd1
83bd9674e339        markleith/mysql-cluster-mgmd:7.5     "/home/mysql/run-mgmd"   39 seconds ago       Up 35 seconds       0.0.0.0:33057->1186/tcp    myc2-mgmd50
36cea82543f0        markleith/mysql-cluster-mgmd:7.5     "/home/mysql/run-mgmd"   41 seconds ago       Up 37 seconds       0.0.0.0:33056->1186/tcp    myc2-mgmd49
613b6c18ebd6        markleith/mysql-cluster-sql:7.5      "/home/mysql/run-mysq"   About a minute ago   Up About a minute   0.0.0.0:33055->3306/tcp    myc1-sql51
31b739edcdb4        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   About a minute ago   Up About a minute   0.0.0.0:33054->11860/tcp   myc1-ndbmtd2
18e19136accb        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   About a minute ago   Up About a minute   0.0.0.0:33053->11860/tcp   myc1-ndbmtd1
721b3abb7140        a62fba3c15f2                         "/home/mysql/run-mgmd"   About a minute ago   Up About a minute   0.0.0.0:33052->1186/tcp    myc1-mgmd49

You can now shut down all of a cluster's containers using the stop command:

$ cluster.py stop
2016-10-28T09:29:38.076000: Running: docker network ls
2016-10-28T09:29:38.391000: Running: docker network inspect --format="{{range $i, $c := .Containers}}{{$i}},{{end}}" mycluster
2016-10-28T09:29:38.456000: Running: docker stop 3c781c3517a2 41c3bfcba7d1 4210e83036a3 66289dc0b529 7bb378282d22 afd8d427c751 f021167e7be7 fc0de2b342ff
2016-10-28T09:31:03.673000: Info: Stopping containers done

The start command now also recognizes when the containers already exist, and issues a docker start against them instead of a docker run:

$ cluster.py start
2016-10-28T09:37:54.937000: Running: docker network ls
2016-10-28T09:37:55.047000: Info: mycluster network found, checking if any containers are already running
2016-10-28T09:37:55.049000: Running: docker network inspect --format="{{range $i, $c := .Containers}}{{$i}},{{end}}" mycluster
2016-10-28T09:37:55.133000: Running: docker ps -q -a --filter "name=mycluster-mgmd49"
2016-10-28T09:37:55.336000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 3c781c3517a2
2016-10-28T09:37:55.381000: Running: docker start 3c781c3517a2
2016-10-28T09:37:57.149000: Running: docker port mycluster-mgmd49 1186/tcp
2016-10-28T09:37:57.200000: Running: docker ps -q -a --filter "name=mycluster-mgmd50"
2016-10-28T09:37:57.279000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 66289dc0b529
2016-10-28T09:37:57.325000: Running: docker start 66289dc0b529
2016-10-28T09:37:58.377000: Running: docker port mycluster-mgmd50 1186/tcp
2016-10-28T09:37:58.432000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd1"
2016-10-28T09:37:58.497000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 41c3bfcba7d1
2016-10-28T09:37:58.541000: Running: docker start 41c3bfcba7d1
2016-10-28T09:37:59.751000: Running: docker port mycluster-ndbmtd1 11860/tcp
2016-10-28T09:37:59.820000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd2"
2016-10-28T09:37:59.873000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" f021167e7be7
2016-10-28T09:37:59.917000: Running: docker start f021167e7be7
2016-10-28T09:38:00.999000: Running: docker port mycluster-ndbmtd2 11860/tcp
2016-10-28T09:38:01.080000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd3"
2016-10-28T09:38:01.135000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 7bb378282d22
2016-10-28T09:38:01.178000: Running: docker start 7bb378282d22
2016-10-28T09:38:02.369000: Running: docker port mycluster-ndbmtd3 11860/tcp
2016-10-28T09:38:02.424000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd4"
2016-10-28T09:38:02.492000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" afd8d427c751
2016-10-28T09:38:02.534000: Running: docker start afd8d427c751
2016-10-28T09:38:03.551000: Running: docker port mycluster-ndbmtd4 11860/tcp
2016-10-28T09:38:03.649000: Running: docker ps -q -a --filter "name=mycluster-sql51"
2016-10-28T09:38:03.713000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 4210e83036a3
2016-10-28T09:38:03.775000: Running: docker start 4210e83036a3
2016-10-28T09:38:14.329000: Running: docker port mycluster-sql51 3306/tcp
2016-10-28T09:38:14.444000: Running: docker ps -q -a --filter "name=mycluster-sql52"
2016-10-28T09:39:04.837000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" fc0de2b342ff
2016-10-28T09:39:48.183000: Running: docker start fc0de2b342ff
2016-10-28T09:40:01.093000: Running: docker port mycluster-sql52 3306/tcp
2016-10-28T09:40:01.195000: Info: Started: [ "node" : { "name" : "mycluster-mgmd49", "bound_port" : 33044, "node_type" : "mgmd" } ,  "node" : { "name" : "mycluster-mgmd50", "bound_port" : 33045, "node_type" : "mgmd" } ,  "node" : { "name" : "mycluster-ndbmtd1", "bound_port" : 33046, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-ndbmtd2", "bound_port" : 33047, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-ndbmtd3", "bound_port" : 33048, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-ndbmtd4", "bound_port" : 33049, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-sql51", "bound_port" : 33050, "node_type" : "sql" } ,  "node" : { "name" : "mycluster-sql52", "bound_port" : 33051, "node_type" : "sql" } ]
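The run-versus-start decision visible in those logs can be sketched as follows. The function name and shape are illustrative, not the script's actual code; the behaviour is taken from the logged command sequence:

```python
def command_for_node(existing_ids, run_cmd):
    """Choose the docker command to issue for one node's container.

    `existing_ids` stands in for the output of
    `docker ps -q -a --filter "name=<container>"`: if it returned an
    ID, the container already exists and only needs `docker start`;
    otherwise a fresh `docker run` is required.
    """
    if existing_ids:
        return "docker start %s" % existing_ids[0]
    return run_cmd

# Container from a previous run still exists -> just restart it
print(command_for_node(["3c781c3517a2"], "docker run ..."))
# No existing container -> issue the full docker run
print(command_for_node([], "docker run -d -P --net myc1 ..."))
```

This is also why stopped clusters come back with their data nodes and node IDs intact: the containers themselves are reused rather than recreated.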

And you can have the whole thing clean itself up with the clean command:

$ cluster.py clean
2016-10-28T09:54:31.418000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-mgmd:7.5" --format "{{.ID}}"
2016-10-28T09:54:31.499000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-ndbmtd:7.5" --format "{{.ID}}"
2016-10-28T09:54:31.565000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-sql:7.5" --format "{{.ID}}"
2016-10-28T09:54:31.626000: Running: docker stop 66289dc0b529 3c781c3517a2 afd8d427c751 7bb378282d22 f021167e7be7 41c3bfcba7d1 fc0de2b342ff 4210e83036a3
2016-10-28T09:55:53.496000: Running: docker rm 66289dc0b529 3c781c3517a2 afd8d427c751 7bb378282d22 f021167e7be7 41c3bfcba7d1 fc0de2b342ff 4210e83036a3
2016-10-28T09:55:55.302000: Running: docker network ls
2016-10-28T09:55:55.404000: Running: docker network rm mycluster
2016-10-28T09:55:56.036000: Running: docker network ls

Not only does this shut down and remove all of the containers in one go, it can also remove the images, and clean up any dangling images, too:

$ cluster.py clean --images --dangling
2016-10-28T13:59:44.884000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-mgmd:7.5" --format "{{.ID}}"
2016-10-28T13:59:44.942000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-ndbmtd:7.5" --format "{{.ID}}"
2016-10-28T13:59:44.999000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-sql:7.5" --format "{{.ID}}"
2016-10-28T13:59:45.055000: Running: docker stop 83bd9674e339 36cea82543f0 32d202bd5d2d 0b8f06de740a e32d4ae024fc 038ce476e860 613b6c18ebd6
2016-10-28T14:00:48.068000: Running: docker rm 83bd9674e339 36cea82543f0 32d202bd5d2d 0b8f06de740a e32d4ae024fc 038ce476e860 613b6c18ebd6
2016-10-28T14:08:34.234000: Running: docker images markleith/mysql-cluster-* --format "{{.ID}}"
2016-10-28T14:08:34.334000: Info: Removing markleith/mysql-cluster-* images
2016-10-28T14:08:34.335000: Running: docker rmi 2412e8b33a79 a4b24729a83b 26072d4d4fe9
2016-10-28T14:08:35.488000: Running: docker images --filter "dangling=true"  --format "{{.ID}}"
2016-10-28T14:08:35.556000: Info: Removing dangling images
2016-10-28T14:08:35.557000: Running: docker rmi a62fba3c15f2 226a6721f1ae 9428a35054c0 2682265f46ba
2016-10-28T14:08:35.732000: Running: docker network ls
2016-10-28T14:08:35.787000: Running: docker network ls
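The ordering of the clean pass matters: containers must be stopped and removed before their images can be removed, and dangling images are swept last. A small sketch of how that command sequence might be assembled (purely illustrative; the IDs are placeholders and the helper name is mine):

```python
def clean_commands(container_ids, image_ids, dangling_ids):
    """Build the docker commands a clean pass would issue, in order.

    Mirrors the logged sequence: stop and remove the containers,
    then remove the tagged cluster images, then any dangling images.
    Empty lists simply skip that stage.
    """
    cmds = []
    if container_ids:
        cmds.append("docker stop " + " ".join(container_ids))
        cmds.append("docker rm " + " ".join(container_ids))
    if image_ids:
        cmds.append("docker rmi " + " ".join(image_ids))
    if dangling_ids:
        cmds.append("docker rmi " + " ".join(dangling_ids))
    return cmds

for cmd in clean_commands(["83bd9674e339", "36cea82543f0"],
                          ["2412e8b33a79"], ["a62fba3c15f2"]):
    print(cmd)
```

Running clean without --images and --dangling stops at the container stage, which is why the plain clean example above leaves the markleith/mysql-cluster-* images in place for the next build.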

Things I’ve considered but haven’t really done anything about:

  • Adding persistence of configuration and/or data
  • Adding some kind of status command

Other feedback welcome.

Running MySQL Cluster 7.5 in Docker

I’ve been wanting an easy way to play around with MySQL Cluster lately. The latest 7.5 release is now based on the MySQL 5.7 branch, so it also has the sys schema packaged, opening up more areas for me to add cluster-specific sys integration, especially given all of the new tables that were added within ndbinfo.

There are a couple of examples out there of Docker images that wrap older MySQL Cluster packages, but:

  • None were up to date; I wanted to use MySQL Cluster 7.5.4
  • None used docker networks, rather than the old link style
  • None helped to orchestrate starting the containers. I don’t want anything fancy, just to start a cluster of a certain shape (n of each node type), all on a local machine, purely for testing and playing around with.

So, with my searching failing to satisfy, I instead just created my own.

So now you can build and start a cluster with commands as simple as:

$ cluster.py --debug build
2016-10-25T16:04:33.229000: Arguments: Namespace(data_nodes=4, debug=True, func=<function build at 0x0000000002E63BA8>, management_nodes=2, sql_nodes=2)
2016-10-25T16:04:33.235000: Running: docker build -t markleith/mysqlcluster75:ndb_mgmd -f management-node/Dockerfile management-node
2016-10-25T16:04:33.511000: Running: docker build -t markleith/mysqlcluster75:ndbmtd -f data-node/Dockerfile data-node
2016-10-25T16:04:33.809000: Running: docker build -t markleith/mysqlcluster75:sql -f sql-node/Dockerfile sql-node

$ cluster.py --debug start
2016-10-25T16:04:37.007000: Arguments: Namespace(data_nodes=4, debug=True, func=<function start at 0x0000000002F73C18>, management_nodes=2, network='myclusternet', sql_nodes=2)
2016-10-25T16:04:37.012000: Running: docker network ls
2016-10-25T16:04:37.076000: myclusternet network found, using existing
2016-10-25T16:04:37.078000: Running: docker run -d -P --net myclusternet --name mymgmd49 --ip 172.18.0.249 -e NODE_ID=49 -e NOWAIT=50 -e CONNECTSTRING= markleith/mysqlcluster75:ndb_mgmd
2016-10-25T16:04:38.799000: Running: docker port mymgmd49 1186/tcp
2016-10-25T16:04:38.885000: Added: Node(mymgmd49 : 32800 : mgmd)
2016-10-25T16:04:38.887000: Running: docker run -d -P --net myclusternet --name mymgmd50 --ip 172.18.0.250 -e NODE_ID=50 -e NOWAIT=49 -e CONNECTSTRING=mymgmd49:1186 markleith/mysqlcluster75:ndb_mgmd
2016-10-25T16:04:40.338000: Running: docker port mymgmd50 1186/tcp
2016-10-25T16:04:40.394000: Added: Node(mymgmd50 : 32801 : mgmd)
2016-10-25T16:04:40.396000: Running: docker run -d -P --net myclusternet --name myndbmtd1 --ip 172.18.0.11 -e NODE_ID=1 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:41.925000: Running: docker port myndbmtd1 11860/tcp
2016-10-25T16:04:41.987000: Added: Node(myndbmtd1 : 32802 : ndbmtd)
2016-10-25T16:04:41.989000: Running: docker run -d -P --net myclusternet --name myndbmtd2 --ip 172.18.0.12 -e NODE_ID=2 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:43.280000: Running: docker port myndbmtd2 11860/tcp
2016-10-25T16:04:43.336000: Added: Node(myndbmtd2 : 32803 : ndbmtd)
2016-10-25T16:04:43.338000: Running: docker run -d -P --net myclusternet --name myndbmtd3 --ip 172.18.0.13 -e NODE_ID=3 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:44.855000: Running: docker port myndbmtd3 11860/tcp
2016-10-25T16:04:44.936000: Added: Node(myndbmtd3 : 32804 : ndbmtd)
2016-10-25T16:04:44.937000: Running: docker run -d -P --net myclusternet --name myndbmtd4 --ip 172.18.0.14 -e NODE_ID=4 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:49.039000: Running: docker port myndbmtd4 11860/tcp
2016-10-25T16:04:49.117000: Added: Node(myndbmtd4 : 32805 : ndbmtd)
2016-10-25T16:04:49.119000: Running: docker run -d -P --net myclusternet --name mysqlndb51 --ip 172.18.0.151 -e NODE_ID=51 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:sql
2016-10-25T16:05:06.190000: Running: docker port mysqlndb51 3306/tcp
2016-10-25T16:05:06.264000: Added: Node(mysqlndb51 : 32806 : sql)
2016-10-25T16:05:06.266000: Running: docker run -d -P --net myclusternet --name mysqlndb52 --ip 172.18.0.152 -e NODE_ID=52 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:sql
2016-10-25T16:05:12.735000: Running: docker port mysqlndb52 3306/tcp
2016-10-25T16:05:13.104000: Added: Node(mysqlndb52 : 32807 : sql)
2016-10-25T16:05:13.105000: Started: [Node(mymgmd49 : 32800 : mgmd), Node(mymgmd50 : 32801 : mgmd), Node(myndbmtd1 : 32802 : ndbmtd), Node(myndbmtd2 : 32803 : ndbmtd), Node(myndbmtd3 : 32804 : ndbmtd), Node(myndbmtd4 : 32805 : ndbmtd), Node(mysqlndb51 : 32806 : sql), Node(mysqlndb52 : 32807 : sql)]
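One detail worth noting in the docker run lines is the CONNECTSTRING environment variable: the first management node starts with it empty (relying on NOWAIT for its peer), the second points at the first, and every data and SQL node lists all management nodes. A sketch of that pattern, inferred from the logs (the function is mine, not part of cluster.py):

```python
def connect_string(mgmd_names, node_type, index=0):
    """Build the NDB connect string passed via -e CONNECTSTRING=...

    Pattern seen in the logs: the first management node gets an empty
    string, later management nodes point at those started before them,
    and data/SQL nodes point at every management node on port 1186.
    """
    if node_type == "mgmd":
        return ",".join("%s:1186" % n for n in mgmd_names[:index])
    return ",".join("%s:1186" % n for n in mgmd_names)

mgmds = ["mymgmd49", "mymgmd50"]
print(connect_string(mgmds, "mgmd", 0))   # (empty)
print(connect_string(mgmds, "mgmd", 1))   # mymgmd49:1186
print(connect_string(mgmds, "ndbmtd"))    # mymgmd49:1186,mymgmd50:1186
```

Since the containers share a Docker network, the management node container names resolve directly, so no IPs need to appear in the connect strings.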

Once the containers have fully initialized, you can log in to one of the SQL nodes via its exposed port, listed above; for the mysqlndb52 container, for example, that is port 32807:

$ mysql -u root -pmysql -h 127.0.0.1 -P 32807
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.16-ndb-7.5.4-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show tables from ndbinfo;
+---------------------------------+
| Tables_in_ndbinfo               |
+---------------------------------+
| arbitrator_validity_detail      |
| arbitrator_validity_summary     |
| blocks                          |
| cluster_locks                   |
| cluster_operations              |
| cluster_transactions            |
| config_params                   |
| config_values                   |
| counters                        |
| cpustat                         |
| cpustat_1sec                    |
| cpustat_20sec                   |
| cpustat_50ms                    |
| dict_obj_info                   |
| dict_obj_types                  |
| disk_write_speed_aggregate      |
| disk_write_speed_aggregate_node |
| disk_write_speed_base           |
| diskpagebuffer                  |
| locks_per_fragment              |
| logbuffers                      |
| logspaces                       |
| membership                      |
| memory_per_fragment             |
| memoryusage                     |
| nodes                           |
| operations_per_fragment         |
| resources                       |
| restart_info                    |
| server_locks                    |
| server_operations               |
| server_transactions             |
| table_distribution_status       |
| table_fragments                 |
| table_info                      |
| table_replicas                  |
| tc_time_track_stats             |
| threadblocks                    |
| threads                         |
| threadstat                      |
| transporters                    |
+---------------------------------+
41 rows in set (0.00 sec)

mysql> select node_id, memory_type, sys.format_bytes(used) used, sys.format_bytes(total) total from ndbinfo.memoryusage;
+---------+---------------------+------------+-----------+
| node_id | memory_type         | used       | total     |
+---------+---------------------+------------+-----------+
|       1 | Data memory         | 704.00 KiB | 80.00 MiB |
|       1 | Index memory        | 104.00 KiB | 18.25 MiB |
|       1 | Long message buffer | 384.00 KiB | 32.00 MiB |
|       2 | Data memory         | 704.00 KiB | 80.00 MiB |
|       2 | Index memory        | 104.00 KiB | 18.25 MiB |
|       2 | Long message buffer | 256.00 KiB | 32.00 MiB |
|       3 | Data memory         | 704.00 KiB | 80.00 MiB |
|       3 | Index memory        | 104.00 KiB | 18.25 MiB |
|       3 | Long message buffer | 256.00 KiB | 32.00 MiB |
|       4 | Data memory         | 704.00 KiB | 80.00 MiB |
|       4 | Index memory        | 104.00 KiB | 18.25 MiB |
|       4 | Long message buffer | 256.00 KiB | 32.00 MiB |
+---------+---------------------+------------+-----------+
12 rows in set (0.42 sec)

You can set the number of management, data and SQL nodes to create with options on the build and start commands:

$ cluster.py -h
usage: cluster.py [-h] [--debug] {build,start,stop,clean} ...

Create a test MySQL Cluster deployment in docker

positional arguments:
  {build,start,stop,clean}
    build               Build the cluster containers
    start               Start up the cluster containers
    stop                Stop the cluster containers
    clean               Stop and remove the cluster containers

optional arguments:
  -h, --help            show this help message and exit
  --debug               Whether to print debug info

$ cluster.py build -h
usage: cluster.py build [-h] [-m MANAGEMENT_NODES] [-d DATA_NODES]
                        [-s SQL_NODES]

optional arguments:
  -h, --help            show this help message and exit
  -m MANAGEMENT_NODES, --management-nodes MANAGEMENT_NODES
                        Number of Management nodes to run (default: 2; max: 2)
  -d DATA_NODES, --data-nodes DATA_NODES
                        Number of NDB nodes to run (default: 4; max: 48)
  -s SQL_NODES, --sql-nodes SQL_NODES
                        Number of SQL nodes to run (default: 2)

$ cluster.py start -h
usage: cluster.py start [-h] [-n NETWORK] [-m MANAGEMENT_NODES]
                        [-d DATA_NODES] [-s SQL_NODES]

optional arguments:
  -h, --help            show this help message and exit
  -n NETWORK, --network NETWORK
                        Name of the docker network to use
  -m MANAGEMENT_NODES, --management-nodes MANAGEMENT_NODES
                        Number of Management nodes to run (default: 2; max: 2)
  -d DATA_NODES, --data-nodes DATA_NODES
                        Number of NDB nodes to run (default: 4; max: 48)
  -s SQL_NODES, --sql-nodes SQL_NODES
                        Number of SQL nodes to run (default: 2)

Note: This is not meant to grow into some multi-machine, production-oriented MySQL Cluster deployment orchestrator; it’s purely meant for a fully local test setup. If you want to deploy MySQL Cluster in Docker in production, this is not the script for you, sorry. If you want to play around locally with a test setup though, I personally think it’s ideal (it’s helping me!).

Hope it helps somebody else out there too.

Slides for Oracle OpenWorld and Percona Live Amsterdam, 2016

I’ve uploaded the slides from my latest talks at OpenWorld and Percona Live, available below. These are mostly updated versions of previous talks, with some new info added here and there.