Running MySQL Cluster 7.5 in Docker, part 2

After the heady excitement of getting my first MySQL Cluster 7.5.4 setup nicely running in docker, I quickly discovered that I wanted to refactor most of it, implement the bits I’d left out, and extend it further to meet some of my other testing needs, such as being able to run multiple deployments of similar types in parallel for simple CI.

I’ve now released this as v1.0.

The output is a little different from before, but it’s now possible to set up multiple clusters, of different shapes if you like, on different docker networks. You simply provide unique values for the new --base-network and --name parameters when using the build and start commands.

Let’s say you want to create a couple of clusters, like these:

[Image: Simple MySQL Cluster]

[Image: four_node_cluster]

To start the first smaller cluster, you would issue:

$ cluster.py build --base-network 172.18 --management-nodes 1 --data-nodes 2 --sql-nodes 1
2016-10-28T10:06:23.308000: Running: docker build -t markleith/mysql-cluster-mgmd:7.5 -f management-node/Dockerfile management-node
2016-10-28T10:06:32.208000: Running: docker build -t markleith/mysql-cluster-ndbmtd:7.5 -f data-node/Dockerfile data-node
2016-10-28T10:06:32.539000: Running: docker build -t markleith/mysql-cluster-sql:7.5 -f sql-node/Dockerfile sql-node

$ cluster.py start --name myc1 --base-network 172.18 --management-nodes 1 --data-nodes 2 --sql-nodes 1
2016-10-28T10:06:46.656000: Running: docker network ls
2016-10-28T10:06:46.712000: Info: myc1 network not found, creating
2016-10-28T10:06:46.714000: Running: docker network create --subnet=172.18.0.0/16 myc1
2016-10-28T10:06:47.132000: Running: docker ps -q -a --filter "name=myc1-mgmd49"
2016-10-28T10:06:47.202000: Running: docker run -d -P --net myc1 --name myc1-mgmd49 --ip 172.18.0.149 -e NODE_ID=49 -e NOWAIT=50 -e CONNECTSTRING= markleith/mysql-cluster-mgmd:7.5
2016-10-28T10:06:48.550000: Running: docker port myc1-mgmd49 1186/tcp
2016-10-28T10:06:48.619000: Running: docker ps -q -a --filter "name=myc1-ndbmtd1"
2016-10-28T10:06:48.670000: Running: docker run -d -P --net myc1 --name myc1-ndbmtd1 --ip 172.18.0.11 -e NODE_ID=1 -e CONNECTSTRING=myc1-mgmd49:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:06:50.211000: Running: docker port myc1-ndbmtd1 11860/tcp
2016-10-28T10:06:50.298000: Running: docker ps -q -a --filter "name=myc1-ndbmtd2"
2016-10-28T10:06:50.359000: Running: docker run -d -P --net myc1 --name myc1-ndbmtd2 --ip 172.18.0.12 -e NODE_ID=2 -e CONNECTSTRING=myc1-mgmd49:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:06:51.838000: Running: docker port myc1-ndbmtd2 11860/tcp
2016-10-28T10:06:51.889000: Running: docker ps -q -a --filter "name=myc1-sql51"
2016-10-28T10:06:51.945000: Running: docker run -d -P --net myc1 --name myc1-sql51 --ip 172.18.0.151 -e NODE_ID=51 -e CONNECTSTRING=myc1-mgmd49:1186 markleith/mysql-cluster-sql:7.5
2016-10-28T10:06:53.389000: Running: docker port myc1-sql51 3306/tcp
2016-10-28T10:06:53.448000: Info: Started: [ "node" : { "name" : "myc1-mgmd49", "bound_port" : 33052, "node_type" : "mgmd" } ,  "node" : { "name" : "myc1-ndbmtd1", "bound_port" : 33053, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc1-ndbmtd2", "bound_port" : 33054, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc1-sql51", "bound_port" : 33055, "node_type" : "sql" } ]

The build command creates the initial cluster config.ini; start then creates the network “myc1”, uses that name as the prefix for the container names, and brings everything up on the 172.18.0.0/16 network range.
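As an aside, the container IPs in the logs follow a simple pattern: data nodes (IDs 1–48) get .0.(10 + node id), while management and SQL nodes (IDs 49 and up) get .0.(100 + node id). A small Python helper that reproduces the pattern (my own reconstruction from the logs, not part of cluster.py itself):

```python
def node_ip(base_network, node_id):
    """Reconstruct the container IP convention visible in the logs:
    data nodes (IDs 1-48) land on .0.(10 + id), while management and
    SQL nodes (IDs 49+) land on .0.(100 + id)."""
    last_octet = 10 + node_id if node_id <= 48 else 100 + node_id
    return "%s.0.%d" % (base_network, last_octet)

print(node_ip("172.18", 49))  # 172.18.0.149, matching myc1-mgmd49 above
```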

To create the second cluster side by side, we now just use a different --name and --base-network:

$ cluster.py build --base-network 172.19 --management-nodes 2 --data-nodes 4 --sql-nodes 2
2016-10-28T10:07:23.486000: Running: docker build -t markleith/mysql-cluster-mgmd:7.5 -f management-node/Dockerfile management-node
2016-10-28T10:07:42.201000: Running: docker build -t markleith/mysql-cluster-ndbmtd:7.5 -f data-node/Dockerfile data-node
2016-10-28T10:07:42.482000: Running: docker build -t markleith/mysql-cluster-sql:7.5 -f sql-node/Dockerfile sql-node

$ cluster.py start --name myc2 --base-network 172.19 --management-nodes 2 --data-nodes 4 --sql-nodes 2
2016-10-28T10:07:56.739000: Running: docker network ls
2016-10-28T10:07:56.798000: Info: myc2 network not found, creating
2016-10-28T10:07:56.800000: Running: docker network create --subnet=172.19.0.0/16 myc2
2016-10-28T10:07:57.432000: Running: docker ps -q -a --filter "name=myc2-mgmd49"
2016-10-28T10:07:57.592000: Running: docker run -d -P --net myc2 --name myc2-mgmd49 --ip 172.19.0.149 -e NODE_ID=49 -e NOWAIT=50 -e CONNECTSTRING= markleith/mysql-cluster-mgmd:7.5
2016-10-28T10:07:59.850000: Running: docker port myc2-mgmd49 1186/tcp
2016-10-28T10:07:59.903000: Running: docker ps -q -a --filter "name=myc2-mgmd50"
2016-10-28T10:07:59.954000: Running: docker run -d -P --net myc2 --name myc2-mgmd50 --ip 172.19.0.150 -e NODE_ID=50 -e NOWAIT=49 -e CONNECTSTRING=myc2-mgmd49:1186 markleith/mysql-cluster-mgmd:7.5
2016-10-28T10:08:02.066000: Running: docker port myc2-mgmd50 1186/tcp
2016-10-28T10:08:02.120000: Running: docker ps -q -a --filter "name=myc2-ndbmtd1"
2016-10-28T10:08:02.187000: Running: docker run -d -P --net myc2 --name myc2-ndbmtd1 --ip 172.19.0.11 -e NODE_ID=1 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:08:04.644000: Running: docker port myc2-ndbmtd1 11860/tcp
2016-10-28T10:08:04.700000: Running: docker ps -q -a --filter "name=myc2-ndbmtd2"
2016-10-28T10:08:04.758000: Running: docker run -d -P --net myc2 --name myc2-ndbmtd2 --ip 172.19.0.12 -e NODE_ID=2 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-ndbmtd:7.5
2016-10-28T10:08:08.152000: Running: docker port myc2-ndbmtd2 11860/tcp
2016-10-28T10:08:08.232000: Running: docker ps -q -a --filter "name=myc2-sql51"
2016-10-28T10:08:08.281000: Running: docker run -d -P --net myc2 --name myc2-sql51 --ip 172.19.0.151 -e NODE_ID=51 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-sql:7.5
2016-10-28T10:08:17.201000: Running: docker port myc2-sql51 3306/tcp
2016-10-28T10:08:17.283000: Running: docker ps -q -a --filter "name=myc2-sql52"
2016-10-28T10:08:17.348000: Running: docker run -d -P --net myc2 --name myc2-sql52 --ip 172.19.0.152 -e NODE_ID=52 -e CONNECTSTRING=myc2-mgmd49:1186,myc2-mgmd50:1186 markleith/mysql-cluster-sql:7.5
2016-10-28T10:08:29.808000: Running: docker port myc2-sql52 3306/tcp
2016-10-28T10:08:30.127000: Info: Started: [ "node" : { "name" : "myc2-mgmd49", "bound_port" : 33056, "node_type" : "mgmd" } ,  "node" : { "name" : "myc2-mgmd50", "bound_port" : 33057, "node_type" : "mgmd" } ,  "node" : { "name" : "myc2-ndbmtd1", "bound_port" : 33058, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc2-ndbmtd2", "bound_port" : 33059, "node_type" : "ndbmtd" } ,  "node" : { "name" : "myc2-sql51", "bound_port" : 33060, "node_type" : "sql" } ,  "node" : { "name" : "myc2-sql52", "bound_port" : 33061, "node_type" : "sql" } ]

Both start up and run just fine side by side:

$ docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED              STATUS              PORTS                      NAMES
e32d4ae024fc        markleith/mysql-cluster-sql:7.5      "/home/mysql/run-mysq"   21 seconds ago       Up 9 seconds        0.0.0.0:33061->3306/tcp    myc2-sql52
038ce476e860        markleith/mysql-cluster-sql:7.5      "/home/mysql/run-mysq"   30 seconds ago       Up 20 seconds       0.0.0.0:33060->3306/tcp    myc2-sql51
32d202bd5d2d        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   34 seconds ago       Up 29 seconds       0.0.0.0:33059->11860/tcp   myc2-ndbmtd2
0b8f06de740a        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   36 seconds ago       Up 32 seconds       0.0.0.0:33058->11860/tcp   myc2-ndbmtd1
83bd9674e339        markleith/mysql-cluster-mgmd:7.5     "/home/mysql/run-mgmd"   39 seconds ago       Up 35 seconds       0.0.0.0:33057->1186/tcp    myc2-mgmd50
36cea82543f0        markleith/mysql-cluster-mgmd:7.5     "/home/mysql/run-mgmd"   41 seconds ago       Up 37 seconds       0.0.0.0:33056->1186/tcp    myc2-mgmd49
613b6c18ebd6        markleith/mysql-cluster-sql:7.5      "/home/mysql/run-mysq"   About a minute ago   Up About a minute   0.0.0.0:33055->3306/tcp    myc1-sql51
31b739edcdb4        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   About a minute ago   Up About a minute   0.0.0.0:33054->11860/tcp   myc1-ndbmtd2
18e19136accb        markleith/mysql-cluster-ndbmtd:7.5   "/home/mysql/run-data"   About a minute ago   Up About a minute   0.0.0.0:33053->11860/tcp   myc1-ndbmtd1
721b3abb7140        a62fba3c15f2                         "/home/mysql/run-mgmd"   About a minute ago   Up About a minute   0.0.0.0:33052->1186/tcp    myc1-mgmd49

You can now shut down all of a cluster’s containers using the stop command:

$ cluster.py stop
2016-10-28T09:29:38.076000: Running: docker network ls
2016-10-28T09:29:38.391000: Running: docker network inspect --format="{{range $i, $c := .Containers}}{{$i}},{{end}}" mycluster
2016-10-28T09:29:38.456000: Running: docker stop 3c781c3517a2 41c3bfcba7d1 4210e83036a3 66289dc0b529 7bb378282d22 afd8d427c751 f021167e7be7 fc0de2b342ff
2016-10-28T09:31:03.673000: Info: Stopping containers done
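The stop flow is straightforward to reproduce by hand: inspect the network to collect the attached container IDs, then stop them in one go. A minimal Python sketch of that step, run against a hypothetical abridged sample of `docker network inspect` output rather than a live daemon:

```python
import json

# Hypothetical, abridged sample of `docker network inspect mycluster` output
inspect_output = '''[{"Name": "mycluster",
                     "Containers": {"3c781c3517a2": {"Name": "mycluster-mgmd49"},
                                    "41c3bfcba7d1": {"Name": "mycluster-ndbmtd1"}}}]'''

# Collect the container IDs attached to the network...
containers = json.loads(inspect_output)[0]["Containers"]

# ...and build the equivalent of: docker stop <id> <id> ...
stop_cmd = "docker stop " + " ".join(sorted(containers))
print(stop_cmd)
```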

The start command now also recognizes when the container names already exist, and issues a docker start against them instead of a docker run:

$ cluster.py start
2016-10-28T09:37:54.937000: Running: docker network ls
2016-10-28T09:37:55.047000: Info: mycluster network found, checking if any containers are already running
2016-10-28T09:37:55.049000: Running: docker network inspect --format="{{range $i, $c := .Containers}}{{$i}},{{end}}" mycluster
2016-10-28T09:37:55.133000: Running: docker ps -q -a --filter "name=mycluster-mgmd49"
2016-10-28T09:37:55.336000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 3c781c3517a2
2016-10-28T09:37:55.381000: Running: docker start 3c781c3517a2
2016-10-28T09:37:57.149000: Running: docker port mycluster-mgmd49 1186/tcp
2016-10-28T09:37:57.200000: Running: docker ps -q -a --filter "name=mycluster-mgmd50"
2016-10-28T09:37:57.279000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 66289dc0b529
2016-10-28T09:37:57.325000: Running: docker start 66289dc0b529
2016-10-28T09:37:58.377000: Running: docker port mycluster-mgmd50 1186/tcp
2016-10-28T09:37:58.432000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd1"
2016-10-28T09:37:58.497000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 41c3bfcba7d1
2016-10-28T09:37:58.541000: Running: docker start 41c3bfcba7d1
2016-10-28T09:37:59.751000: Running: docker port mycluster-ndbmtd1 11860/tcp
2016-10-28T09:37:59.820000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd2"
2016-10-28T09:37:59.873000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" f021167e7be7
2016-10-28T09:37:59.917000: Running: docker start f021167e7be7
2016-10-28T09:38:00.999000: Running: docker port mycluster-ndbmtd2 11860/tcp
2016-10-28T09:38:01.080000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd3"
2016-10-28T09:38:01.135000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 7bb378282d22
2016-10-28T09:38:01.178000: Running: docker start 7bb378282d22
2016-10-28T09:38:02.369000: Running: docker port mycluster-ndbmtd3 11860/tcp
2016-10-28T09:38:02.424000: Running: docker ps -q -a --filter "name=mycluster-ndbmtd4"
2016-10-28T09:38:02.492000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" afd8d427c751
2016-10-28T09:38:02.534000: Running: docker start afd8d427c751
2016-10-28T09:38:03.551000: Running: docker port mycluster-ndbmtd4 11860/tcp
2016-10-28T09:38:03.649000: Running: docker ps -q -a --filter "name=mycluster-sql51"
2016-10-28T09:38:03.713000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" 4210e83036a3
2016-10-28T09:38:03.775000: Running: docker start 4210e83036a3
2016-10-28T09:38:14.329000: Running: docker port mycluster-sql51 3306/tcp
2016-10-28T09:38:14.444000: Running: docker ps -q -a --filter "name=mycluster-sql52"
2016-10-28T09:39:04.837000: Running: docker inspect --format "{{range $i, $n := .NetworkSettings.Networks}}{{$i}},{{end}}" fc0de2b342ff
2016-10-28T09:39:48.183000: Running: docker start fc0de2b342ff
2016-10-28T09:40:01.093000: Running: docker port mycluster-sql52 3306/tcp
2016-10-28T09:40:01.195000: Info: Started: [ "node" : { "name" : "mycluster-mgmd49", "bound_port" : 33044, "node_type" : "mgmd" } ,  "node" : { "name" : "mycluster-mgmd50", "bound_port" : 33045, "node_type" : "mgmd" } ,  "node" : { "name" : "mycluster-ndbmtd1", "bound_port" : 33046, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-ndbmtd2", "bound_port" : 33047, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-ndbmtd3", "bound_port" : 33048, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-ndbmtd4", "bound_port" : 33049, "node_type" : "ndbmtd" } ,  "node" : { "name" : "mycluster-sql51", "bound_port" : 33050, "node_type" : "sql" } ,  "node" : { "name" : "mycluster-sql52", "bound_port" : 33051, "node_type" : "sql" } ]
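The reuse logic visible above is simple: if `docker ps -q -a --filter "name=..."` returns a container ID, the container already exists and only needs a `docker start`; otherwise it is created with `docker run`. A sketch of that decision (this helper is illustrative, not the actual cluster.py code):

```python
def start_or_run(name, existing_id, run_args):
    """Decide between `docker start` (container already exists) and
    `docker run` (fresh container).

    existing_id is the output of:
      docker ps -q -a --filter "name=<name>"
    which is an empty string when no container matches."""
    if existing_id.strip():
        return ["docker", "start", existing_id.strip()]
    return ["docker", "run", "-d", "-P", "--name", name] + run_args
```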

And you can have the whole thing clean itself up with the clean command:

$ cluster.py clean
2016-10-28T09:54:31.418000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-mgmd:7.5" --format "{{.ID}}"
2016-10-28T09:54:31.499000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-ndbmtd:7.5" --format "{{.ID}}"
2016-10-28T09:54:31.565000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-sql:7.5" --format "{{.ID}}"
2016-10-28T09:54:31.626000: Running: docker stop 66289dc0b529 3c781c3517a2 afd8d427c751 7bb378282d22 f021167e7be7 41c3bfcba7d1 fc0de2b342ff 4210e83036a3
2016-10-28T09:55:53.496000: Running: docker rm 66289dc0b529 3c781c3517a2 afd8d427c751 7bb378282d22 f021167e7be7 41c3bfcba7d1 fc0de2b342ff 4210e83036a3
2016-10-28T09:55:55.302000: Running: docker network ls
2016-10-28T09:55:55.404000: Running: docker network rm mycluster
2016-10-28T09:55:56.036000: Running: docker network ls

As well as shutting down and removing all containers in one go, clean can also remove the images, and clean up any dangling images:

$ cluster.py clean --images --dangling
2016-10-28T13:59:44.884000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-mgmd:7.5" --format "{{.ID}}"
2016-10-28T13:59:44.942000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-ndbmtd:7.5" --format "{{.ID}}"
2016-10-28T13:59:44.999000: Running: docker ps -a --filter "ancestor=markleith/mysql-cluster-sql:7.5" --format "{{.ID}}"
2016-10-28T13:59:45.055000: Running: docker stop 83bd9674e339 36cea82543f0 32d202bd5d2d 0b8f06de740a e32d4ae024fc 038ce476e860 613b6c18ebd6
2016-10-28T14:00:48.068000: Running: docker rm 83bd9674e339 36cea82543f0 32d202bd5d2d 0b8f06de740a e32d4ae024fc 038ce476e860 613b6c18ebd6
2016-10-28T14:08:34.234000: Running: docker images markleith/mysql-cluster-* --format "{{.ID}}"
2016-10-28T14:08:34.334000: Info: Removing markleith/mysql-cluster-* images
2016-10-28T14:08:34.335000: Running: docker rmi 2412e8b33a79 a4b24729a83b 26072d4d4fe9
2016-10-28T14:08:35.488000: Running: docker images --filter "dangling=true"  --format "{{.ID}}"
2016-10-28T14:08:35.556000: Info: Removing dangling images
2016-10-28T14:08:35.557000: Running: docker rmi a62fba3c15f2 226a6721f1ae 9428a35054c0 2682265f46ba
2016-10-28T14:08:35.732000: Running: docker network ls
2016-10-28T14:08:35.787000: Running: docker network ls

Things I’ve considered but haven’t really done anything about:

  • Adding persistence of configuration and/or data
  • Adding some kind of status command

Other feedback welcome.

Running MySQL Cluster 7.5 in Docker

I’ve been wanting an easy way to play around with MySQL Cluster lately. The latest 7.5 release is based on the MySQL 5.7 branch, so it also has the sys schema packaged, giving me more areas to add cluster-specific sys integration, especially given all of the new tables that were added within ndbinfo.

There are a couple of examples of docker images that wrap older MySQL Cluster packages out there, but:

  • Nothing up to date; I wanted to use MySQL Cluster 7.5.4
  • Nothing that uses docker networks, rather than the old link style
  • Nothing that helps orchestrate starting the containers; I don’t want anything fancy, just to start a cluster of a certain shape (n of each node type), all on a local machine, purely for testing and playing around

So, with my searching failing to satisfy, I instead just created my own.

So now you can build and start a cluster with commands as simple as:

$ cluster.py --debug build
2016-10-25T16:04:33.229000: Arguments: Namespace(data_nodes=4, debug=True, func=<function build at 0x0000000002E63BA8>, management_nodes=2, sql_nodes=2)
2016-10-25T16:04:33.235000: Running: docker build -t markleith/mysqlcluster75:ndb_mgmd -f management-node/Dockerfile management-node
2016-10-25T16:04:33.511000: Running: docker build -t markleith/mysqlcluster75:ndbmtd -f data-node/Dockerfile data-node
2016-10-25T16:04:33.809000: Running: docker build -t markleith/mysqlcluster75:sql -f sql-node/Dockerfile sql-node

$ cluster.py --debug start
2016-10-25T16:04:37.007000: Arguments: Namespace(data_nodes=4, debug=True, func=<function start at 0x0000000002F73C18>, management_nodes=2, network='myclusternet', sql_nodes=2)
2016-10-25T16:04:37.012000: Running: docker network ls
2016-10-25T16:04:37.076000: myclusternet network found, using existing
2016-10-25T16:04:37.078000: Running: docker run -d -P --net myclusternet --name mymgmd49 --ip 172.18.0.249 -e NODE_ID=49 -e NOWAIT=50 -e CONNECTSTRING= markleith/mysqlcluster75:ndb_mgmd
2016-10-25T16:04:38.799000: Running: docker port mymgmd49 1186/tcp
2016-10-25T16:04:38.885000: Added: Node(mymgmd49 : 32800 : mgmd)
2016-10-25T16:04:38.887000: Running: docker run -d -P --net myclusternet --name mymgmd50 --ip 172.18.0.250 -e NODE_ID=50 -e NOWAIT=49 -e CONNECTSTRING=mymgmd49:1186 markleith/mysqlcluster75:ndb_mgmd
2016-10-25T16:04:40.338000: Running: docker port mymgmd50 1186/tcp
2016-10-25T16:04:40.394000: Added: Node(mymgmd50 : 32801 : mgmd)
2016-10-25T16:04:40.396000: Running: docker run -d -P --net myclusternet --name myndbmtd1 --ip 172.18.0.11 -e NODE_ID=1 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:41.925000: Running: docker port myndbmtd1 11860/tcp
2016-10-25T16:04:41.987000: Added: Node(myndbmtd1 : 32802 : ndbmtd)
2016-10-25T16:04:41.989000: Running: docker run -d -P --net myclusternet --name myndbmtd2 --ip 172.18.0.12 -e NODE_ID=2 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:43.280000: Running: docker port myndbmtd2 11860/tcp
2016-10-25T16:04:43.336000: Added: Node(myndbmtd2 : 32803 : ndbmtd)
2016-10-25T16:04:43.338000: Running: docker run -d -P --net myclusternet --name myndbmtd3 --ip 172.18.0.13 -e NODE_ID=3 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:44.855000: Running: docker port myndbmtd3 11860/tcp
2016-10-25T16:04:44.936000: Added: Node(myndbmtd3 : 32804 : ndbmtd)
2016-10-25T16:04:44.937000: Running: docker run -d -P --net myclusternet --name myndbmtd4 --ip 172.18.0.14 -e NODE_ID=4 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:ndbmtd
2016-10-25T16:04:49.039000: Running: docker port myndbmtd4 11860/tcp
2016-10-25T16:04:49.117000: Added: Node(myndbmtd4 : 32805 : ndbmtd)
2016-10-25T16:04:49.119000: Running: docker run -d -P --net myclusternet --name mysqlndb51 --ip 172.18.0.151 -e NODE_ID=51 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:sql
2016-10-25T16:05:06.190000: Running: docker port mysqlndb51 3306/tcp
2016-10-25T16:05:06.264000: Added: Node(mysqlndb51 : 32806 : sql)
2016-10-25T16:05:06.266000: Running: docker run -d -P --net myclusternet --name mysqlndb52 --ip 172.18.0.152 -e NODE_ID=52 -e CONNECTSTRING=mymgmd49:1186,mymgmd50:1186 markleith/mysqlcluster75:sql
2016-10-25T16:05:12.735000: Running: docker port mysqlndb52 3306/tcp
2016-10-25T16:05:13.104000: Added: Node(mysqlndb52 : 32807 : sql)
2016-10-25T16:05:13.105000: Started: [Node(mymgmd49 : 32800 : mgmd), Node(mymgmd50 : 32801 : mgmd), Node(myndbmtd1 : 32802 : ndbmtd), Node(myndbmtd2 : 32803 : ndbmtd), Node(myndbmtd3 : 32804 : ndbmtd), Node(myndbmtd4 : 32805 : ndbmtd), Node(mysqlndb51 : 32806 : sql), Node(mysqlndb52 : 32807 : sql)]

Once the containers have fully initialized, you can log in to one of the SQL nodes via its exposed port; for the mysqlndb52 container above, that is port 32807:

$ mysql -u root -pmysql -h 127.0.0.1 -P 32807
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.16-ndb-7.5.4-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show tables from ndbinfo;
+---------------------------------+
| Tables_in_ndbinfo               |
+---------------------------------+
| arbitrator_validity_detail      |
| arbitrator_validity_summary     |
| blocks                          |
| cluster_locks                   |
| cluster_operations              |
| cluster_transactions            |
| config_params                   |
| config_values                   |
| counters                        |
| cpustat                         |
| cpustat_1sec                    |
| cpustat_20sec                   |
| cpustat_50ms                    |
| dict_obj_info                   |
| dict_obj_types                  |
| disk_write_speed_aggregate      |
| disk_write_speed_aggregate_node |
| disk_write_speed_base           |
| diskpagebuffer                  |
| locks_per_fragment              |
| logbuffers                      |
| logspaces                       |
| membership                      |
| memory_per_fragment             |
| memoryusage                     |
| nodes                           |
| operations_per_fragment         |
| resources                       |
| restart_info                    |
| server_locks                    |
| server_operations               |
| server_transactions             |
| table_distribution_status       |
| table_fragments                 |
| table_info                      |
| table_replicas                  |
| tc_time_track_stats             |
| threadblocks                    |
| threads                         |
| threadstat                      |
| transporters                    |
+---------------------------------+
41 rows in set (0.00 sec)

mysql> select node_id, memory_type, sys.format_bytes(used) used, sys.format_bytes(total) total from ndbinfo.memoryusage;
+---------+---------------------+------------+-----------+
| node_id | memory_type         | used       | total     |
+---------+---------------------+------------+-----------+
|       1 | Data memory         | 704.00 KiB | 80.00 MiB |
|       1 | Index memory        | 104.00 KiB | 18.25 MiB |
|       1 | Long message buffer | 384.00 KiB | 32.00 MiB |
|       2 | Data memory         | 704.00 KiB | 80.00 MiB |
|       2 | Index memory        | 104.00 KiB | 18.25 MiB |
|       2 | Long message buffer | 256.00 KiB | 32.00 MiB |
|       3 | Data memory         | 704.00 KiB | 80.00 MiB |
|       3 | Index memory        | 104.00 KiB | 18.25 MiB |
|       3 | Long message buffer | 256.00 KiB | 32.00 MiB |
|       4 | Data memory         | 704.00 KiB | 80.00 MiB |
|       4 | Index memory        | 104.00 KiB | 18.25 MiB |
|       4 | Long message buffer | 256.00 KiB | 32.00 MiB |
+---------+---------------------+------------+-----------+
12 rows in set (0.42 sec)
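The used/total columns above are formatted by sys.format_bytes, which scales a raw byte count into KiB/MiB/GiB with two decimal places, leaving small plain byte counts unscaled. A rough Python sketch of that formatting rule (an approximation for illustration, not the exact sys implementation):

```python
def format_bytes(n):
    """Approximate the sys.format_bytes output seen above: scale to the
    largest binary unit with two decimals; plain bytes stay unscaled."""
    for limit, unit in ((1024 ** 3, "GiB"), (1024 ** 2, "MiB"), (1024, "KiB")):
        if n >= limit:
            return "%.2f %s" % (n / float(limit), unit)
    return "%d bytes" % n

print(format_bytes(704 * 1024))  # 704.00 KiB, as in the Data memory rows
```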

You can set the number of management, data and SQL nodes to create with options on the build and start commands:

$ cluster.py -h
usage: cluster.py [-h] [--debug] {build,start,stop,clean} ...

Create a test MySQL Cluster deployment in docker

positional arguments:
  {build,start,stop,clean}
    build               Build the cluster containers
    start               Start up the cluster containers
    stop                Stop the cluster containers
    clean               Stop and remove the cluster containers

optional arguments:
  -h, --help            show this help message and exit
  --debug               Whether to print debug info

$ cluster.py build -h
usage: cluster.py build [-h] [-m MANAGEMENT_NODES] [-d DATA_NODES]
                        [-s SQL_NODES]

optional arguments:
  -h, --help            show this help message and exit
  -m MANAGEMENT_NODES, --management-nodes MANAGEMENT_NODES
                        Number of Management nodes to run (default: 2; max: 2)
  -d DATA_NODES, --data-nodes DATA_NODES
                        Number of NDB nodes to run (default: 4; max: 48)
  -s SQL_NODES, --sql-nodes SQL_NODES
                        Number of SQL nodes to run (default: 2)

$ cluster.py start -h
usage: cluster.py start [-h] [-n NETWORK] [-m MANAGEMENT_NODES]
                        [-d DATA_NODES] [-s SQL_NODES]

optional arguments:
  -h, --help            show this help message and exit
  -n NETWORK, --network NETWORK
                        Name of the docker network to use
  -m MANAGEMENT_NODES, --management-nodes MANAGEMENT_NODES
                        Number of Management nodes to run (default: 2; max: 2)
  -d DATA_NODES, --data-nodes DATA_NODES
                        Number of NDB nodes to run (default: 4; max: 48)
  -s SQL_NODES, --sql-nodes SQL_NODES
                        Number of SQL nodes to run (default: 2)

Note: this is not meant to grow into some multi-machine, production-oriented MySQL Cluster deployment orchestrator; it’s purely meant for a fully local test setup. If you want to deploy MySQL Cluster in docker in production, this is not the script for you, sorry. If you want to play around locally with a test setup though, I personally think it’s ideal (it’s helping me!).

Hope it helps somebody else out there too.

Slides for Oracle OpenWorld and Percona Live Amsterdam, 2016

I’ve uploaded the slides from my latest talks at OpenWorld and Percona Live, available below. These are mostly an updated version of previous talks, with some new info added here and there.

MySQL sys version 1.5.1 released

MySQL sys version 1.5.1 has just been released.

This is a purely bug-fix release, and has been merged into the upcoming MySQL 5.7.14 release.

Here’s a full summary of the changes:

Improvements

  • A quote_identifier function was added, which can be used to properly backtick-quote identifier names
  • The `Tls_version` column of the `mysql.slave_master_info` table was added to the output of the `diagnostics` procedure (backported from the 5.7 upstream change)
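To illustrate the quoting rule quote_identifier applies: the name is wrapped in backticks, and any backtick embedded in the name itself is doubled. A Python rendering of that rule (illustrative only; in SQL you would call sys.quote_identifier directly):

```python
def quote_identifier(name):
    # Mirror of the sys.quote_identifier rule: wrap the identifier in
    # backticks, doubling any backtick embedded in the name itself.
    return "`" + name.replace("`", "``") + "`"

print(quote_identifier("my`table"))  # `my``table`
```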

Bug Fixes

  • MySQL Bug #77853 / Oracle Bug #21512106 – The `format_path` function did not consider directory boundaries when comparing variables to paths; it now does. It was also fixed to no longer translate backslashes within Windows paths to forward slashes
  • Oracle Bug #21663578 – Fixed an instability within the `sysschema.v_schema_tables_with_full_table_scans` test
  • Oracle Bug #21970078 – The `host_summary` view could fail with a division by zero error
  • MySQL Bug #78874 / Oracle Bug #22066096 – The `ps_setup_show_enabled` procedure showed all rows for the `performance_schema.setup_objects` table, rather than only those that are enabled
  • MySQL Bug #80569 / Oracle Bug #22848110 – The `max_latency` column for the `host_summary_by_statement_latency` view incorrectly showed the SUM of latency
  • MySQL Bug #80833 / Oracle Bug #22988461 – The `pages_hashed` and `pages_old` columns within the `innodb_buffer_stats_by_schema` and `innodb_buffer_stats_by_table` views were calculated incorrectly (Contributed by Tsubasa Tanaka)
  • MySQL Bug #78823 / Oracle Bug #22011361 – The `create_synonym_db` procedure failed when using reserved words as the synonym name (this change also introduced the quote_identifier function mentioned above; contributed by Paul Dubois)
  • MySQL Bug #81564 / Oracle Bug #23335880 – The `ps_setup_show_enabled` and `ps_setup_show_disabled` procedures were fixed to:
    • Show `user@host` instead of `host@user` for accounts
    • Fixed the column header for `disabled_users` within `ps_setup_show_disabled`
    • Explicitly ordered all output for test stability
    • Show disabled users for 5.7.6+
  • Oracle Bug #21970806 – The `sysschema.fn_ps_thread_trx_info` test was unstable
  • Oracle Bug #23621189 – The `ps_trace_statement_digest` procedure ran EXPLAIN incorrectly in certain cases (such as on a SHOW statement, no query being specified, or a table not being fully qualified); the procedure now catches these issues and ignores them

Slides for Performance Schema and Sys Schema in MySQL 5.7 talk at Percona Live Europe 2015

Here are the slides for my talk at Percona Live Europe in Amsterdam, 2015, for my “Performance Schema and Sys Schema in MySQL 5.7”:

Thanks for all that attended, and the good questions and feedback!

MySQL sys version 1.5.0 released

MySQL sys version 1.5.0 has just been released.

It has significant contributions from both external contributors (thank you Daniël and Shlomi) and internal (thank you Jesper and Morgan). Jesper particularly made many significant improvements in this release.

Here’s a full summary of the changes:

Improvements

  • The `format_bytes` function now shows no decimal places when outputting a simple bytes value
  • The `processlist`/`x$processlist` views were improved; changes include:
    • The `pid` and `program_name` of the connection are shown, if set within the `performance_schema.session_connect_attrs` table (Contributed by Daniël van Eeden)
    • Issue #50 – The current statement progress is reported via the new stage progress reporting within Performance Schema stages within 5.7 (such as ALTER TABLE progress reporting)
    • Issue #60 – A new `statement_latency` column was added to all versions, which reports the current statement latency with picosecond precision from the `performance_schema.events_statements_current` table, when enabled
    • Some transaction information was exposed, with the `trx_latency` (for the current or last transaction depending on `trx_state`), `trx_state` (ACTIVE, COMMITTED, ROLLED BACK), and `trx_autocommit` (YES/NO) columns
  • A new `metrics` view has been added. On 5.7 this provides a union view of the performance_schema.global_status and information_schema.innodb_metrics tables, along with P_S memory and the current time, as a single metrics output. On 5.6 it provides a union view of the information_schema.global_status and information_schema.innodb_metrics tables, along with the current time. (Contributed by Jesper Wisborg Krogh)
  • New `session`/`x$session` views have been added, which give the same output as the `processlist` view counterparts, but filtered to only show foreground connections (Contributed by Morgan Tocker)
  • A new `session_ssl_status` view was added, which shows the SSL version, cipher and session reuse statistics for each connection (Contributed by Daniël van Eeden)
  • A new `schema_auto_increment_columns` view was added, which shows statistics on each auto_increment column within the instance, including the `auto_increment_ratio`, so you can easily monitor how full specific auto_increment columns are (Contributed by Shlomi Noach)
  • A new `schema_redundant_indexes` view was added, that shows indexes made redundant (or duplicated) by other more dominant indexes. Also includes the helper view `x$schema_flattened_keys`. (Contributed by Shlomi Noach)
  • New `schema_table_lock_waits`/`x$schema_table_lock_waits` views have been added, which show any sessions that are waiting for table level metadata locks, and the sessions that are blocking them. Resolves Git Issue #57, inspired by the suggestion from Daniël van Eeden
  • The `innodb_lock_waits` view had the following columns added to it, following a manually merged contribution from Shlomi Noach for a similar view
    • `wait_age_secs` – the current row lock wait time in seconds
    • `sql_kill_blocking_query` – the “KILL QUERY <connection id>” command to run to kill the blocking session current statement
    • `sql_kill_blocking_connection` – the “KILL <connection id>” command to run to kill the blocking session
  • A new `table_exists` procedure was added, which checks for the existence of a table, and if it exists, returns the type (BASE TABLE, VIEW, TEMPORARY) (Contributed by Jesper Wisborg Krogh)
  • A new `execute_prepared_stmt()` procedure was added, which takes a SQL statement as an input variable and executes it as a prepared statement (Contributed by Jesper Wisborg Krogh)
  • A new `statement_performance_analyzer()` procedure was added, that allows reporting on the statements that have been running over snapshot periods (Contributed by Jesper Wisborg Krogh)
  • A new `diagnostics()` procedure was added, which creates a large diagnostics report based upon most of the new instrumentation now available, computed over a configurable number of snapshot intervals (Contributed by Jesper Wisborg Krogh)
  • A 5.7 specific `ps_trace_thread()` procedure was added, which now shows the hierarchy of transactions and stored routines, as well as statements, stages and waits, if enabled
  • Added a new `ps_thread_account()` stored function, that returns the “user@host” account for a given Performance Schema thread id
  • Added a new `ps_thread_trx_info()` stored function which outputs, for a given thread id, the transactions, and statements that those transactions have executed, as a JSON object
  • Added new `list_add()` and `list_drop()` stored functions, that take a string csv list, and either add or remove items from that list respectively. Can be used to easily update variables that take such lists, like `sql_mode`.
  • The `ps_thread_id` stored function now returns the thread id for the current connection if NULL is passed for the in_connection_id parameter
  • Added a new `version_major()` stored function, which returns the major version of MySQL Server (Contributed by Jesper Wisborg Krogh)
  • Added a new `version_minor()` stored function, which returns the minor (release series) version of MySQL Server (Contributed by Jesper Wisborg Krogh)
  • Added a new `version_patch()` stored function, which returns the patch release version of MySQL Server (Contributed by Jesper Wisborg Krogh)
  • The `ps_is_account_enabled` function was updated to take a VARCHAR(32) user input on 5.7, as a part of WL#2284
  • The generate_sql_file.sh script had a number of improvements:
    • Generated files are now output in to a “gen” directory, that is ignored by git
    • A new default “mysql.sys@localhost” user (that has the account locked) is now used for the MySQL 5.7+ integration as the DEFINER for all objects
    • Added a warning to the top of the generated integration file to also submit changes to the sys project
    • Improved the option of skipping binary logs, so that all routines can load as well – those that used SET sql_log_bin will now return a warning when used instead of setting the option
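
As a quick sketch of a couple of the new helpers in use (assuming the sys schema is installed, and using the documented signatures):

```sql
-- Append a value to a CSV-style list variable such as sql_mode
SET @@sql_mode = sys.list_add(@@sql_mode, 'ONLY_FULL_GROUP_BY');

-- Check whether a table exists, and if so, of which type
CALL sys.table_exists('mysql', 'user', @table_type);
SELECT @table_type; -- 'BASE TABLE'
```

The `table_exists` result comes back via the OUT parameter in a user variable, which makes it easy to use from other stored routines as well.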

Bug Fixes

  • Git Issue #51 – Fixed the `generate_sql_file.sh` script to also replace the definer in the before_setup.sql output
  • Git Issue #52 – Removed apostrophe from the `format_statement` function comment because TOAD no likey
  • Git Issue #56 – Installation failed on 5.6 with ONLY_FULL_GROUP_BY enabled
  • Git Issue #76 – Fixes for the new show_compatibility_56 variable. 5.7 versions of the `format_path()` function and `ps_check_lost_instrumentation` view were added, that use performance_schema.global_status/global_variables instead of information_schema.global_status/global_variables
  • Git Issue #79 – Fixed grammar within `statements_with_runtimes_in_95th_percentile` view descriptions
  • Oracle Bug #21484593 / Oracle Bug #21281955 – The `format_path()` function incorrectly took and returned a VARCHAR(260) instead of a VARCHAR(512) (the length the underlying value is exposed as within Performance Schema), causing sporadic test failures
  • Oracle Bug #21550271 – Fixed the `ps_setup_reset_to_default` for 5.7 with the addition of the new `history` column on the `performance_schema.setup_actors` table
  • Oracle Bug #21550054 – It is possible for the views to show data that overflows when aggregating very large values; all statistics are now reset before each test to ensure no overflows
  • Oracle Bug #21647101 – Fixed the `ps_is_instrument_default_enabled` and `ps_is_instrument_default_timed` functions to take into account the new instruments added within 5.7
  • MySQL Bug #77848 – Added the missing ps_setup_instruments_cleanup.inc
  • Fixed the `ps_setup_reset_to_default()` procedure to also set the new `ENABLED` column within `performance_schema.setup_actors` within 5.7
  • The `user_summary_by_file_io`/`x$user_summary_by_file_io` and `host_summary_by_file_io`/`x$host_summary_by_file_io` views were incorrectly aggregating all wait events, not just `wait/io/file/%`

Implementation Details

  • Tests were improved via 5.7 integration
  • Template files were added for stored procedures and functions
  • Improved the sys_config_cleanup.inc file in tests to be able to reset the sys_config table completely (including setting the set_by column to NULL). The triggers can now be told not to update the column by setting the @sys.ignore_sys_config_triggers user variable to true

Quickly tell who is writing to a MySQL replica

Many of us have been there in the past: you get an alert telling you that replication has stopped because of an error, and you dig in to it to find that you’re getting an error for an update event trying to modify a non-existent row, or a duplicate key error because the row ID for some INSERT already exists.

Even with the server set to read only (and not using the new super_read_only variable from MySQL 5.7.8), these problems can still happen – how many of you have seen over-zealous ops trying to “quickly fix” some problem only to royally screw up your data integrity?
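
As a sketch of that 5.7.8+ safeguard (note that enabling `super_read_only` implicitly enables `read_only` as well):

```sql
-- read_only blocks writes from users without the SUPER privilege
SET GLOBAL read_only = ON;

-- super_read_only (5.7.8+) additionally blocks SUPER users;
-- the replication applier threads are still allowed to write
SET GLOBAL super_read_only = ON;
```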

The question then becomes – “who or what is making changes on my replica that shouldn’t be?!?”.

The only way to find this out in the past, and still “the conventional wisdom” (I just saw it recommended in a discussion in the MySQL support group), was to turn on the binary log on the replica, make sure log_slave_updates is not enabled, and then see what events turn up in the binary log.
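
That conventional approach amounts to something like this (the binary log file name below is illustrative; `log_slave_updates` is not settable at runtime, so it must be disabled in my.cnf before a restart):

```sql
-- Confirm replicated events are not also written to this replica's binary log
SHOW VARIABLES LIKE 'log_slave_updates';

-- Then watch what turns up in the replica's own binary log
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000003' LIMIT 20;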

But if you are using 5.6, you can already get an idea of who is executing what kinds of DML statements very easily, with the default statement instrumentation in Performance Schema, via the performance_schema.events_statements_summary_by_user_by_event_name and performance_schema.events_statements_summary_by_host_by_event_name tables. These have a structure like this:

mysql> desc performance_schema.events_statements_summary_by_user_by_event_name;
+-----------------------------+---------------------+------+-----+---------+-------+
| Field                       | Type                | Null | Key | Default | Extra |
+-----------------------------+---------------------+------+-----+---------+-------+
| USER                        | char(16)            | YES  |     | NULL    |       |
| EVENT_NAME                  | varchar(128)        | NO   |     | NULL    |       |
| COUNT_STAR                  | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_TIMER_WAIT              | bigint(20) unsigned | NO   |     | NULL    |       |
| MIN_TIMER_WAIT              | bigint(20) unsigned | NO   |     | NULL    |       |
| AVG_TIMER_WAIT              | bigint(20) unsigned | NO   |     | NULL    |       |
| MAX_TIMER_WAIT              | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_LOCK_TIME               | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_ERRORS                  | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_WARNINGS                | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_ROWS_AFFECTED           | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_ROWS_SENT               | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_ROWS_EXAMINED           | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_CREATED_TMP_DISK_TABLES | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_CREATED_TMP_TABLES      | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SELECT_FULL_JOIN        | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SELECT_FULL_RANGE_JOIN  | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SELECT_RANGE            | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SELECT_RANGE_CHECK      | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SELECT_SCAN             | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SORT_MERGE_PASSES       | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SORT_RANGE              | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SORT_ROWS               | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_SORT_SCAN               | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_NO_INDEX_USED           | bigint(20) unsigned | NO   |     | NULL    |       |
| SUM_NO_GOOD_INDEX_USED      | bigint(20) unsigned | NO   |     | NULL    |       |
+-----------------------------+---------------------+------+-----+---------+-------+

The data within them is like the merged output of “SHOW GLOBAL STATUS LIKE ‘Com_%'” (only by user here for example, or by host/account in the other base summary views), with the regular performance schema statement data (latency, lock time, result info, sorting info, temporary table info etc.).

So you can get an idea of the query profile of users:

mysql> select user, event_name, count_star, sum_timer_wait, sum_rows_affected
    ->   from performance_schema.events_statements_summary_by_user_by_event_name
    ->  where sum_timer_wait > 0
    ->  order by user, sum_timer_wait desc;
+------+----------------------------------+------------+-------------------+-------------------+
| user | event_name                       | count_star | sum_timer_wait    | sum_rows_affected |
+------+----------------------------------+------------+-------------------+-------------------+
| root | statement/sql/insert             |    4654750 | 10889839312962492 |           6147833 |
| root | statement/sql/select             |    1763971 |  2685440113284300 |                16 |
| root | statement/sql/commit             |    3549022 |  2076707580306888 |                 0 |
| root | statement/sql/update             |     692793 |   429644976873960 |            692961 |
| root | statement/sql/alter_table        |       2294 |   349258266736668 |                 0 |
| root | statement/sql/set_option         |    3638143 |   175255961793804 |                 0 |
| root | statement/sql/truncate           |       4416 |   169644392962740 |                 0 |
| root | statement/sql/create_table       |       2042 |   119925607500528 |                 0 |
| root | statement/sql/show_fields        |      15795 |   119878604121228 |                 0 |
| root | statement/sql/delete             |     699378 |   116732970789132 |             40298 |
| root | statement/com/Init DB            |     218258 |    90260554011048 |                 0 |
| root | statement/sql/call_procedure     |          5 |    84983311882308 |                 0 |
| root | statement/sql/show_binlogs       |       5832 |    73956599513124 |                 0 |
| root | statement/sql/create_index       |       1530 |    53709192098256 |                 0 |
| root | statement/sql/show_tables        |      37924 |    45532153498296 |                 0 |
| root | statement/sql/show_warnings      |     892204 |    40929165782628 |                 0 |
| root | statement/sql/rollback           |     224391 |    30362071807164 |                 0 |
| root | statement/sql/show_variables     |      30521 |    25090556161620 |                 0 |
| root | statement/sql/show_status        |      17403 |    12531077415552 |                 0 |
| root | statement/com/Ping               |     358412 |    10763876159712 |                 0 |
| root | statement/sql/show_engine_status |       6964 |     9874889717568 |                 0 |
| root | statement/sql/show_keys          |      15739 |     6773811516912 |                 0 |
| root | statement/sql/show_create_table  |      34677 |     4967611443876 |                 0 |
| root | statement/sql/drop_db            |         78 |     3803154575244 |               289 |
| root | statement/com/Quit               |     566175 |     2860317274356 |                 0 |
| root | statement/sql/show_slave_status  |      11592 |     2394918433356 |                 0 |
| root | statement/sql/show_databases     |       1187 |     1722377297184 |                 0 |
| root | statement/sql/delete_multi       |       5737 |      946103948844 |                10 |
| root | statement/sql/show_master_status |       6961 |      861468069984 |                 0 |
| root | statement/sql/execute_sql        |          4 |      649259110140 |                 0 |
| root | statement/sql/create_db          |       1139 |      558651095628 |              1139 |
| root | statement/sql/change_db          |        363 |       49894596144 |                 0 |
| root | statement/sql/prepare_sql        |          4 |       31004518200 |                 0 |
| root | statement/sql/error              |        363 |       28835029596 |                 0 |
| root | statement/sql/create_procedure   |          4 |       27429645936 |                 0 |
| root | statement/sql/show_collations    |         47 |       24375874704 |                 0 |
| root | statement/sql/drop_event         |         77 |       12535227684 |                 0 |
| root | statement/sql/show_processlist   |         42 |        8276037864 |                 0 |
| root | statement/com/Set option         |        134 |        6454170660 |                 0 |
| root | statement/sql/drop_procedure     |          4 |        1955618676 |                 0 |
| root | statement/sql/create_function    |          1 |         329743728 |                 0 |
| root | statement/sql/dealloc_sql        |          4 |         304913628 |                 0 |
| root | statement/sql/empty_query        |          7 |         214863132 |                 0 |
| root | statement/sql/drop_function      |          1 |         137393220 |                 0 |
+------+----------------------------------+------------+-------------------+-------------------+
44 rows in set (0.00 sec)

This data is what forms the basis for the sys.user_summary_by_statement_type view:

mysql> select * from sys.user_summary_by_statement_type;
+------+--------------------+---------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
| user | statement          | total   | total_latency | max_latency | lock_latency | rows_sent | rows_examined | rows_affected | full_scans |
+------+--------------------+---------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
| root | insert             | 4656002 | 3.03 h        | 2.31 s      | 16.27 m      |         0 |             0 |       6149526 |          0 |
| root | select             | 1764359 | 44.77 m       | 25.23 s     | 9.03 m       |   4496820 |     132814197 |            16 |     395133 |
| root | commit             | 3549836 | 34.62 m       | 11.31 s     | 0 ps         |         0 |             0 |             0 |          0 |
| root | update             |  693048 | 7.16 m        | 1.19 s      | 39.14 s      |         0 |        727567 |        693216 |          0 |
| root | alter_table        |    2294 | 5.82 m        | 3.26 s      | 47.69 s      |         0 |             0 |             0 |          0 |
| root | set_option         | 3638446 | 2.92 m        | 56.50 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| root | truncate           |    4416 | 2.83 m        | 727.25 ms   | 5.07 s       |         0 |             0 |             0 |          0 |
| root | create_table       |    2042 | 2.00 m        | 1.17 s      | 0 ps         |         0 |             0 |             0 |          0 |
| root | show_fields        |   15795 | 2.00 m        | 556.60 ms   | 1.55 m       |    172209 |        172209 |             0 |      15795 |
| root | delete             |  699378 | 1.95 m        | 10.39 s     | 1.03 m       |         0 |        207604 |         40298 |          0 |
| root | Init DB            |  218291 | 1.50 m        | 116.22 ms   | 0 ps         |         0 |             0 |             0 |          0 |
| root | call_procedure     |       5 | 1.42 m        | 38.21 s     | 58.00 ms     |         0 |             0 |             0 |          0 |
| root | show_binlogs       |    5834 | 1.23 m        | 3.81 s      | 0 ps         |         0 |             0 |             0 |          0 |
| root | create_index       |    1530 | 53.71 s       | 811.27 ms   | 20.99 s      |         0 |             0 |             0 |          0 |
| root | show_tables        |   37925 | 45.53 s       | 626.26 ms   | 1.55 s       |     40239 |         40239 |             0 |      37925 |
| root | show_warnings      |  892204 | 40.93 s       | 97.00 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| root | rollback           |  224427 | 30.36 s       | 1.32 s      | 0 ps         |         0 |             0 |             0 |          0 |
| root | show_variables     |   30527 | 25.09 s       | 43.79 ms    | 1.49 s       |   5712888 |       5712888 |             0 |      30527 |
| root | show_status        |   17406 | 12.53 s       | 57.92 ms    | 837.06 ms    |   3966858 |       3966858 |             0 |      17406 |
| root | Ping               |  358451 | 10.77 s       | 3.01 ms     | 0 ps         |         0 |             0 |             0 |          0 |
| root | show_engine_status |    6966 | 9.88 s        | 106.50 ms   | 0 ps         |         0 |             0 |             0 |          0 |
| root | show_keys          |   15739 | 6.77 s        | 95.82 ms    | 1.14 s       |     52285 |         52285 |             0 |      15739 |
| root | show_create_table  |   34677 | 4.97 s        | 78.85 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| root | drop_db            |      78 | 3.80 s        | 1.88 s      | 3.71 s       |         0 |             0 |           289 |          0 |
| root | Quit               |  566250 | 2.86 s        | 2.25 ms     | 0 ps         |         0 |             0 |             0 |          0 |
| root | show_slave_status  |   11594 | 2.40 s        | 32.97 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| root | show_databases     |    1188 | 1.72 s        | 6.25 ms     | 50.00 ms     |      1488 |          1488 |             0 |       1188 |
| root | delete_multi       |    5737 | 946.10 ms     | 39.19 ms    | 313.02 ms    |         0 |           160 |            10 |       1444 |
| root | show_master_status |    6963 | 861.56 ms     | 22.81 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| root | execute_sql        |       4 | 649.26 ms     | 184.78 ms   | 0 ps         |         0 |             0 |             0 |          0 |
| root | create_db          |    1139 | 558.65 ms     | 22.92 ms    | 0 ps         |         0 |             0 |          1139 |          0 |
| root | change_db          |     363 | 49.89 ms      | 692.26 us   | 0 ps         |         0 |             0 |             0 |          0 |
| root | prepare_sql        |       4 | 31.00 ms      | 29.86 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| root | error              |     364 | 28.92 ms      | 347.29 us   | 0 ps         |         0 |             0 |             0 |          0 |
| root | create_procedure   |       4 | 27.43 ms      | 22.09 ms    | 10.00 ms     |         0 |             0 |             0 |          0 |
| root | show_collations    |      47 | 24.38 ms      | 1.37 ms     | 2.00 ms      |     10293 |         10293 |             0 |         47 |
| root | drop_event         |      77 | 12.54 ms      | 1.01 ms     | 4.00 ms      |         0 |             0 |             0 |          0 |
| root | show_processlist   |      42 | 8.28 ms       | 5.87 ms     | 0 ps         |         0 |             0 |             0 |          0 |
| root | Set option         |     134 | 6.45 ms       | 1.52 ms     | 0 ps         |         0 |             0 |             0 |          0 |
| root | drop_procedure     |       4 | 1.96 ms       | 916.73 us   | 1.00 ms      |         0 |             0 |             0 |          0 |
| root | create_function    |       1 | 329.74 us     | 329.74 us   | 0 ps         |         0 |             0 |             0 |          0 |
| root | dealloc_sql        |       4 | 304.91 us     | 87.73 us    | 0 ps         |         0 |             0 |             0 |          0 |
| root | empty_query        |       7 | 214.86 us     | 43.04 us    | 0 ps         |         0 |             0 |             0 |          0 |
| root | drop_function      |       1 | 137.39 us     | 137.39 us   | 1.00 ms      |         0 |             0 |             0 |          0 |
+------+--------------------+---------+---------------+-------------+--------------+-----------+---------------+---------------+------------+

The replication SQL thread statement activity is also recorded within these tables. Within the base Performance Schema tables these will have a user / host of NULL, and in the sys schema they are translated to a generic “background” user (an example from my local sandbox topology, where one table with one row was created via replication, shown under the “background” user, and the msandbox user has newly installed the sys schema):

slave1 [localhost] {msandbox} (sys) > select * from user_summary_by_statement_type;
+------------+-------------------+-------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
| user       | statement         | total | total_latency | max_latency | lock_latency | rows_sent | rows_examined | rows_affected | full_scans |
+------------+-------------------+-------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
| background | create_table      |     1 | 712.32 ms     | 712.32 ms   | 0 ps         |         0 |             0 |             0 |          0 |
| background | insert            |     1 | 546.87 us     | 546.87 us   | 444.00 us    |         0 |             0 |             1 |          0 |
| background | begin             |     1 | 26.83 us      | 26.83 us    | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | create_view       |    82 | 13.05 s       | 277.68 ms   | 25.59 ms     |         0 |             0 |             0 |          0 |
| msandbox   | create_trigger    |     2 | 738.17 ms     | 410.11 ms   | 837.00 us    |         0 |             0 |             0 |          0 |
| msandbox   | create_table      |     1 | 315.86 ms     | 315.86 ms   | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | Field List        |   135 | 262.68 ms     | 67.72 ms    | 24.98 ms     |         0 |             0 |             0 |          0 |
| msandbox   | select            |    17 | 183.08 ms     | 78.50 ms    | 59.71 ms     |       406 |          2371 |             0 |          7 |
| msandbox   | show_databases    |     5 | 173.19 ms     | 125.45 ms   | 1.53 ms      |        21 |            21 |             0 |          5 |
| msandbox   | drop_function     |    14 | 66.93 ms      | 66.00 ms    | 66.24 ms     |         0 |             0 |             0 |          0 |
| msandbox   | insert            |     2 | 66.18 ms      | 65.09 ms    | 574.00 us    |         0 |             0 |             6 |          0 |
| msandbox   | show_tables       |     2 | 47.10 ms      | 46.69 ms    | 123.00 us    |       135 |           135 |             0 |          2 |
| msandbox   | Init DB           |     3 | 46.19 ms      | 46.09 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | show_create_table |     1 | 36.10 ms      | 36.10 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | show_slave_status |     2 | 34.82 ms      | 34.69 ms    | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | create_procedure  |    26 | 24.32 ms      | 6.57 ms     | 18.21 ms     |         0 |             0 |             0 |          0 |
| msandbox   | create_function   |    14 | 5.46 ms       | 965.16 us   | 3.38 ms      |         0 |             0 |             0 |          0 |
| msandbox   | drop_procedure    |    26 | 2.04 ms       | 155.18 us   | 838.00 us    |         0 |             0 |             0 |          0 |
| msandbox   | create_db         |     1 | 733.32 us     | 733.32 us   | 0 ps         |         0 |             0 |             1 |          0 |
| msandbox   | set_option        |     6 | 638.51 us     | 307.45 us   | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | drop_trigger      |     2 | 164.66 us     | 93.56 us    | 0 ps         |         0 |             0 |             0 |          0 |
| msandbox   | Quit              |     3 | 94.20 us      | 69.39 us    | 0 ps         |         0 |             0 |             0 |          0 |
+------------+-------------------+-------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
22 rows in set (0.00 sec)

So the quick answer to the question “who is currently changing data on my replica, that is not the replication SQL thread, and how?” is:

slave1 [localhost] {msandbox} (sys) > SELECT * FROM sys.user_summary_by_statement_type WHERE user != 'background' AND rows_affected > 0;
+----------+-----------+-------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
| user     | statement | total | total_latency | max_latency | lock_latency | rows_sent | rows_examined | rows_affected | full_scans |
+----------+-----------+-------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
| msandbox | insert    |     2 | 66.18 ms      | 65.09 ms    | 574.00 us    |         0 |             0 |             6 |          0 |
| msandbox | create_db |     1 | 733.32 us     | 733.32 us   | 0 ps         |         0 |             0 |             1 |          0 |
+----------+-----------+-------+---------------+-------------+--------------+-----------+---------------+---------------+------------+
2 rows in set (0.00 sec)

Or direct from Performance Schema:

slave1 [localhost] {msandbox} (sys) > SELECT * FROM performance_schema.events_statements_summary_by_user_by_event_name WHERE user IS NOT NULL AND sum_rows_affected > 0\G
*************************** 1. row ***************************
                       USER: msandbox
                 EVENT_NAME: statement/sql/insert
                 COUNT_STAR: 2
             SUM_TIMER_WAIT: 66179499000
             MIN_TIMER_WAIT: 1090899000
             AVG_TIMER_WAIT: 33089749000
             MAX_TIMER_WAIT: 65088600000
              SUM_LOCK_TIME: 574000000
                 SUM_ERRORS: 0
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 6
              SUM_ROWS_SENT: 0
          SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 0
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 0
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 0
     SUM_NO_GOOD_INDEX_USED: 0
*************************** 2. row ***************************
                       USER: msandbox
                 EVENT_NAME: statement/sql/create_db
                 COUNT_STAR: 1
             SUM_TIMER_WAIT: 733319000
             MIN_TIMER_WAIT: 733319000
             AVG_TIMER_WAIT: 733319000
             MAX_TIMER_WAIT: 733319000
              SUM_LOCK_TIME: 0
                 SUM_ERRORS: 0
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 1
              SUM_ROWS_SENT: 0
          SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 0
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 0
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 0
     SUM_NO_GOOD_INDEX_USED: 0
2 rows in set (0.00 sec)

You can similarly do this by host, replacing “user” with “host” in the above view names, if you are trying to track down which host is causing issues.
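
For example, the host-based equivalent of the sys query above would be:

```sql
SELECT *
  FROM sys.host_summary_by_statement_type
 WHERE host != 'background'
   AND rows_affected > 0;
```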

MySQL sys version 1.4.0 released

MySQL sys version 1.4.0 has just been released.

Here’s a full summary of the changes:

Backwards Incompatible Changes

  • The `memory_global_by_current_allocated` views were renamed to `memory_global_by_current_bytes` for consistency with the other memory views
  • The `ps_setup_disable_consumers` procedure was renamed to `ps_setup_disable_consumer` for naming consistency (everything is now singular, not plural)
  • The `format_time` function displayed values in minutes incorrectly, it now rounds to minutes, and uses an ‘m’ suffix, like the rest of the units

Improvements

  • The beginnings of a mysql-test suite have been added
  • The `innodb_lock_waits`/`x$innodb_lock_waits` views were improved (Contributions by both Jesper Wisborg Krogh and Mark Matthews)
    • Added the `wait_started`, `wait_age`, `waiting_trx_started`, `waiting_trx_age`, `waiting_trx_rows_locked` and `waiting_trx_rows_modified` columns for waiting transactions
    • Added the `blocking_trx_started`, `blocking_trx_age`, `blocking_trx_rows_locked` and `blocking_trx_rows_modified` columns for the blocking transaction
    • Ordered the result set so that the oldest lock waits are first
    • The `waiting_table` and `waiting_index` were always the same as the `blocking_table` and `blocking_index`. So the blocking_% columns have been removed and the waiting_% columns have been renamed to locked_%
    • The `waiting_lock_type` and `blocking_lock_type` were also always the same. So these were removed and replaced with a single `locked_type` column
    • Renamed the `waiting_thread` and `blocking_thread` to `waiting_pid` and `blocking_pid` respectively to avoid confusion with the threads from the Performance Schema.
  • Added the `sys_get_config` function, used to get configuration parameters from the `sys_config` table – primarily from other sys objects, but can be used individually (Contributed by Jesper Wisborg Krogh)
  • Added an option to generate_sql_file.sh to generate a mysql_install_db / mysqld --initialize friendly format file
  • Added the `ps_is_thread_instrumented` function, to check whether a specified thread is instrumented within Performance Schema
  • Added the `ps_is_consumer_enabled` function, to check whether a specified consumer is enabled within Performance Schema (Contributed by Jesper Wisborg Krogh)
  • Added some further replacements to the `format_path` function (`slave_load_tmpdir`, `innodb_data_home_dir`, `innodb_log_group_home_dir` and `innodb_undo_directory`)
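As a quick taste of the new `sys_get_config` function, here is a minimal sketch (assuming the sys 1.4.0 schema is installed; the option name and default shown are just illustrative):

```sql
-- Fetch a configuration option from the sys_config table,
-- falling back to the supplied default if the option is absent
SELECT sys.sys_get_config('statement_truncate_len', '64') AS truncate_len;
```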

Bug Fixes

  • The 5.6 `host_summary` and `x$host_summary` views incorrectly had the column with `COUNT(DISTINCT accounts.user)` named `unique_hosts` instead of `unique_users` (Contributed by Jesper Wisborg Krogh)
  • Both the `format_time` and `format_bytes` functions took a BIGINT as input and returned a VARCHAR, but BIGINT could be too small for aggregated input values. Both functions now use TEXT as their input (Issue #34, Issue #38)
  • The `format_time` function displayed values in minutes incorrectly; it now rounds to minutes and uses an ‘m’ suffix, like the rest of the units
  • The `sys_config` related triggers had no DEFINER clause set
  • The `ps_setup_disable_thread` procedure always disabled the current thread, ignoring the connection id given as an argument (Contributed by Jesper Wisborg Krogh)
  • The `ps_trace_thread` procedure had an incorrect calculation of how long the procedure had been running (Contributed by Jesper Wisborg Krogh)
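To illustrate the input change, a small sketch (the values are arbitrary; `format_time` takes a latency in picoseconds and `format_bytes` takes a size in bytes, and with TEXT inputs even very large aggregated sums no longer overflow):

```sql
-- format_time takes a latency in picoseconds
SELECT sys.format_time(150000000000000) AS latency;
-- format_bytes takes a size in bytes; large aggregated sums now fit
SELECT sys.format_bytes(9876543210987654321) AS size;
```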

Implementation Details

Various changes were made to allow better generation of integration sql files:

  • The formatting for all comments has been standardized on `--` line comments; C-style /* comments */ have been removed
    • Issue #35 had one instance of this resolved in this release (contributed by Joe Grasse), but the entire code base has now been converted
  • Each object is now created within its own file; x$ views no longer live with their non-x$ counterparts
  • DELIMITERs were standardized to $$

Getting connection information with MySQL 5.7

MySQL 5.7 has seen some great improvements within Performance Schema for tracing what connections are doing: memory information, transaction information, metadata locking, prepared statements, and even user variables, so far (there is still more to come in the next release – stay tuned).

Of course there are other improvements on top of this as well, such as the new replication tables, but I won’t go into those here. Today I want to focus on the kind of data you can now get in MySQL 5.7 for debugging what a foreground user connection has been doing.

I’ve written before about how Performance Schema allows you to create a much better alternative to SHOW FULL PROCESSLIST; however, that solution never really satisfied me. It’s fine for a high-level overview of the connections to the database, but there is so much more information available within Performance Schema, even in MySQL 5.6, that can be exposed on a per-thread basis.

The problem is that a lot of the extra data I wanted exposed is really structured, such as the last N statements (or, in 5.7, even the last N transactions) that a connection has executed, or a summary of wait information. Everybody loves JSON nowadays, so the choice was obvious: I wanted a JSON object, per thread, of the available data from Performance Schema.

And so started the writing of a query from hell. But the output is, if I may say so myself, a thing of beauty.
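The full query is far too long to dissect here, but the general technique can be sketched: string-concatenate rows from the Performance Schema tables into JSON text, one object per thread (the post pre-dates the native JSON functions). A tiny, hypothetical flavour of it, using real table and column names but nothing like the complete query:

```sql
-- Build a small JSON-ish fragment of the last statements for the
-- current connection, by plain string concatenation
-- (no quote escaping of sql_text, for brevity)
SELECT CONCAT('{ "thd_id": ', t.thread_id,
              ', "last_statements": [',
              GROUP_CONCAT(CONCAT('{"statement": "', sh.sql_text, '"}')
                           ORDER BY sh.event_id SEPARATOR ', '),
              '] }') AS thread_info
  FROM performance_schema.threads AS t
  JOIN performance_schema.events_statements_history AS sh
    ON sh.thread_id = t.thread_id
 WHERE t.processlist_id = CONNECTION_ID()
 GROUP BY t.thread_id;
```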

Let’s whet that appetite a little (you may need to scroll to the right):

thread_info:
{
  "user": "root@localhost",
  "thd_id": 106950,
  "connection_id": 106927,
  "database": "memtrunkio",
  "current_statement": {
    "time": 1,
    "rows_examined": 0,
    "rows_affected": 0,
    "rows_sent": 0,
    "tmp_tables": 0,
    "tmp_disk_tables": 0,
    "sort_rows": 0,
    "sort_merge_passes": 0,
    "text": "/* mem dbpool.default */ insert into `memtrunkio__quan`.`normalized_statements_by_server_by_schema` (firstSeen, lastSeen, normalized_statement_id, `schema`, server, id) values (1424379489000, 1424430252000, x'083ADE6E
3A814355A823472B787CF079', 'memtrunkio', 'cd6fe2ae-a6ce-11e4-87eb-180373e22685', x'22D1E8D4ECDB30848B8C39456204E29F')  ON DUPLICATE KEY UPDATE
    firstSeen = COALESCE( LEAST(firstSeen, VALUES(firstSeen)), firstSeen),
    lastSeen = COALESCE( GREATEST(lastSeen, VALUES(lastSeen)), lastSeen)"
  },
  "last_transactions": [
    {
      "time": "1.86 s",
      "state": "COMMITTED",
      "mode": "READ WRITE",
      "autocommitted": "NO",
      "gtid": "",
      "isolation": "REPEATABLE READ",
      "statements": [
        {
          "time": "1.37 s",
          "schema": "memtrunkio",
          "rows_examined": 0,
          "rows_affected": 1,
          "rows_sent": 0,
          "tmp_tables": 0,
          "tmp_disk_tables": 0,
          "sort_rows": 0,
          "sort_merge_passes": 0,
          "statement": "/* mem dbpool.default */ insert into `memtrunkio__quan`.`normalized_statements_by_server_by_schema_data` (bytesMax, bytesMin, bytesTotal, collectionDuration, createdTmpDiskTables, createdTmpTables, errorCou
nt, execCount, execTimeMax, execTimeMin, execTimeTotal, lockTimeTotal, noGoodIndexUsedCount, noIndexUsedCount, rowsExaminedTotal, rowsMax, rowsMin, rowsTotal, selectFullJoin, selectFullRangeJoin, selectRange, selectRangeCheck, sel
ectScan, sortMergePasses, sortRange, sortRows, sortScan, timestamp, warningCount, round_robin_bin, normalized_statement_by_server_by_schema_id) values (null, null, null, 120000, 0, 4, 0, 4, 2858, 801, 4231, 1000, 0, 4, 1856, null,
 null, 1856, 0, 0, 0, 0, 4, 0, 0, 0, 0, 1424430258000, 0, 32344, x'CDD64BD9418A30C0B310095788B9DD41') ON DUPLICATE KEY UPDATE bytesMax = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(bytesMax), bytesMax), bytesMin = IF(VALUES(`tim
estamp`) >= `timestamp`, VALUES(bytesMin), bytesMin), bytesTotal = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(bytesTotal), by"
        },
        {
          "time": "269.82 us",
          "schema": "memtrunkio",
          "rows_examined": 0,
          "rows_affected": 1,
          "rows_sent": 0,
          "tmp_tables": 0,
          "tmp_disk_tables": 0,
          "sort_rows": 0,
          "sort_merge_passes": 0,
          "statement": "/* mem dbpool.default */ insert into `memtrunkio__quan`.`example_statements` (bytes, comments, connectionId, errors, execTime, hostFrom, hostTo, noGoodIndexUsed, noIndexUsed, rows, source_location_id, `text
`, timestamp, user, warnings, round_robin_bin, normalized_statement_by_server_by_schema_id) values (null, null, 13949, 0, 1178, 'localhost', null, 0, 1, 464, null, 'SHOW /*!50000 GLOBAL */ VARIABLES', 1424430227189, 'root', 0, 323
43, x'CDD64BD9418A30C0B310095788B9DD41')  ON DUPLICATE KEY UPDATE bytes = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(bytes), bytes), comments = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(comments), comments), connectionId =
IF(VALUES(`timestamp`) >= `timestamp`, VALUES(connectionId), connectionId), errors = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(errors), errors), execTime = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(execTime), execTime), ho
stFrom = IF(VALUES(`timestamp`) >= `timestamp`, VALUES(hostFrom), hostFrom), hostTo = IF(VALUES(`timestamp`) >= `timestamp`, VAL"
        },
        {
          "time": "42.07 ms",
          "schema": "memtrunkio",
          "rows_examined": 0,
          "rows_affected": 0,
          "rows_sent": 0,
          "tmp_tables": 0,
          "tmp_disk_tables": 0,
          "sort_rows": 0,
          "sort_merge_passes": 0,
          "statement": "/* mem dbpool.default */ commit"
        }
      ]
    }
  ],
  "metadata_locks": [
    { "lock_type": "SHARED_WRITE", "lock_duration": "TRANSACTION", "lock_status": "GRANTED", "scope": "memtrunkio__quan.normalized_statements_by_server_by_schema" },
    { "lock_type": "INTENTION_EXCLUSIVE", "lock_duration": "STATEMENT", "lock_status": "GRANTED", "scope": "*.*" }
  ],
  "table_handles": [
    { "schema": "memtrunkio__quan", "table": "normalized_statements_by_server_by_schema" }
  ],
  "memory_summary": {
    "total_memory": "70.30 KiB",
    "memory_details": [
      { "type": "sql/thd::main_mem_root", "count": 3, "allocated": "23.95 KiB", "total_allocated": "2.20 MiB", "high_allocated": "175.36 KiB" },
      { "type": "sql/NET::buff", "count": 1, "allocated": "16.01 KiB", "total_allocated": "16.01 KiB", "high_allocated": "16.01 KiB" },
      { "type": "sql/String::value", "count": 1, "allocated": "16.00 KiB", "total_allocated": "32.01 KiB", "high_allocated": "32.01 KiB" },
      { "type": "mysys/array_buffer", "count": 6, "allocated": "8.92 KiB", "total_allocated": "9.05 KiB", "high_allocated": "8.92 KiB" },
      { "type": "sql/THD::transactions::mem_root", "count": 1, "allocated": "4.02 KiB", "total_allocated": "4.02 KiB", "high_allocated": "4.02 KiB" },
      { "type": "vio/vio", "count": 1, "allocated": "608 bytes", "total_allocated": "608 bytes", "high_allocated": "608 bytes" },
      { "type": "innodb/mem0mem", "count": 1, "allocated": "232 bytes", "total_allocated": "4.29 MiB", "high_allocated": "25.69 KiB" },
      { "type": "sql/THD::Session_tracker", "count": 6, "allocated": "209 bytes", "total_allocated": "209 bytes", "high_allocated": "209 bytes" },
      { "type": "innodb/std", "count": 6, "allocated": "208 bytes", "total_allocated": "38.63 KiB", "high_allocated": "592 bytes" },
      { "type": "sql/THD::variables", "count": 1, "allocated": "64 bytes", "total_allocated": "64 bytes", "high_allocated": "64 bytes" },
      { "type": "innodb/ha_innodb", "count": 1, "allocated": "48 bytes", "total_allocated": "48 bytes", "high_allocated": "48 bytes" },
      { "type": "sql/MYSQL_LOCK", "count": 1, "allocated": "32 bytes", "total_allocated": "800 bytes", "high_allocated": "32 bytes" },
      { "type": "sql/THD::db", "count": 1, "allocated": "11 bytes", "total_allocated": "11 bytes", "high_allocated": "11 bytes" },
      { "type": "sql/Security_context", "count": 1, "allocated": "10 bytes", "total_allocated": "10 bytes", "high_allocated": "10 bytes" },
      { "type": "sql/THD::Session_sysvar_resource_manager", "count": 1, "allocated": "8 bytes", "total_allocated": "8 bytes", "high_allocated": "8 bytes" },
      { "type": "sql/MPVIO_EXT::auth_info", "count": 1, "allocated": "5 bytes", "total_allocated": "5 bytes", "high_allocated": "5 bytes" },
      { "type": "innodb/os0event", "count": 0, "allocated": "0 bytes", "total_allocated": "34.23 KiB", "high_allocated": "112 bytes" },
      { "type": "innodb/dict0mem", "count": 0, "allocated": "0 bytes", "total_allocated": "8.25 KiB", "high_allocated": "27 bytes" },
      { "type": "innodb/btr0sea", "count": 0, "allocated": "0 bytes", "total_allocated": "1.95 KiB", "high_allocated": "1.58 KiB" },
      { "type": "innodb/btr0pcur", "count": 0, "allocated": "0 bytes", "total_allocated": "281 bytes", "high_allocated": "0 bytes" }
    ]
  },
  "stage_summary": [
    { "stage": "update", "count": 24, "total_time": "6.14 s", "max_time": "2.17 s" },
    { "stage": "starting", "count": 36, "total_time": "100.37 ms", "max_time": "41.97 ms" },
    { "stage": "freeing items", "count": 34, "total_time": "4.76 ms", "max_time": "667.76 us" },
    { "stage": "init", "count": 25, "total_time": "1.27 ms", "max_time": "108.59 us" },
    { "stage": "Opening tables", "count": 29, "total_time": "269.16 us", "max_time": "24.17 us" },
    { "stage": "closing tables", "count": 34, "total_time": "217.84 us", "max_time": "95.68 us" },
    { "stage": "query end", "count": 34, "total_time": "80.45 us", "max_time": "7.28 us" },
    { "stage": "System lock", "count": 25, "total_time": "63.90 us", "max_time": "5.30 us" },
    { "stage": "checking permissions", "count": 25, "total_time": "49.99 us", "max_time": "4.30 us" },
    { "stage": "cleaning up", "count": 35, "total_time": "29.80 us", "max_time": "1.32 us" },
    { "stage": "end", "count": 24, "total_time": "24.17 us", "max_time": "3.97 us" }
  ],
  "wait_summary": {
    "total_wait_time": "6.17 s",
    "wait_details": [
      { "wait": "io/table/sql/handler", "count": 38, "time_total": "6.16 s", "time_pct": "99.7837", "time_max": "2.03 s" },
      { "wait": "idle", "count": 36, "time_total": "8.90 ms", "time_pct": "0.1443", "time_max": "3.28 ms" },
      { "wait": "io/socket/sql/client_connection", "count": 156, "time_total": "4.36 ms", "time_pct": "0.0707", "time_max": "666.64 us" },
      { "wait": "lock/table/sql/handler", "count": 25, "time_total": "32.98 us", "time_pct": "0.0005", "time_max": "2.52 us" },
      { "wait": "synch/sxlock/innodb/index_tree_rw_lock", "count": 91, "time_total": "15.49 us", "time_pct": "0.0003", "time_max": "864.76 ns" },
      { "wait": "synch/mutex/innodb/trx_mutex", "count": 156, "time_total": "12.37 us", "time_pct": "0.0002", "time_max": "536.22 ns" },
      { "wait": "synch/mutex/sql/THD::LOCK_query_plan", "count": 118, "time_total": "9.13 us", "time_pct": "0.0001", "time_max": "547.24 ns" },
      { "wait": "synch/mutex/sql/THD::LOCK_thd_data", "count": 87, "time_total": "7.04 us", "time_pct": "0.0001", "time_max": "224.21 ns" },
      { "wait": "synch/mutex/sql/THD::LOCK_thd_query", "count": 70, "time_total": "5.06 us", "time_pct": "0.0001", "time_max": "187.92 ns" },
      { "wait": "synch/mutex/innodb/trx_undo_mutex", "count": 25, "time_total": "2.15 us", "time_pct": "0.0000", "time_max": "230.04 ns" }
    ]
  },
  "user_variables": [
    { }
  ]
}

This is a snapshot of one of the connections from a MySQL Enterprise Monitor server. This particular connection has been inserting data related to the Query Analyzer: its one past transaction inserted a normalized statement and an example statement, followed by a commit, and it is currently inserting another normalized statement.

Because it is currently executing an INSERT on the `memtrunkio__quan`.`normalized_statements_by_server_by_schema` table, you can also see that it holds a SHARED_WRITE metadata lock (for the duration of the TRANSACTION) on that table, and holds a table handle on that table as well.
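That metadata lock information comes from the `performance_schema.metadata_locks` table, new in 5.7 (its instrument, `wait/lock/metadata/sql/mdl`, must be enabled). A standalone query in roughly the same shape, using the connection id from the snapshot above as an illustrative value:

```sql
-- List the metadata locks held by one connection
-- (object_schema/object_name are NULL for GLOBAL locks)
SELECT object_type, lock_type, lock_duration, lock_status,
       CONCAT(object_schema, '.', object_name) AS scope
  FROM performance_schema.metadata_locks
 WHERE owner_thread_id = sys.ps_thread_id(106927);
```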

Finally, you can see a summary of all of that connection’s memory usage, along with timing information for both stages and lower-level waits for the transactions and statements it has executed.
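The memory details, for example, are a per-thread rollup of `performance_schema.memory_summary_by_thread_by_event_name`, formatted with the sys helper functions. Roughly (again with the thread id from the snapshot as an illustrative value):

```sql
-- Per-thread memory usage by allocation type, largest current usage first
SELECT event_name AS type,
       current_count_used AS count,
       sys.format_bytes(current_number_of_bytes_used) AS allocated,
       sys.format_bytes(sum_number_of_bytes_alloc)    AS total_allocated,
       sys.format_bytes(high_number_of_bytes_used)    AS high_allocated
  FROM performance_schema.memory_summary_by_thread_by_event_name
 WHERE thread_id = 106950
   AND sum_number_of_bytes_alloc > 0
 ORDER BY current_number_of_bytes_used DESC;
```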

Whilst developing this output I’ve been using it almost exclusively against our own MEM schema, and in the process it has helped me find some interesting bugs. For instance, here’s a previous incarnation of the output, which helped find a bug where we were not committing between DELETE statements when purging data from our instrument tables. Ultimately this meant we were holding open huge transactions, with many metadata locks held across all of the instrument tables:

session_info:
{
  user: "mem@localhost",
  thd_id: 86676,
  current_statement: "",
  last_statements: [
    {"/* mem dbpool.default */ SET foreign_key_checks=1"},
    {"/* mem dbpool.default */ SET foreign_key_checks=0"},
    {"/* mem dbpool.default */ DELETE FROM `mem__instruments`.`QueryCacheFragmentationData` WHERE `timestamp` < 1393598543538 LIMIT 1000"},
    {"/* mem dbpool.default */ SET foreign_key_checks=1"},
    {"/* mem dbpool.default */ SET foreign_key_checks=0"},
    {"/* mem dbpool.default */ DELETE FROM `mem__instruments`.`ClusterDataNodeRedoLogspaceUsageData` WHERE `timestamp` < 1393598543538 LIMIT 1000"},
    {"/* mem dbpool.default */ SET foreign_key_checks=1"},
    {"/* mem dbpool.default */ SET foreign_key_checks=0"},
    {"/* mem dbpool.default */ DELETE FROM `mem__instruments`.`PreparedStatementUtilizationData` WHERE `timestamp` < 1393598543539 LIMIT 1000"},
    {"/* mem dbpool.default */ SET foreign_key_checks=1"}
  ],
  metadata_locks: [
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.preparedstatementutilizationdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanoderedologspaceusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycachefragmentationdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.semisyncmastertransactionsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbredologiousagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.connectionsaborteddata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbrowdetailsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.semisyncmasterwaitsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycachequeriesincachedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.rowaccessesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.tablelocksdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.agentluamemusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.cpupercentage" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.avgrowaccessesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.temporarytablesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbrowlocktimedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycachememorydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.rowwritesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdiskpagebufferhitratiodata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanodesinreadystatedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.connectionsmaxdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.inventorystatisticssummarizeddata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.threadpoolqueriesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbrowlockdetailsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.binlogcachedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.slaverelaylogspaceusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbsemaphoresdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.diskiototaldata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.loadaveragedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbcheckpointagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycacheaveragefreeblocksizedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycachelowmemprunesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.rowsreturneddata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.httpbytes" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.hitratiosdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.mysqlconnectionoperationsstatsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.myisamkeybufferactivitydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.diskioopstotaldata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.supportdiagnosticsadvisor_variables" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbadaptivehashsearchesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbbufferpoolactivitydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycacheefficiencydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.binlogiousagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.httpcounts_ssl" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.statementstatisticsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbdatafileiousagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.semisyncmasterwaitingsessionsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.connectionpooldata_uipool" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanodeundobufferusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.kbytesinoutdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.agentlatencytime" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.jvmmemoryusage" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanodeundologspaceusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.preparedstatementsqldata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.lockwaitratiodata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.httpclientactivity" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.httptimes_ssl" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.ramusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.databasetransactionsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.statementerrorsandwarningsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.slavesecondsbehinddata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.binlogspaceusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbdatadictionarymemorydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.ehcachebacklogactivity" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbbufferpoolpendingopsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.threadpoolqueueagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbchangebufferusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.jvmgctime" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.httpcounts" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbchangebufferactivitydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.jvmgccounts" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanodedatamemoryusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbtransactionsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.mebruntime" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.assetselectorcachesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.cpuaverage" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.httptimes" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.threadpoolqueuesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.myisamindexfileiousagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbrowlocktimestatisticsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.myisamkeybufferusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.preparedstatementapidata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbadaptivehashmemorydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.openedtablesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.threadcachedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.slowqueriesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.networktrafficadvisor_networktraffic" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.threadpoolqueriesstalleddata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodblockmemorydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanoderedobufferusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbcompressiontimedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.connectionpooldata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbrowlockwaitsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.qrtidata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbundologsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.swapioopsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.connectionsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.rowscanratiodata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.clusterdatanodeindexmemoryusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.purgeoperationsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.sortingdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.swapusagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.threadpoolcurrentthreadsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.mysqlprocessactivity" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbdoublewritebufferactivitydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.fsstatistics" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbpendinglogwritesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.querycacheaveragequerysizedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbhistlengthdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.preparedstatementapiinvalidationsdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.myisamdatafileiousagedata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbosfileaccessdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbbufferpooldata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.databaseactivitydata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.semisyncmasteravgwaittimesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__instruments.innodbopenfilesdata" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__quan.explains" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__quan.histograms" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__quan.normalized_statements_by_server_by_schema_data" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__quan.explain_stages" },
    { lock_type: "SHARED_WRITE", lock_duration: "TRANSACTION", lock_status: "GRANTED", scope: "mem__quan.example_statements" }
  ],
  memory: [
    { type: "sql/NET::buff", count: "1", allocated: "16.01 KiB", total_allocated: "16.01 KiB" },
    { type: "sql/String::value", count: "1", allocated: "16.00 KiB", total_allocated: "32.05 KiB" },
    { type: "mysys/array_buffer", count: "7", allocated: "8.98 KiB", total_allocated: "9.11 KiB" },
    { type: "sql/thd::main_mem_root", count: "1", allocated: "8.02 KiB", total_allocated: "1.42 MiB" },
    { type: "sql/THD::transactions::mem_root", count: "1", allocated: "4.02 KiB", total_allocated: "4.02 KiB" },
    { type: "vio/vio", count: "1", allocated: "488 bytes", total_allocated: "488 bytes" },
    { type: "sql/THD::Session_tracker", count: "6", allocated: "209 bytes", total_allocated: "209 bytes" },
    { type: "sql/THD::variables", count: "1", allocated: "64 bytes", total_allocated: "64 bytes" },
    { type: "sql/TABLE", count: "0", allocated: "26 bytes", total_allocated: "25.82 KiB" },
    { type: "sql/Security_context", count: "1", allocated: "10 bytes", total_allocated: "10 bytes" },
    { type: "sql/THD::Session_sysvar_resource_manager", count: "1", allocated: "8 bytes", total_allocated: "8 bytes" },
    { type: "sql/MPVIO_EXT::auth_info", count: "1", allocated: "4 bytes", total_allocated: "4 bytes" },
    { type: "sql/THD::db", count: "1", allocated: "4 bytes", total_allocated: "4 bytes" },
    { type: "myisam/record_buffer", count: "0", allocated: "0 bytes", total_allocated: "1.37 KiB" },
    { type: "myisam/MI_INFO", count: "0", allocated: "0 bytes", total_allocated: "6.44 KiB" },
    { type: "myisam/MYISAM_SHARE", count: "0", allocated: "0 bytes", total_allocated: "8.36 KiB" },
    { type: "mysys/my_file_info", count: "0", allocated: "0 bytes", total_allocated: "584 bytes" },
    { type: "mysys/IO_CACHE", count: "0", allocated: "0 bytes", total_allocated: "32.00 KiB" },
    { type: "sql/MYSQL_LOCK", count: "0", allocated: "0 bytes", total_allocated: "11.23 KiB" },
    { type: "sql/my_bitmap_map", count: "0", allocated: "0 bytes", total_allocated: "648 bytes" },
    { type: "sql/SQL_SELECT::test_quick_select", count: "0", allocated: "0 bytes", total_allocated: "603.25 KiB" },
    { type: "sql/QUICK_RANGE_SELECT::alloc", count: "0", allocated: "0 bytes", total_allocated: "583.41 KiB" }
  ],
  stage_summary: [
    { stage: "starting", count: "486", total_time: "122.07 ms", max_time: "26.19 ms" },
    { stage: "updating", count: "146", total_time: "14.52 ms", max_time: "11.06 ms" },
    { stage: "statistics", count: "40", total_time: "10.23 ms", max_time: "8.18 ms" },
    { stage: "freeing items", count: "485", total_time: "8.20 ms", max_time: "133.35 us" },
    { stage: "System lock", count: "196", total_time: "4.34 ms", max_time: "163.46 us" },
    { stage: "Sending data", count: "40", total_time: "3.79 ms", max_time: "1.60 ms" },
    { stage: "init", count: "196", total_time: "2.82 ms", max_time: "212.24 us" },
    { stage: "Opening tables", count: "442", total_time: "2.54 ms", max_time: "48.02 us" },
    { stage: "removing tmp table", count: "2", total_time: "1.52 ms", max_time: "955.25 us" },
    { stage: "update", count: "10", total_time: "983.24 us", max_time: "281.10 us" },
    { stage: "closing tables", count: "485", total_time: "747.98 us", max_time: "27.34 us" },
    { stage: "query end", count: "485", total_time: "489.16 us", max_time: "4.79 us" },
    { stage: "end", count: "196", total_time: "409.23 us", max_time: "63.80 us" },
    { stage: "cleaning up", count: "486", total_time: "363.86 us", max_time: "4.67 us" },
    { stage: "checking permissions", count: "205", total_time: "340.21 us", max_time: "6.87 us" },
    { stage: "preparing", count: "40", total_time: "257.71 us", max_time: "20.84 us" },
    { stage: "optimizing", count: "40", total_time: "182.29 us", max_time: "10.39 us" },
    { stage: "Creating tmp table", count: "2", total_time: "75.55 us", max_time: "38.71 us" },
    { stage: "executing", count: "40", total_time: "15.72 us", max_time: "1.21 us" }
  ]
}
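The dump is plain text rather than a directly queryable format, but it is easy to post-process once captured. As a sketch (the sample below is a hand-copied subset of the `stage_summary` section above, re-quoted as strict JSON, and the unit table only covers the units that appear in this output), here is how per-execution average stage times could be derived in Python:

```python
# Sketch: deriving average time per stage execution from the
# stage_summary section of the dump. The JSON below is a small,
# re-quoted subset of the output shown above (an assumption for
# illustration, not the raw function output).
import json

dump = """
{
  "stage_summary": [
    { "stage": "starting", "count": "486", "total_time": "122.07 ms", "max_time": "26.19 ms" },
    { "stage": "updating", "count": "146", "total_time": "14.52 ms", "max_time": "11.06 ms" },
    { "stage": "freeing items", "count": "485", "total_time": "8.20 ms", "max_time": "133.35 us" }
  ]
}
"""

# Only the units seen in this dump; extend as needed.
UNITS = {"us": 1e-6, "ms": 1e-3, "s": 1.0}

def to_seconds(value: str) -> float:
    """Convert a '122.07 ms' style string to seconds."""
    number, unit = value.split()
    return float(number) * UNITS[unit]

stages = json.loads(dump)["stage_summary"]

# Average time per execution of each stage, slowest first.
averages = sorted(
    ((s["stage"], to_seconds(s["total_time"]) / int(s["count"])) for s in stages),
    key=lambda pair: pair[1],
    reverse=True,
)

for stage, avg in averages:
    print(f"{stage}: {avg * 1e6:.1f} us/exec")
```

Averages are often more telling than totals here: "starting" dominates the total time mostly because it runs once per statement, so normalizing by count shows where each individual execution is actually spending its time.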

It’s been an eye opener into the transaction semantics of much of our code (especially as we use Hibernate, where it’s not always clear what is happening under the covers).

I’ve implemented this as a stored function that takes a MySQL connection id as a parameter and dumps the info for that connection. This means it can be used within any statement against tables such as performance_schema.threads or INFORMATION_SCHEMA.PROCESSLIST, for example:

mysql> SELECT sys.ps_thread_info(id) AS thread_info
    ->   FROM INFORMATION_SCHEMA.PROCESSLIST
    ->  WHERE Command != 'Sleep'
    ->  ORDER BY Time DESC\G
*************************** 1. row ***************************
thread_info:
{
  "user": "root@localhost",
  "thd_id": 122149,
  "connection_id": 122127,
  "database": "sys",
  "current_statement": {
    "time": 0,
    "rows_examined": 0,
    "rows_affected": 0,
    "rows_sent": 0,
    "tmp_tables": 1,
    "tmp_disk_tables": 1,
    "sort_rows": 1,
...

You can get a copy of the function here.
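The memory section of the dump uses human-readable sizes ("16.01 KiB", "488 bytes"), which is great to read but awkward to sort or aggregate programmatically. A small helper can convert them back to byte counts; this is a sketch, with the row data hand-copied from the output above and the unit table assumed to cover the binary units the sys schema formatters emit:

```python
# Sketch: converting the human-readable sizes from the memory
# section back to byte counts so rows can be ranked numerically.
# The sample rows are copied from the dump above for illustration.

# Binary (IEC) units, as used in the output above.
UNITS = {"bytes": 1, "KiB": 1024, "MiB": 1024 ** 2, "GiB": 1024 ** 3}

def to_bytes(value: str) -> int:
    """Convert a '16.01 KiB' style string to an integer byte count."""
    number, unit = value.split()
    return round(float(number) * UNITS[unit])

rows = [
    {"type": "sql/NET::buff", "allocated": "16.01 KiB", "total_allocated": "16.01 KiB"},
    {"type": "sql/thd::main_mem_root", "allocated": "8.02 KiB", "total_allocated": "1.42 MiB"},
    {"type": "vio/vio", "allocated": "488 bytes", "total_allocated": "488 bytes"},
]

# Rank by lifetime (total) allocation rather than what is
# currently allocated -- rows with 0 bytes currently allocated
# can still account for a lot of churn over the thread's life.
ranked = sorted(rows, key=lambda r: to_bytes(r["total_allocated"]), reverse=True)

for row in ranked:
    print(f'{row["type"]}: {to_bytes(row["total_allocated"])} bytes total')
```

Ranking on total_allocated rather than allocated is deliberate: as the output above shows, allocators such as sql/SQL_SELECT::test_quick_select report 0 bytes currently allocated but hundreds of KiB allocated over the thread's lifetime.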

I’m considering adding it to the sys schema – what do you think?