Categories: Database, DevOps, Docker, MySQL

It is easy to simply spin up a new MySQL container and assume that it is ready for production. Nothing could be further from the truth; it is more like a preparation for disaster.
Back in October I wrote about possible ways of running multiple MySQL instances on the same hardware. As the months pass by, the project of splitting our database schemas into standalone instances is getting closer, so I started to check the different options.
EDIT: This post is outdated; here is the follow-up.
I started with Docker, because we'll use containers with the applications anyway, and I think it is a good idea to minimise the diversity of the infrastructure. I used Docker's "official" Percona image (it is official by Docker, not by Percona!), which is easy to use and flexible enough (https://hub.docker.com/_/percona/). This image supports custom config files, and you can mount your existing data and log directories into the container, so it looks nice at first sight. I found only one caveat: if I stop the container with `docker kill`, the mysql server crashes, so if you want to shut the instance down cleanly you have to kill the mysql server inside the container with `kill`, and only after it has stopped can you remove the container itself.
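For what it's worth, the clean shutdown dance looks roughly like this; the container name and the pid file path are assumptions that depend on the image and configuration:

```
# Ask mysqld inside the container to shut down cleanly instead of using
# `docker kill` (container name and pid file path are illustrative).
docker exec percona-test sh -c 'kill "$(cat /var/run/mysqld/mysqld.pid)"'

# Watch the log until the shutdown completes, then remove the stopped container.
docker logs -f percona-test
docker rm percona-test
```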
My first test was to run our development servers with Docker containers instead of native mysql servers, which went jolly well. I start the database instance by mounting the current data directory and the binlog directory inside the container, and I use a slightly modified (data files & directories) config file inside the container.
There’s only one thing you have to worry about: local connections will come from the docker0 network interface, so you have to add its IP address to the allowed hosts list (which is 172.17.0.1 at our site).
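On the MySQL side this is a single grant for the docker0 address; the user, schema and password below are placeholders:

```
# Allow connections from the docker0 gateway address (user, schema and
# password are placeholders; this syntax works on MySQL/Percona Server 5.6).
mysql -uroot -p -e "GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'172.17.0.1' IDENTIFIED BY 'secret';"
```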
This instance can be used by any of the applications, it can be part of the replica chain, and so on.
After I had used dockerized mysql for a while in the dev environment, I decided to benchmark it with sysbench.
I recommend using the 0.5 branch (which is available on GitHub), because 0.4.12 (the stable one) doesn't support parallel benchmarking of databases, so your results won't be comparable to a real workload. After compiling sysbench, we can start our tests.
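Compiling the 0.5 branch is the usual autotools routine; the repository URL, branch name and dependency list below are what I'd expect, so treat them as assumptions:

```
# Build sysbench 0.5 from source (needs autoconf, automake, libtool and the
# MySQL client development headers; the branch name may differ on your mirror).
git clone --branch 0.5 https://github.com/akopytov/sysbench.git
cd sysbench
./autogen.sh
./configure
make
sudo make install
```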
During the tests I first created the 'sbtest' database manually (at the mysql prompt: CREATE DATABASE sbtest;) and then populated it with test data using sysbench's prepare step.
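A representative prepare invocation with the 0.5 OLTP Lua script; the script path, table count/size and credentials are assumptions, not the exact values used for these runs:

```
# Create and fill the test tables in the sbtest schema (sysbench 0.5 OLTP
# script; path, table count/size and credentials are illustrative).
sysbench --test=tests/db/oltp.lua \
         --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest \
         --oltp-tables-count=8 --oltp-table-size=1000000 \
         prepare
```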
I ran the test 3 times with native mysql and 3 times with the containerised one, and I recorded only the results of the 3rd runs, to avoid problems caused by cold caches.
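The measured runs then look something like this; the thread count is an assumption, while the request limit matches the 10000 events in the report below:

```
# Run the OLTP workload against the prepared tables (thread count is
# illustrative; --max-requests=10000 matches the event count reported below).
sysbench --test=tests/db/oltp.lua \
         --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest \
         --oltp-tables-count=8 --num-threads=8 --max-requests=10000 \
         run
```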
Here are the results:
Native MySQL:
And here are the results with docker:
```
General statistics:
    total time:                          8.8669s
    total number of events:              10000
    total time taken by event execution: 70.8362s
    response time:
         min:                                  4.52ms
         avg:                                  7.08ms
         max:                                 22.28ms
         approx.  95 percentile:               9.14ms
```
The results are disappointing. The MySQL server running in a docker instance performs somewhere between 1/2 and 2/3 of the native one, which is unacceptable.
I started the container with a plain directory mount, so it is possible that we can avoid this performance overhead by mounting the data directory in a smarter way (directly from LVM? with some magic mount parameters?), but so far these are the results, and the verdict is "it is not the best idea".
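A start command along these lines matches the setup described above (mounted data and binlog directories plus a custom config file); the image tag, paths and names are placeholders, not the exact command used for the benchmark:

```
# Start the Percona image with the existing data and binlog directories and a
# custom config mounted from the host (tag, paths and names are illustrative;
# the in-container config path varies between image versions).
docker run -d --name percona-test \
  -v /var/lib/mysql:/var/lib/mysql \
  -v /var/log/mysql:/var/log/mysql \
  -v /etc/mysql/docker.cnf:/etc/my.cnf \
  -p 3306:3306 \
  percona:5.6
```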
I’ll check the other options & performance tuning later.
Over the course of the last several days I've been plagued with a couple of pretty massive performance issues running MySQL under Docker(-compose) in our developer environments. Everyone in our office is running either a Mac or some flavor of Linux. In both cases I spent a couple of days searching for a solution, running into several roadblocks and exhausting many Google search terms. Since I had such a difficult time finding the solution, here are the symptoms I was seeing and what finally worked in our case; maybe it'll save someone else some time.
Background
In both cases we were seeing incredibly long query times in certain areas of the site. The particular queries were the ones where we would expect any performance issues to be exacerbated: large result sets with several joins and sorting. However, running the same queries against the same data on the same machines, against MySQL installed natively on the host, still returned results in fractions of a second, while the Docker instances took several minutes and spiked the CPU usage. This is an existing app with existing data, but only about 6GB on disk, so nothing big.
We are using MySQL's official images from Docker Hub, specifically MySQL 5.6 to match what we currently have on our production databases. No customizations to speak of, other than setting environment variables for user credentials and mounting named volumes to load and store the data.
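Stripped of the compose file, the database service boils down to something like this; the container name, volume name and credentials are placeholders:

```
# Official MySQL 5.6 image, credentials via environment variables, data in a
# named volume (names and credentials are examples).
docker volume create dbdata
docker run -d --name app-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=app \
  -e MYSQL_USER=app \
  -e MYSQL_PASSWORD=secret \
  -v dbdata:/var/lib/mysql \
  -p 3306:3306 \
  mysql:5.6
```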
The Issue with Mac
This was the fairly simple one to fix. After past experiences with similar CPU-pegging performance issues with MySQL, I suspected it had something to do with IO being limited somewhere. It took a bit of looking around, but I eventually ran across some forum posts talking about fsync flushing issues. I've run across enough similar issues in the past that it's a wonder my mind didn't start there.
Reading on, I came across a gist with a script that turns off fsync within Hyperkit, the hypervisor that Docker for Mac runs under. This did the trick. I started up the Docker containers, loaded the page with the worst offending queries, and it loaded in a matter of seconds.
Since this was an issue with something as low level as fsync, you'd be right in thinking that there's an issue with the driver Docker uses to access the disk. Reading further in one of the GitHub issues, I found that they had apparently made changes in 17.11+ to use the `raw` disk format, which no longer needs these fsync changes. Docker 17.12 landed in stable a couple of days later and appears to have indeed fixed the issue.
The Issue with Linux
This proved to be a much trickier issue to fix, and the solution turned out to be one I tried by chance, not anything I found on the web. Like on the Mac, I was seeing the CPU spike for complex queries and waits of several minutes for results. It turns out the Mac issue is widespread enough that it's nearly impossible to do a Google search that gives you Linux-specific steps to narrow it down.
Here is a short list of many of the steps that did not work or showed minimal improvements:
- forcing Docker's storage driver to something other than the default (I ended up on overlay2 because it's recommended, as in the daemon.json sketch after this list, but again there was no noticeable difference)
- using a bind (named) volume for storing data files
- using a new mount volume from the host machine
- using a volume mounted to my host machine's existing MySQL data files, which work great with a native instance
- upgrading to MySQL 5.7
- providing a custom settings file with tuned settings that worked for the native instance
- running a standalone Docker instance (read: not compose) with minimal parameters provided
- changing how my root file system was mounted on boot to include `barrier=0`
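For the storage-driver experiment from the first item, the change amounts to the following; this is just one way to apply it, and note that it overwrites any existing daemon.json:

```
# Pin the storage driver to overlay2 and restart the daemon. This overwrites
# any existing /etc/docker/daemon.json, and images/containers created under
# the previous driver will no longer be visible.
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker info | grep 'Storage Driver'
```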
At this point I was 3 days in and pretty frustrated. I decided to change tactics, and I set out to compare the performance of the Docker vs. native instances in a better way. I started out by comparing the `EXPLAIN` results for the same query in both environments, and instantly noticed a pretty big difference: the Docker instance wasn't using a pretty important `PRIMARY` index which the native instance was using. I dug a little deeper, and no matter how the data was stored (bind vs. mount), if the query went through the Docker instance it wasn't using the index. I verified that the index existed in all instances.
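The comparison itself is just the same EXPLAIN run against both servers while checking the key column; the ports, schema and query below are stand-ins for the real ones:

```
# Same EXPLAIN against the native server and the containerised one; compare
# the `key` column (ports, schema and table names are stand-ins).
mysql -h 127.0.0.1 -P 3306 -u app -p app -e \
  "EXPLAIN SELECT o.id FROM orders o JOIN order_items i ON i.order_id = o.id ORDER BY o.created_at DESC;"
mysql -h 127.0.0.1 -P 3307 -u app -p app -e \
  "EXPLAIN SELECT o.id FROM orders o JOIN order_items i ON i.order_id = o.id ORDER BY o.created_at DESC;"
```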
Finally, for no reason that I can particularly pin down, I decided to run MySQL's optimize script on all the tables. That fixed the issue. I have no idea why this fixed anything, but it has done the trick for all the Linux systems in our office. If you have an explanation I'd be happy to hear about it down in the comments.
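For reference, "optimizing all the tables" boils down to OPTIMIZE TABLE across the schema; mysqlcheck is one way to do that in bulk, and the container name and credentials here are placeholders:

```
# Rebuild and re-analyze every table in bulk (for InnoDB, OPTIMIZE maps to a
# table recreate plus ANALYZE). Container name and credentials are placeholders.
docker exec -it app-mysql mysqlcheck --optimize --all-databases -u root -p
```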