I've tried several things to get to the root of this, but I'm clueless.
Here's the Go program. It's just one file and has a /api/sign endpoint that accepts POST requests. These POST requests have three fields in the body, and they are logged in a sqlite3 database. Pretty basic stuff.
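For context, here is roughly what such a server might look like. This is only a sketch, not the original code: the driver (github.com/mattn/go-sqlite3), the field names, the table schema, and the file name are all assumptions on my part.

```go
package main

import (
	"database/sql"
	"log"
	"net/http"

	_ "github.com/mattn/go-sqlite3" // assumed driver
)

var db *sql.DB

// signHandler logs the three POST fields to the sqlite3 database.
func signHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	// Hypothetical field names; the real ones may differ.
	name := r.FormValue("name")
	email := r.FormValue("email")
	comment := r.FormValue("comment")
	if _, err := db.Exec(
		"INSERT INTO signatures(name, email, comment) VALUES(?, ?, ?)",
		name, email, comment,
	); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

func main() {
	var err error
	db, err = sql.Open("sqlite3", "dco.sqlite3")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS signatures(name TEXT, email TEXT, comment TEXT)`); err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/api/sign", signHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```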
I wrote a simple Dockerfile to containerize it. It uses golang:1.7.4 to build the binary and copies it over to alpine:3.6 for the final image. Once again, nothing fancy.
I use wrk to benchmark performance. With 8 threads and 1k connections for 50 seconds (wrk -t8 -c1000 -d50s -s post.lua http://server.com/api/sign) and a Lua script to create the POST requests, I measured the number of requests per second in different situations. In all situations, I run wrk from my laptop, and the server is a DigitalOcean VPS (2 vCPUs, 2 GB RAM, SSD, Debian 9.4) that's very close to me.
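The Lua script does nothing more than turn each request into a POST. It looks something like this (the body and content type are placeholders, not the original script):

```lua
-- post.lua: make every wrk request a POST with a fixed body.
wrk.method = "POST"
wrk.body   = "name=Jane+Doe&email=jane%40example.com&comment=hello"
wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"
```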
Directly running the binary produced 2979 requests/sec.
Docker (docker run -it -v $(pwd):/data -p 8080:8080 image) produced 179 requests/sec.
As you can see, the Docker version is over 16x slower than running the binary directly. Everything else is the same during both experiments.
I've tried the following things and there is practically no improvement in performance in the Docker version:
Tried using host networking instead of bridge. There was a slight increase to around 190 requests/sec, but it's still miserable.
Tried increasing the limit on the number of file descriptors in the container version with --ulimit nofile=262144:262144. No improvement.
Tried different go versions, nothing.
Tried debian:9.4 for the final image instead of alpine:3.7, in case it was musl that was performing terribly. Nothing here either.
(Edit) Tried running the container without a mounted volume and there's still no performance improvement.
I'm out of ideas at this point. Any help would be much appreciated!
Update: Using an in-memory sqlite3 database completely solved all performance issues!
// "file:" URI syntax is what lets sqlite honor mode=memory; cache=shared makes all pooled connections use the same in-memory database
db, err = sql.Open("sqlite3", "file:dco.sqlite3?mode=memory&cache=shared")
I knew there was a disk I/O penalty associated with Docker's abstractions (even on Linux; I've heard it's worse on macOS), but I didn't know it would be ~16x.
Edit: Using an in-memory database isn't really an option most of the time, so I found another sqlite-specific solution. Before doing any database operations, run this once to switch sqlite from the default rollback journal to WAL mode:
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
This dramatically improved the Docker version's performance to over 2.7k requests/sec!
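For reference, here is one way the pragmas can be applied from Go at startup. This is a sketch assuming the github.com/mattn/go-sqlite3 driver and the same imports as the sketch above; the helper name and file path are just illustrative.

```go
// openDB opens the database and switches it to WAL mode once at startup.
func openDB(path string) (*sql.DB, error) {
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	// journal_mode=WAL is persisted in the database file itself, so setting it
	// once applies to every later connection.
	if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
		return nil, err
	}
	// synchronous is per-connection; since database/sql keeps a pool, it can
	// also be set in the DSN if the driver supports it (newer versions of
	// mattn/go-sqlite3 accept "?_journal_mode=WAL&_synchronous=NORMAL").
	if _, err := db.Exec("PRAGMA synchronous=NORMAL"); err != nil {
		return nil, err
	}
	return db, nil
}
```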