I recently had to revisit this way of deploying binaries and was pleasantly surprised! 15 MB for a completely portable Hello World Haskell container, building in 30 seconds.
Golfed down the setup to a reasonable few lines (Dockerfile, build.sh and GH Actions file) for your reading pleasure.
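For readers who want the general shape of that setup, here is a minimal sketch of a multi-stage Dockerfile — the image tag, file names, and GHC flags are my assumptions, not the author's actual files, and truly static linking against libc may require a musl-based toolchain:

```dockerfile
# Build stage: compile a fully static binary.
# haskell:9.6 and the -optl flags are assumptions; an Alpine/musl
# GHC may be needed to get a genuinely static link against libc.
FROM haskell:9.6 AS build
WORKDIR /src
COPY Main.hs .
RUN ghc -O2 -optl-static -optl-pthread Main.hs -o hello

# Final stage: an empty base image containing only the binary.
FROM scratch
COPY --from=build /src/hello /hello
ENTRYPOINT ["/hello"]
```

Building `FROM scratch` is what keeps the final image down to roughly the size of the binary itself.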
What does this mean? Docker is just a wrapper for various Linux container features, right?
There is no portability across CPU architectures, and there doesn’t seem to be a stable image format either.
For example:
$ docker run -it gcc:5.1 /bin/bash
docker: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of docker.io/library/gcc:5.1 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/.
I find reproducibility more interesting than portability. If I can’t reproduce the binary (e.g. because the apt mirrors went down), I can’t port it to a new architecture either.
In my experience, the Docker ecosystem is extremely vulnerable to these kinds of problems, since container builds have unrestricted network access by default.
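If you want to at least fail loudly instead of silently fetching whatever the mirrors serve today, `docker build` does accept a flag that cuts off network access during `RUN` steps (a sketch; this only works if every source your build needs is already vendored or brought in via `COPY` layers):

```shell
# Any RUN step that tries to reach the network (apt-get,
# cabal update, curl, ...) now fails immediately, turning a
# silently-drifting build into a reproducibility check.
docker build --network=none -t hello-static .
```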
Cheers, always a pleasure.
For others: note that UPX wraps the binary in a self-extracting stub, which can make it look static, but (IIRC) dynamic links are still resolved at run-time, so it does not spare you from building a fully static binary first and only then running UPX on it.
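A quick way to check that precondition before reaching for UPX (a sketch; `hello` is a placeholder binary name, and `upx` has to be installed separately):

```shell
# For a truly static binary, ldd reports "not a dynamic executable".
ldd ./hello
# file should say "statically linked", not
# "dynamically linked, interpreter /lib/ld-...".
file ./hello
# Only once both checks pass is the compressed binary portable.
upx --best ./hello
```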
There is a proposal to validate statically-built GHC in CI, which I think would be useful, especially if the validation is done together with actual Hackage libraries.