
Building and using coverage-instrumented programs with Go

tl;dr We can create coverage-instrumented binaries, run them and aggregate the coverage data from running both the program and the unit tests.

In the Go world, unit testing is tightly integrated with the Go toolchain. Write some unit tests, run go test and tell anyone who will listen that you really hope to never have to deal with a build system for the rest of your life.

Since Go 1.2 (Dec. 2013), go test has supported test coverage analysis: with the ‑cover option it will tell you how much of the code is being exercised by the unit tests.

So far, so good.

I've been wanting to do something slightly different for some time though. Imagine you have a command line tool. I'd like to be able to run that tool with different options and inputs, check that everything is OK (using something like bats), and gather coverage data from those runs. Even better, wouldn't it be neat to merge the coverage from the unit tests with the coverage from those program runs and have an aggregated view of the code paths exercised by both kinds of testing?

A word about coverage in Go

Coverage instrumentation in Go is done by rewriting the source of an application. The cover tool inserts code to increment a counter at the start of each basic block (a different counter for each basic block, of course). Some metadata is kept alongside each of the counters: the location of the basic block (source file, start/end line & columns) and its size (number of statements).
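To make this concrete, here is a hand-written sketch of what the rewriting amounts to. The names are made up for illustration: the real tool generates a struct (GoCover) holding the counters, positions and statement counts.

```go
package main

// coverCount plays the role of the per-basic-block counters the cover tool
// would generate for this file.
var coverCount = make([]uint32, 2)

// abs has two basic blocks here: the function entry and the "if" body.
// The increments below are the kind of code the cover tool inserts.
func abs(x int) int {
	coverCount[0]++ // block 0: function entry reached
	if x < 0 {
		coverCount[1]++ // block 1: the "if" body reached
		return -x
	}
	return x
}
```

After a few calls, coverCount tells you exactly which blocks ran and how often, which is all the profile file records.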

This rewriting is done automatically by go test when the user asks for coverage information (go test -x shows what's happening under the hood). go test then generates an instrumented test binary and runs it.

A more detailed explanation of the cover story can be found on the Go blog.

Another interesting thing is that it's possible to ask go test to write out a file containing the coverage information with the ‑coverprofile option. This file starts with the coverage mode, which is how the coverage counters are incremented. This is one of set, count or atomic (see the blog post for details). The rest of the file is the list of basic blocks of the program with their metadata, one block per line:

oci.go:241.29,244.9 3 4

This describes one piece of code from oci.go, composed of 3 statements without branches, starting at line 241, column 29 and finishing at line 244, column 9. This block has been reached 4 times during the execution of the test binary.
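The format is simple enough to decode by hand. A small sketch of a parser for one such line (the field names are my own, not ones from the cover tool):

```go
package main

import (
	"fmt"
	"strings"
)

// profileBlock mirrors one line of a coverage profile.
type profileBlock struct {
	file                string
	startLine, startCol int
	endLine, endCol     int
	stmts, count        int
}

// parseBlock decodes a "file:line.col,line.col statements count" line.
func parseBlock(line string) (profileBlock, error) {
	var b profileBlock
	// The file name can contain dots, so split on the last ':' first.
	i := strings.LastIndex(line, ":")
	if i < 0 {
		return b, fmt.Errorf("malformed block: %q", line)
	}
	b.file = line[:i]
	_, err := fmt.Sscanf(line[i+1:], "%d.%d,%d.%d %d %d",
		&b.startLine, &b.startCol, &b.endLine, &b.endCol, &b.stmts, &b.count)
	return b, err
}
```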

Generating coverage instrumented programs

Now, what I really want to do is to compile my program with the coverage instrumentation, not just the test binary. I also want to get the coverage data written to disk when the program finishes.

And that's when we have to start being creative.

We're going to use go test to generate that instrumented program. It's possible to define a custom TestMain function, a kind of alternative entry point for the test package. TestMain is often used to set up the test environment before running the unit tests. We can hack it a bit to call our main function and jump into running our normal program instead of the tests! I ended up with something like this:
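Here is a sketch of what such a TestMain can look like. The binary-name check and the stripTestFlags helper are illustrative choices of mine, not necessarily the exact cc-runtime code:

```go
package main

import (
	"flag"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

// main is a stub so this sketch compiles on its own; in the real project it
// is the program's normal entry point, defined in main.go.
func main() {}

// stripTestFlags drops the -test.* flags from the command line so the
// program's own flag parsing doesn't trip over them.
func stripTestFlags(args []string) []string {
	var out []string
	for _, a := range args {
		if !strings.HasPrefix(a, "-test.") {
			out = append(out, a)
		}
	}
	return out
}

// TestMain runs instead of the usual test harness entry point. When the
// binary has been renamed with `go test -o cc-runtime`, jump straight into
// main(); a plain `go test` run (binary named pkg.test) still executes the
// unit tests.
func TestMain(m *testing.M) {
	flag.Parse() // let the testing package record its -test.* flags
	if !strings.HasSuffix(filepath.Base(os.Args[0]), ".test") {
		os.Args = stripTestFlags(os.Args)
		main()
		return
	}
	os.Exit(m.Run())
}
```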

The current project I'm working on is called cc-runtime, an OCI runtime spawning virtual machines. It definitely deserves its own blog post, but for now, knowing the binary name is enough. Generating a coverage instrumented cc-runtime binary is just a matter of invoking go test:

$ go test -o cc-runtime -covermode count

I haven't used atomic as this binary is really a thin wrapper around a library and doesn't use many goroutines. I'm also assuming that the cost of atomic operations in every basic block is quite a bit higher than that of the non-atomic additions. I don't care too much if the counters are off by a bit, as long as they are strictly positive.
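The difference between the two modes boils down to how each counter bump is emitted, something like:

```go
package main

import "sync/atomic"

var blockCounter uint32

// count mode: a plain increment. Cheap, but racy: two goroutines hitting
// the same block at the same time can lose an update.
func hitCount() { blockCounter++ }

// atomic mode: the same bump through sync/atomic. Never loses an update,
// at the price of an atomic read-modify-write on every block entry.
func hitAtomic() { atomic.AddUint32(&blockCounter, 1) }
```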

We can run this binary just as if it were built with go build, except it's really a test binary and we have access to the same command line arguments as we would otherwise. In particular, we can ask to output the coverage profile.

$ ./cc-runtime -test.coverprofile=list.cov list
[ outputs the list of containers ]

And let's have a look at list.cov. Hang on... there's a problem, nothing was generated: we didn't get the usual "coverage: xx.x% of statements" at the end of a go test run and there's no list.cov in the current directory. What's going on?

The testing package flushes the various profiles to disk after running all the tests. The problem is that we don't run any test here, we just call main. Fortunately enough, the API to trigger a test run is semi-public: it's not covered by the Go 1 API guarantee and has "internal only" warnings. Not. Even. Scared. Hacking up a dummy test suite and running it is easy enough:
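As an illustration of the idea, the same effect can be sketched with testing.Main, which is public even though its documentation warns it is an internal detail. It runs a suite and writes the profiles out; with a matcher that matches nothing, zero tests actually run. Note that testing.Main calls os.Exit itself and never returns, so this has to be the very last thing the program does:

```go
package main

import "testing"

// matchNothing is a dummy pattern matcher: no test name ever matches, so
// running the suite below executes zero tests.
func matchNothing(pat, str string) (bool, error) { return false, nil }

// flushProfiles runs an empty test suite purely for its side effect: once
// the (zero) tests have completed, the testing package writes the file given
// with -test.coverprofile to disk, then exits the process.
func flushProfiles() {
	testing.Main(matchNothing, nil, nil, nil)
}
```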

There is still one little detail left. We need to call this FlushProfiles function at the end of the program, and that program could very well call os.Exit anywhere. I couldn't find anything better than writing a tiny exit package implementing the equivalent of the libc atexit() function and forbidding direct use of os.Exit in favour of exit.Exit(). It's even testable.
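A sketch of such an exit package (Register and Exit are names I made up; the profile flush would be registered as one of these handlers). The handler loop is split out of Exit precisely so it stays testable:

```go
package main

import "os"

// handlers holds the functions to run on exit, like libc's atexit(3);
// they run in reverse registration order.
var handlers []func()

// Register queues f to be run when the program exits through Exit.
func Register(f func()) { handlers = append(handlers, f) }

// runHandlers is split out from Exit so the handler logic can be unit
// tested: Exit itself never returns.
func runHandlers() {
	for i := len(handlers) - 1; i >= 0; i-- {
		handlers[i]()
	}
}

// Exit replaces every direct os.Exit call in the program.
func Exit(code int) {
	runHandlers()
	os.Exit(code)
}
```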

Putting everything together

It's now time for a full example. I have a small calc program that can compute additions and subtractions.

$ calc add 4 8

The code isn't exactly challenging:
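It boils down to something like this sketch, where run is split out of main so it's easy to test (main itself would just call run(os.Args[1:]) and report the error); the error strings match the outputs further down:

```go
package main

import (
	"fmt"
	"strconv"
)

func add(a, b int) int { return a + b }
func sub(a, b int) int { return a - b }

// run implements the whole command line tool: an operation name followed
// by two integer operands.
func run(args []string) error {
	if len(args) != 3 {
		return fmt.Errorf("expected 3 arguments, got %d", len(args))
	}
	a, err := strconv.Atoi(args[1])
	if err != nil {
		return err
	}
	b, err := strconv.Atoi(args[2])
	if err != nil {
		return err
	}
	switch args[0] {
	case "add":
		fmt.Println(add(a, b))
	case "sub":
		fmt.Println(sub(a, b))
	default:
		return fmt.Errorf("unknown operation: %s", args[0])
	}
	return nil
}
```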

I've written some unit tests for the add function only. We're going to run calc itself to cover the remaining statements. But first, let's see the unit-test code with both TestAdd and our hacked-up TestMain function. I've swept the hacky bits away into a cover package.
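TestAdd itself is the unremarkable part; a sketch (add is duplicated here so the snippet stands alone):

```go
package main

import "testing"

// add is duplicated from calc so this sketch compiles on its own.
func add(a, b int) int { return a + b }

// TestAdd is the only genuine unit test: it covers the single statement of
// add and nothing else, hence the low coverage number below.
func TestAdd(t *testing.T) {
	if got := add(4, 8); got != 12 {
		t.Fatalf("add(4, 8) = %d, want 12", got)
	}
}
```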

Let's run the unit-tests, asking to save a unit-tests.cov profile.

$ go test -covermode count -coverprofile unit-tests.cov
coverage: 7.1% of statements
ok 0.003s

Huh. 7.1%. Well, we're only testing the 1 statement of the add function after all. It's time for the magic. Let's compile an instrumented calc:

$ go test -o calc -covermode count

And run calc a few times to exercise more code paths. For each run, we'll produce a coverage profile.

$ ./calc -test.coverprofile=sub.cov sub 1 2
$ covertool report sub.cov
coverage: 57.1% of statements

$ ./calc -test.coverprofile=error1.cov foo
expected 3 arguments, got 1
$ covertool report error1.cov
coverage: 21.4% of statements

$ ./calc -test.coverprofile=error2.cov mul 3 4
unknown operation: mul
$ covertool report error2.cov
coverage: 50.0% of statements

We want to aggregate those profiles into one single super-profile. While there are hints that people are interested in merging profiles from several runs (that commit is in Go 1.8), the cover tool doesn't seem to support this kind of thing easily, so I wrote a little utility to do it: covertool.

$ covertool merge -o all.cov unit-tests.cov sub.cov error1.cov error2.cov
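Conceptually the merge is simple: for count mode, the execution counts of identical blocks add up (set mode would OR them instead). A sketch of the core, assuming each profile is represented as a map from a block's position key to its count:

```go
package main

// mergeCounts merges count-mode profiles of the same binary: blocks are
// keyed by their position ("file:start,end") and their counts are summed.
// Set mode would instead OR the counts (any non-zero wins).
func mergeCounts(profiles ...map[string]int) map[string]int {
	merged := make(map[string]int)
	for _, p := range profiles {
		for block, n := range p {
			merged[block] += n
		}
	}
	return merged
}
```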

Unfortunately again, I discovered a bug in Go's cover and so we need covertool to tell us the coverage of the aggregated profile:

$ covertool report all.cov
coverage: 92.9% of statements
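For reference, the reported number is simply the share of statements belonging to blocks that were executed at least once. A sketch of the computation a tool like covertool has to do:

```go
package main

// block holds the part of a profile line that matters for the percentage.
type block struct{ stmts, count int }

// percentCovered computes the "coverage: X% of statements" figure:
// statements in blocks hit at least once, over all statements.
func percentCovered(blocks []block) float64 {
	var covered, total int
	for _, b := range blocks {
		total += b.stmts
		if b.count > 0 {
			covered += b.stmts
		}
	}
	if total == 0 {
		return 0
	}
	return 100 * float64(covered) / float64(total)
}
```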

Not bad!

Still not 100% though. Let's fire the HTML coverage viewer to see what we are missing:

$ go tool cover -html=all.cov

Oh, indeed, we're missing 1 statement. We never call add from the command line so that switch case is never covered. Good. Seems like everything is working as intended.

Here be dragons

As fun as this is, it definitely feels like very few people are building this kind of instrumented binary. Everything is a bit rough around the edges. I may have missed something obvious, of course, but I'm sure the Internet will tell me if that's the case!

It'd be awesome if we could have something nicely integrated in the future.


  1. Hey,

    Interesting post! I wonder how much slower your instrumented program is compared to one compiled with go build?

    I know that go build does many optimizations like escape analysis, etc. I wonder if they happen when building with go test?

    By the way there is a typo "got test -x" should be "go test -x"

    1. From the Go blog post mentioned above:

      "Although that annotating assignment might look expensive, it compiles to a single "move" instruction. Its run-time overhead is therefore modest, adding only about 3% when running a typical (more realistic) test."

      So it doesn't seem too bad! It may even be an acceptable cost for production. We could then start imagining interesting things, like knowing which code paths are hot in production and which are basically dead code.

      (and thanks, fixed the typo)

    2. Oh, regarding your other comment about optimisation passes: go test -x shows that the test binary is compiled normally, there is no option disabling any of the various optimisation passes.

