After-shave

A few concerns have been raised about shave, namely that build failures are harder to debug in an automated environment than before, and that users end up filing useless bug reports for failed builds.

One crucial thing to realize is that, even when compiling with make V=1, everything that was not explicitly echoed was not shown (because of MAKEFLAGS=-s).
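To illustrate (a minimal sketch, not taken from shave itself): when -s is in MAKEFLAGS, make never prints recipe lines before running them, so the only output the user sees is whatever a rule explicitly echoes (the recipe lines below must be indented with a tab):

    # Minimal illustration of the MAKEFLAGS=-s effect shave relies on:
    # recipe commands are not printed, only their own output is.
    MAKEFLAGS = -s

    hello.o: hello.c
    	echo "  CC    $@"      # shown: it's the command's own output
    	$(CC) -c -o $@ $<      # the compile line itself is never displayed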

Thus, I've made a few changes:
  • add CXX support (yes, that's unrelated, but the question was raised; thanks to Tommi Komulainen for the initial patch),
  • add a --enable-shave option to the configure script,
  • make the Good Old Behaviour the default one,
  • as a side effect, the V and Q variables are now defined in the m4 macro, so please remove them from your Makefile.am files (a configure.ac sketch follows this list).
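To make the V/Q change concrete, here is a sketch of the configure.ac side; SHAVE_INIT's argument is the directory holding shave.in and shave-libtool.in (build-aux is only an assumption, adapt it to your tree), and the V and Q assignments you previously carried in Makefile.am can simply be deleted, since the macro now provides them:

    # configure.ac -- sketch; point SHAVE_INIT at wherever shave.in
    # and shave-libtool.in live in your project
    SHAVE_INIT([build-aux])
    AC_CONFIG_FILES([
        build-aux/shave
        build-aux/shave-libtool
        Makefile
    ])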

The rationale for the last point can be summarized as follows:
  • the default behaviour is as portable as before (for non-GNU make, that is), which is not the case if shave is activated by default,
  • you can still add --enable-shave to your autogen.sh script, so bootstrapping your project from an SCM will enable shave, and that's cool! (see the autogen.sh sketch after this list)
  • it doesn't break tools that were relying on automake's output.
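For the second point, a minimal autogen.sh sketch (assuming a plain autoreconf-based bootstrap) could look like this:

    #!/bin/sh
    # autogen.sh -- sketch: bootstrapping from the SCM enables shave,
    # while people building from a release tarball keep the verbose default
    autoreconf --install --force || exit 1
    ./configure --enable-shave "$@"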

Grab the latest version! (git://git.lespiau.name/shave)
