
Die Hard Make Habits

The make build tool was (and still is) very influential in the sphere of software development tools. Its influence is so strong that even the bad aspects of its design survive into the next generation of build tools. Many generations of developers grew up in the school of make. Like the frog in the slowly heating pot, they got used to its quirks to the point of no longer feeling the pain. But they shouldn't be too quick to conclude that make's way is the one and only way. The penalty is missing the opportunity for significant improvement.

The confusion between actions and targets

Targets are the entities that are acted upon (updated, deleted, etc.), and actions are the transformations you apply to them (update, delete, etc.). Usually, the targets are files or groups of files.

The make tool introduced so-called phony targets. For example, "make clean" or "make touch" doesn't build a dependency tree; it just performs some action on some files. This is unlike "make my_exe", which first constructs a dependency tree and then performs some action on the nodes of this tree. The make tool puts both the names of the targets to be built and the names of the actions in the same namespace. This is acceptable for amateur software projects, but it is a serious limitation for real products.

To start with, there is the trivial problem of name clashes. "clean", "depends", "test", etc. are frequent names for "targets", and also a pain in the neck when you want to build an executable named "clean", "depends", "test", etc. If you think it's easy to just be careful and not give such names to what you want to build, you are simply wrong. Think about a code base with 10,000 C files and 100 contributors, and how you would go about making sure that no file will ever be named clean.c, test.c, etc. (the "etc." here really means an open-ended set). The opposite has to be watched, too: no phony target may have a name that happens to be the name of some file in the dependency tree. Policing this is technically possible, but it is not worth the trouble. A better route is to separate targets and actions, for example with "make action=clean". It would have been better still if the tool had implemented that for you, as in "make --clean" (because with "make action=clean" you still have some careful implementation work to do in your makefiles).

Besides the name clashes, there is a more subtle and more important problem with phony targets: it is very difficult to have the action of a phony target performed on the same files as the non-phony targets. Is it important to act on the same files? Again, not in very small projects. Consider "make clean" once more. If it removes all binary files matching a file pattern, that is probably good enough. Even removing the entire directory with the build output (with "rm -R") might be ok. But there are some (admittedly few) projects for which building from scratch takes a whole day on a capable computing farm.

More important than wasted build time, there may be serious consistency problems. Assume that, in some location in your code base, you can build two targets that don't share an identical set of files. It may be "make target_A" and "make target_B", or it may be "make target flavor=1st" and "make target flavor=2nd". Now, what exactly is "make clean" supposed to do? Remove the files associated with target A, with target B, or with both? Again, in small C/C++ projects this is no serious problem (no flavors, no targets with a variable set of files, etc.), but the same can't be said of larger projects (perhaps doing something completely different from C/C++ compilation). Contrast this with a situation in which you could say "make target_A --clean" and "make target_B --clean". There, the tool can implement laser-sharp cleaning. As a developer, you would only have to get the dependency tree right. Once it's correct, it's correct for all actions -- update, clean, etc. That would be the end of files overlooked by "clean".
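To make the "one correct tree, many actions" idea concrete, here is a minimal Python sketch (not make's real internals; the node API is invented for illustration): update, clean, and print can all walk the same dependency tree, so a per-target clean removes exactly that target's derived files, even when two targets share some of them.

```python
# Hypothetical sketch: once the dependency tree is right, every action
# (update, clean, print, ...) walks the same nodes, so "clean" can never
# overlook a file that "update" produced.

class Node:
    def __init__(self, path, deps=()):
        self.path = path
        self.deps = list(deps)

    def walk(self):
        """Yield every dependency depth-first, then this node itself."""
        for dep in self.deps:
            yield from dep.walk()
        yield self

def outputs(target):
    """All derived files of one target: every node that is itself built
    (i.e. has dependencies); leaves are genuine sources."""
    return [n.path for n in target.walk() if n.deps]

def clean(target):
    """Laser-sharp clean: exactly the files this target's tree derives."""
    return sorted(set(outputs(target)))

# Two targets sharing a.o but differing in their other object files.
a_o = Node("a.o", [Node("a.c")])
target_A = Node("target_A", [a_o, Node("b.o", [Node("b.c")])])
target_B = Node("target_B", [a_o, Node("c.o", [Node("c.c")])])
```

Here `clean(target_A)` names only a.o, b.o, and target_A, while `clean(target_B)` names a.o, c.o, and target_B: no guessing with file patterns, no "rm -R".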

Despite the fact that these problems have been known since the early 1990s, we keep seeing new build tools with the same confusion between targets and actions on targets. For example, Ant has only phony targets, rake has both file targets and non-file targets, etc.

Commit by default instead of print-out by default

The commandline of make is ill designed. It is probably the worst commandline among all tools ever used by more than one person, yet its style is mimicked today by almost every would-be build tool.

A first problem, quite trivial, is that the make command by itself, without any arguments, commits something. Moreover, it commits something that is impossible to revert. Why should you care? Because if something unexpected happens, you will want to roll back and debug, or just run again in a more verbose mode. There's no such possibility with make, and all that just to save typing 2-3 keys (like " -b").

When run without arguments, the majority of commandline tools in this world tell you something: "What is this?", "How do I use it?", etc. They don't do something irreversible. Does make's behavior sound like good design by comparison? Of course, you can get the safer behavior with make too (for example, by requiring "make action=update" and having a bare "make" just print which final target file would be built), but how many make-based build systems do you know that actually do this?

As a side effect, this "commit by default" scares away anyone who may want to try a partial build. You may descend into a lower-level source directory and run "make" there, but if it builds more than you expected, it's too late. This aspect, coupled with some other shortcomings, discourages users of make from "starting small" when trying something new.
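As a sketch of the alternative behavior (the "--commit" flag is invented here, not a real make option): a bare invocation only reports what would be built, and an explicit flag is required before anything irreversible happens.

```python
# Hypothetical "print-out by default" commandline: nothing irreversible
# happens unless the caller explicitly asks for it.

def run(args, final_target="my_exe"):
    """Return a description of what this invocation does."""
    if not args:
        # Safe default: report the plan, commit nothing.
        return f"would build {final_target} (pass --commit to build)"
    if "--commit" in args:
        return f"building {final_target}"
    if "--help" in args:
        return "usage: build [--commit] [--help]"
    return f"unknown arguments: {args}"
```

With this design, "starting small" in an unfamiliar directory costs nothing: the worst a bare invocation can do is print a plan you disagree with.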

Passing parameters by choosing the shell's current directory

The most far-reaching bad habit of the make commandline is the fact that too much information is transmitted to the tool by choosing a directory from which to run it.

First, let's get agreement on the basic facts. You run a commandline in one location in the code base, and it builds something. You run the exact same commandline with the same shell environment in another location, and it builds something else. Some information is passed through the name of the current directory, and it is an important piece of information, not just a marginal one like, for example, the desired verbosity level.

What are the semantics of the current directory?

The first question is also the most difficult to answer: What exactly is passed through the name of the current directory? Think about how you would describe this in a limited space, such as a ten-line usage message.

One may say that the answer is easy: "The current directory serves only one purpose: fetching a Makefile, which tells make what else to do." That is so incomplete that we can simply call it wrong. Most importantly, the current directory still dictates a lot of what happens inside that Makefile. The same Makefile in another location may do something else (or simply fail to do anything). Secondly, you can use the commandline argument -f <path_to_makefile> to fetch a Makefile from elsewhere (and some make-based build systems do exactly that at lower levels when you run "make" at the top of a build tree).

The fact is that the current directory may mean almost everything (what to build, from what, and in which flavor) or may mean very little (for example, with the "what" and "how" taken from the shell environment). That is already bad: it means that my make-based build system may be very different from yours. It also makes the behavior basically impossible to document in a generic way (by the build tool authors, as opposed to the build description authors). Contrast this with a commandline like "make --source_root=<dir> --output_root=<dir>". That can be documented easily. source_root and output_root may have default values if you are keen on saving typing (even with the current directory as the default). With that syntax, my build system would differ from yours less in total, and in less fundamental ways.
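A minimal sketch of such a commandline, assuming the hypothetical --source_root/--output_root options proposed above, with the current directory demoted to a mere default rather than a hidden channel of information:

```python
import argparse
import os

# Sketch of the proposed explicit commandline. The two options are
# invented for illustration; the point is that they are documented,
# discoverable arguments, and the current directory is only a default.

def parse(argv, cwd):
    p = argparse.ArgumentParser(description="hypothetical build tool")
    p.add_argument("--source_root", default=cwd,
                   help="root of the source tree (default: current dir)")
    p.add_argument("--output_root", default=os.path.join(cwd, "out"),
                   help="root for build output (default: ./out)")
    return p.parse_args(argv)
```

Running the tool from anywhere with explicit roots now means the same thing everywhere, and `--help` can state the semantics in a few lines.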

Mapping parameters to directories

In most make-based build systems, the current directory tells make what to build and where to put it, but some systems take this a bit further and let you choose which flavor you build by firing make in a different subdirectory. Consider as an example the following directory layout:


where the content of the files in the inner directories is the following:

include ../Makefile_level1

include ../Makefile_common

This describes, in fact, two variants (let's call them "platform" and "language") with, respectively, two and three allowed values. You can issue "make" in the leaf locations, and you get one flavor of "my_exe" built. This may look extreme, but many systems fundamentally do just this kind of unfolding of parameters, including the good old GNU Build System (the ubiquitous one based on autoconf and GNU make).
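As a concrete illustration (the directory and file names below are hypothetical, not taken from any real project), such a parameter-unfolding layout might look like this:

```
project/
  Makefile_common
  win32/                  # "platform" variant: 2 allowed values
    Makefile_level1       # contains: include ../Makefile_common
    english/Makefile      # contains: include ../Makefile_level1
    french/Makefile
    german/Makefile       # "language" variant: 3 allowed values
  linux/
    Makefile_level1
    english/Makefile
    french/Makefile
    german/Makefile
```

Running "make" in, say, linux/german builds the linux/german flavor of my_exe; the two parameters are encoded entirely in where you happen to be standing.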

There are many issues with this kind of approach. How do you decide what you keep as a commandline argument and what you specify through the current directory? What effort is needed to introduce a new allowed value for the language? Or a new value for the platform? Or a new variant altogether? How would you go about documenting the available variants and the allowed values for each of them? You may place cross-linked Readme files next to the Makefiles, but how many codebases do you know that actually have those?

Contrast this with the commandline "make platform=<plat> language=<lang>". Such a commandline makes it easy to document which variants exist and which values are allowed. It doesn't impose any arbitrary order among the variants (an order you would otherwise have to learn and remember), and it is probably easier to maintain over the lifetime of the codebase.

Sometimes this possibility is implemented as well, so that the lines:


are put in Makefile_common, along with some tricks so that the build result goes to the same location as it would if you fired "make" in a leaf directory. But why bother, when you could have used commandline arguments instead of current-directory information from the very beginning of your build description?

Building from the binary directory or from the source directory

The user community quickly learned the benefit of having separate root directories for the sources and for the build output. Let's call them the source directory and the binary directory, respectively.

There are many good and bad reasons for the separation. One bad reason, in the case of make-based systems, is that make's timestamp checks are not reliable enough and its clean is not very precise (so a complete manual cleaning is needed at times, and that's most practical as "rm -R" on the binary directory). Whatever the reasons, the fact is that you end up with a source directory structure and a binary directory structure that are somewhat similar. Usually, the binary directory tree is "broader", in order to store all derived files in several flavors without filename clashes.
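For reference, the heart of make's timestamp heuristic fits in a few lines. This Python sketch (a simplification, not make's actual code) shows the mtime comparison that misfires when, for example, a version-control system materializes a different source file under an older timestamp:

```python
import os
import tempfile

# Simplified model of make's out-of-date check: a target is rebuilt
# when it is missing or when any source has a newer mtime. Note what it
# does NOT check: file content. An older-but-different source file is
# silently considered "up to date".

def out_of_date(target, sources):
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(s) > target_mtime for s in sources)
```

The fragility is structural: the decision depends only on clock values, so clock skew, restored backups, and dynamic version-control views all defeat it.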

Here's the tricky question: Where do you put your makefiles? There are two opposed styles:

Close to sources
Makefiles are placed in the source directory. For any built file, the path into the binary directory is computed somewhere inside the Makefile from the path of the source file.
Close to binaries
Makefiles are placed in the binary directory. The path to the sources is computed somehow from the current directory, perhaps with the help of given arguments.

Mixed solutions are also frequently used. For example, you start "make" somewhere in the binary directory (you specify the output path through the shell's current directory) and you point it to a Makefile "close to the sources" (you specify the location of the sources by explicitly giving the location of the Makefile).

Do we really need all these possibilities? It's true that a good build system must be able to accommodate almost any organization of the sources, and it must let you design any sensible binary directory structure. But how exactly does the use of the shell's current directory help with that? What sensible directory structure is made impossible by a commandline syntax like "make --source_root=<dir> --output_root=<dir>"? The use of the current directory by make is not flexibility provided by the make tool; it is just useless variability across the many make-based build systems.

One last word about the myth of build descriptions "close to the sources". In large real-life C/C++ projects, it is not uncommon to have hundreds of directories containing source files. What does it mean to be "close" to 500 different locations? Are you going to spread your build description over 500 small chunks? That's a possibility, but then each chunk will contain very little information, probably just a few trivial lines (often the real information is then the full path of the directory where the chunk is placed). Smarter make-based build systems moved away from that model at the same time they moved away from the recursive make model. That means the entire build description lives in one place, probably close to the root of the source directory tree (and possibly "far" from the deepest directories with sources).

I hope that you are convinced by now that using the current directory instead of commandline arguments is not a good idea. If you're still not convinced, try to count how many commandline tools you know that use the current directory to pass crucial information. Any compilers? Linkers? Debuggers? Others?

Inability to list sources

One of the first actions that one would implement on a dependency tree is the ability to print it out. Several flavors are interesting to print: all genuine input, all build output, all implicit dependencies, etc. Personally, before I would implement "incremental build", "forced build", "clean", etc., I would implement "print", and I would test whether my tree is as expected in less trivial cases.

How many make-based systems have something like "make action=listsources"? Worse, a depressing number of new build tools are proposed without this simple, basic feature. What hope is there, then, for more elaborate yet useful actions like "clean all files that are not up to date"?
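A sketch of what such a "listsources" action amounts to, using an invented dictionary representation of the dependency tree: the genuine inputs are the leaves, and the build outputs are the derived nodes. The triviality of the walk is exactly the point; its absence from so many tools is telling.

```python
# Hypothetical tree representation: each derived file maps to the list
# of files it is built from. Anything that never appears as a key is a
# genuine input (a leaf).

def list_sources(tree):
    """Return (genuine_inputs, build_outputs) for a dependency tree."""
    derived = set(tree)
    sources = set()
    for deps in tree.values():
        sources.update(d for d in deps if d not in derived)
    return sorted(sources), sorted(derived)

tree = {
    "my_exe": ["a.o", "b.o"],
    "a.o": ["a.c", "a.h"],
    "b.o": ["b.c", "a.h"],
}
genuine_inputs, build_outputs = list_sources(tree)
```

Printing either list is a one-liner once the tree exists, which also makes it the natural first action to implement and the cheapest way to test a tree in non-trivial cases.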

No target platform concept

Make doesn't go far toward modelling important concepts like the C/C++ tool chain, build platforms, target platforms, optimization levels, etc. This is presented as flexibility: you are free to model them as you see fit, with any set of shell variables, any piece of shared Makefile, etc. Fine, let's accept that as freedom. But there is one place where this freedom hurts badly: the lack of separation between the build platform and the target platform. Even a weak separation (a few conventional macros or macro prefixes) would have been better than nothing at all. It may well be that the majority of compilations in this world are not cross-compilations, but that thought will not ease the pain of the embedded software engineer. It will only make him curse harder.

When designing a build system, based on make or not, it is not smart to cut yourself off from the community of embedded porting engineers. The porting engineer is usually more closely involved with the build than the average mainstream platform developer. An important part of porting software to embedded platforms is the change/adaptation of the build description, and it had better not amount to a complete rewrite of the build description (in another build system).

It is sad that most build tools since make have shown no courtesy toward embedded software engineers. Worse, the most recent build tools only aggravate the situation. How many of these recent build tools have an option like "tgtplatform=<plat>" to choose what to build for?
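A sketch of what a "--tgtplatform" option could do (the option name, the platform names, and the toolchain table below are all hypothetical): the target platform selects the tool chain explicitly, instead of being conflated with whatever build platform the tool happens to be running on.

```python
# Hypothetical separation of build platform and target platform: the
# build platform is where the tools run; the target platform is what
# the output runs on. Toolchain names are illustrative only.

TOOLCHAINS = {
    "native": "cc",
    "arm-eabi": "arm-none-eabi-gcc",     # hypothetical cross toolchain
    "mips-linux": "mips-linux-gnu-gcc",  # hypothetical cross toolchain
}

def compiler_for(tgtplatform="native"):
    """Pick the compiler from the target platform, never by probing
    the machine the build happens to run on."""
    try:
        return TOOLCHAINS[tgtplatform]
    except KeyError:
        raise ValueError(f"unknown target platform: {tgtplatform!r}")
```

The table is also where a porting engineer would register a new home-baked toolchain once, and switching between a cross build and a native reference build becomes a one-argument change.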

Some recent build tools expect to configure everything on the fly when you start a build, and they are proud to present this as progress. They say, "Look, no configuration needed!" They actually detect installed tool chains by inspecting the build platform they are running on. While this is a desirable feature when not cross-compiling, it will most likely be a pain for the porting engineer. His tool chain will not be detected. He will have to do some manual configuration (which is acceptable), but he will also have to implement something that lets him switch tool chains (because any porting effort involves reference builds for the build platform itself). He now has more work than with his previous, "dumber" build system.

With a commandline like "make --tgtplatform=<plat>", the make tool had the opportunity to introduce the world to the fact that cross-compilation exists, but it missed it. When will the chance come again?

In the end

I've listed only a few shortcomings of "make" that tend to survive longer than "make" itself. There are others, but the point is not to count them, nor to compute their "total weight". The point is to get into the habit of questioning whether some way of doing things is still the most appropriate for the situation at hand, and to compare alternatives rather than choose by inertia. You may discover a new world that feels even cozier than your previous one.

Recent comments

21 Oct 2007 19:18 Avatar xnc

Re: Make has problems for sure...

> However, make is unlikely to be replaced any time soon for the same reason that vi is unlikely to go away: Although vi is a crappy idiosyncratic editor that violates just about every user-interface principle known to man, it is ubiquitous and "good enough".

Fei, I say. Fei to you and your horse. vi(m) is as user friendly as $PICK_YOUR_FAVORITE_EDITOR, even on a Mac. Idiosyncratic? Yes. Difficult to use? No more so than MS-Windows or KDE or GNOME. Apple used to publish a book "Macintosh Human Interface Guidelines" that really did a great job of breaking down the computer/user interface. Everything else is just as idiosyncratic as vi(m).

And, yes, I am a vim aficionado.

28 Sep 2007 17:15 Avatar buildsmith

Re: Example
First, I would like to thank you for your nice offer, for your patience, and for making me discover (looks like a neat tool).

Second, I would like to make clear that my rant is not primarily against the autotools. It is primarily against the make tool, and against some more modern replacements that (despite not being constrained by any backward compatibility) missed some opportunities. The autotools are not in that category (not a replacement but a patch on top of make, and not free of backward compatibility constraints).
Now on to the details:

> Where you have NxM makefiles already, I have one - no includes.

You mean you still have N*M, but you don't feel the pain because they are all "easy" to generate from some unique genuine input. You are sadly wrong. The thing you overlook is that not everybody is in the use case of "generate once and forget ever after". To me, this approach is "been there, done that, went away". We ended up rather quickly with a build to keep the makefiles up to date, and a very tricky one at that (it is difficult to detect automatically when exactly the makefiles need to be regenerated).

One way to see that this doesn't fly is to catch a Symbian developer and ask him how many times he forgot to run a.bat (that's the top script that regenerates the makefiles in the build system of the Symbian SDK).



> and what you would do on the shell:

Thank you for spending the time to set up the example. Reading it reinforces my older conviction that autoconf is rather OK (despite its shellish, old-looking syntax) and automake is much less OK. Not because of the syntax, but because of the same "soup of global variables" as bare make. No structure, no attempt to separate things that should be separate. AM_CFLAGS can be abused exactly as CFLAGS can be abused.

> at least convince you that autotools does the right thing for those who use it.

Right. I'm convinced. The key wording here is "for those who use it", meaning that people are generally smart and they choose and stick with the tool that matches their needs. The autotools made a quantum leap for the distribution of software as sources on Unix systems. We should be grateful.

> I'd be delighted to discuss and implement with you how your project could/would look like if it were to use autotools.

Thank you. You know, my company (Nuance Communications) is hiring right now in my division :-). But you would have to be available to relocate to Germany or Belgium...

> I am not trying to convert you,
> ...
> Then you can still decide.

I do appreciate that you don't try to convert me. There are too many passionate flames around build tools. I will give you a bit of background. You'll see why I'm difficult to convert, and also what kind of needs I have from a build system.

My company is all about closed-source products. I work in embedded, and we see a dozen new target platforms every year (basically all hardware manufacturers come with their home-baked GCC toolchain or their Win CE flavor). The build machines are 90% of the time Windows XP (and we don't really have any choice in that). As far as the build is concerned, the 13-year history of our products (speech processing engines) goes like this:

1. A few years of manually maintained makefiles and Microsoft project files (dsp/dsw)

2. Quite some years of generated makefiles and dsp files

3. Finally, 3 years on SCons and continuing

The last solution before SCons was not autotools or similar (my preferred tool in that category is CMake). No source package, and no executable thing in it. Yet in our home-grown solution the makefiles were also generated from higher-level build descriptions: Python scripts were generating both the makefiles and the dsp files from XML files. We never distributed our builds except to outsourcing partners. But what really killed the autotools for us was the fact that they use GNU make. GNU make is a catastrophe when you use a version-control system with dynamic views (sources can change to older in time, or newer but still older than your compiled objects).

The solution currently in use is based on SCons. More than one part of our company migrated independently to SCons. The big argument is the fact that the command line is part of the MD5 signature of the built files. Very reliable builds are the immediate consequence.
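The signature idea mentioned here can be sketched in a few lines (a simplification of what SCons actually stores, not its real implementation): the rebuild decision hashes the file content together with the command line, so changing a compiler flag invalidates the result even though no source file changed, and timestamps play no role at all.

```python
import hashlib

# Simplified content-plus-command-line signature, in the spirit of
# SCons: a built file is up to date only if both its inputs' content
# and the command that produced it are unchanged.

def signature(content: bytes, command_line: str) -> str:
    h = hashlib.md5()
    h.update(content)
    h.update(command_line.encode())
    return h.hexdigest()

same_source = b"int main(void) { return 0; }"
sig_debug = signature(same_source, "cc -g -o a.o")
sig_release = signature(same_source, "cc -O2 -o a.o")
```

Because `sig_debug` and `sig_release` differ even for identical sources, switching flags forces a rebuild; and because the signature is content-based, an older-but-identical file does not.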

Another argument was that it's a radical solution to the "out-of-date makefile" accidents. There are also disadvantages to SCons (see my article "Make alternatives", or the story of KDE trying SCons and abandoning it). All in all, the lesson to take away is that active development of C code is a usage scenario very different in its needs from the distribution of software as sources.

> each `make` invocation. "--build" is even a standardized option. While "--enable-debug" is not, it is still much more common ...

Yes, --build is a very good thing, although the target platform as a concept is a bit more than just selecting the backend of the C compiler. --enable-debug is poor. This is what could be called the "build optimization" variant/flavor. Any serious development needs more than just dbg/rls. Take a look at the Jam build tool setup. Those people did a fair job of analyzing the interesting variants for C/C++ builds. SCons is as barebones as make in this respect.

> (you would have to add it to all flavors of Make; GNU, Solaris, BSD... - tough luck).

Aah, good that you remind me! Although in the end we no longer aimed to support more than 2 make tools (Opus make and GNU make), that was still such a pain in the neck (mainly because of the different syntax for 'if'). Since Python was already present on our build machines (for generating the makefiles, among other things), when we moved to SCons we just eliminated one tool from the requirements (actually several, all the make clones in one shot). Everybody felt this as a relief...

> 'make' should do "is this target newer? rebuild",

If this is the core business of make, then this summarizes well why make is bad, because it is so poor at reliably answering the question "is this target out of date?". I'll stop here; many classic articles are available on this subject. SCons is one of the best in this respect, both in what it offers out of the box and in how it lets you program your own "is it out of date?".

28 Sep 2007 01:40 Avatar jengelh


> where the content of the files in the inner directories is the following: [...] This describes, in fact, two variants (let's call them "platform" and "language") with, respectively, two and three allowed values. You can issue "make" in the end leaf locations, and you get a flavor of "my_exe" built. This may look extreme, but many systems are fundamentally doing just this kind of unfolding of parameters, including the good old GNU Build System (the ubiquitous one based on autoconf and GNU make).

Here is how that would look in autoconf/automake. Where you have NxM makefiles already, I have one - no includes.

and what you would do on the shell:

I am not trying to convert you, but at least convince you that autotools does the right thing for those who use it. I'd be delighted to discuss and implement with you how your project could/would look like if it were to use autotools. Then you can still decide.

28 Sep 2007 01:12 Avatar jengelh

Re: Thank you

> How did it come to mind to call that dir "obj". Why not "output" or something else? And I can create next to it "obj2" and "obj3" and my colleague developer "obj4" and "obj4/second_try" and at the end of the day we will have a hard time "to sort the wheat from the chaff". It would have been more interesting to have (make --platform=m64 --type=dbg)

Yes, you can have multiple obj dirs in various locations, owned by different people, with a name you choose, and with the flags you like. `cd ~/proj; /usr/src/proj/configure --build=x86_64-unknown-linux --enable-debug`, for example, whose flags you only need to enter once instead of on each `make` invocation. "--build" is even a standardized option. While "--enable-debug" is not, it is still much more common than adding "--platform" or the like to one flavor of 'make' (you would have to add it to all flavors of Make; GNU, Solaris, BSD... - tough luck). At the same time, adding options like these to 'make' goes against the Unix principle. 'make' should do "is this target newer? rebuild", and not fiddle with platform details.

27 Sep 2007 18:59 Avatar buildsmith

Re: Thank you

> Thank you for a good laugh.

You are welcome, although I have a difficult time understanding what I can learn from you laughing at my needs, even assuming that my needs are narrow-minded.

> although make can be blamed of various things, the fact you'd be better served by something more narrow-minded does not belong among them.

Granted. I would have been better served by a more narrow-minded tool (a long time ago, before I moved to other build systems). A lot of my criticism only applies to the tool when it is used to build some files (I still believe that's the most frequent use case), and there was no word of caution in the article that it doesn't apply to other uses.

> If you run random programs (e.g. `halt') without arguments to see what they do, you deserve the burns.

You need to work a bit on your attitude. People do make mistakes. Place yourself for a second in the shoes of a developer having to go to his boss to admit a mistake. Would you like your boss to think along the lines of "you deserve the burns"? Imagine yourself for a moment in a position where you have to support 30 developers building your product. Some are MS Windows developers not even familiar with a command line. Some are external customers who are paying you. How will this "you deserve the burns" attitude help them?

> The GNU build system (autoconf, automake and other stuff) enables building software on all queer platforms with their b0rken implementations of everything (including make) without the need to install a build system. It has issues and people question this goal too, but you have to reconsider who to blame for the lowest common denominator being so low.

Really? It seems you need to open the autoconf book again and read in the first chapter what the developers of autoconf have to say about make in general and gmake in particular. One thing to remember is that the GNU build system does not address at all a few categories of limitations of make. It addresses, as you mention, portability. It also addresses easy development of the build description. It doesn't address build consistency, build speed, build debugging, and some others which are also important performance criteria for builds. For example, the make tool's timestamp heuristic badly hurts the build (except for the "build once, forget ever after" kind of casual builder). You may get a larger view on the various requirements for a build system reading here (

> And finally, make is a general dependency processor. People use it for all kinds of tasks, even to resolve the order of system service startups.
> ...
> Sometimes all targets are phony and it is a good thing.

That is true. I have seen expert systems (for pattern recognition) implemented as Makefiles for GNU make (as I already mentioned in another article on Freshmeat). Although I would rather use Prolog for implementing the rules of an expert system, I don't see anything bad in that use of make.

> How all the binary directory and target platform stuff and --clean target_B maps to this?

What's the problem? Suppose that make had an argument --source_dir= translated into a pre-defined macro $SOURCE_DIR; how would that prevent other uses of make? Why can't a Makefile simply ignore it, if it has no use for it?

