Can this build system be sped up?
Our build is dog slow. It uses nested GNU makefiles on Linux and creates three builds for three different targets from the same source tree, using symlinks to point at each of the three parallel directory trees in turn. We can do partial builds by running make inside a subdirectory, which saves time, but if our work spans multiple directories we must build for at least one of the three targets, and that takes a minimum of 45 minutes. A subdirectory-only build may take "only" 5-10 minutes.
Do you know of any quick things to check that may be bogging down this build system? For example, is there a faster alternative to symlinks?
Addition: I've seen the paper regarding recursive makefiles. Does anyone know firsthand what the effects would be of flattening a makefile system that currently has many makefiles (around 800) and over 4.5 million source lines of code? People currently enjoy being able to build just their current subdirectory or process (embedded Linux target) by running make in that directory.
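For what it's worth, flattening doesn't have to cost you per-directory builds. The usual non-recursive pattern is one top-level Makefile that includes a small fragment from each subdirectory, so make sees the whole dependency graph but each directory can still expose its own phony target. A minimal sketch (the `module.mk` name, the module list, and the per-directory target convention are all illustrative, not from your tree):

```make
# Top-level Makefile: include one fragment per subdirectory.
MODULES := drivers ui net
SRC :=                      # fragments append their sources to this

include $(patsubst %,%/module.mk,$(MODULES))

OBJ := $(SRC:.c=.o)

all: app
app: $(OBJ)
	$(CC) -o $@ $^

.PHONY: all $(MODULES)

# drivers/module.mk (fragment) would look something like:
#   SRC += drivers/spi.c drivers/uart.c
#   drivers: drivers/spi.o drivers/uart.o
```

With that layout, `make drivers` from the top still builds just that directory's objects, but against the full, correct dependency graph, so incremental builds stop rebuilding things that didn't change.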
I've just learned the build was, until recently, twice as long (wince), at which point the release engineer deployed ccache.
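Since ccache is already deployed, it's worth confirming every compile actually goes through it; with 800 makefiles, routing it through one shared include is safer than editing each file. A sketch, assuming your tree has such a common include:

```make
# In the one makefile fragment everything includes (path illustrative):
CC := ccache $(CC)

# Verify the hit rate afterwards with: ccache -s
```

If the hit rate is low, check that the three targets aren't thrashing each other's cache entries (differing flags defeat cache hits).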
Considerations for speeding up your build:
Builds tend to be I/O bound, so distribute the I/O across multiple drives/controllers or machines. For example, put the source on one physical drive and put the target (the build output) on a different physical drive, and separate both of those drives from the physical drive that contains your build tools (.NET, Java, Ant, etc.) and from the physical drive that contains your OS.
Builds often can be done asynchronously, so establish and use separate build machines (continuous integration server). Use this particularly for calculating metrics, generating docs, producing release candidates, and anything else that takes too long or is not needed on a developer's workstation.
Build tools often involve lots of process startup/shutdown overhead, so choose tools, scripts, and procedures that minimize that overhead. This is particularly true of tools like make and Ant that tend to invoke other tools as subprocesses. For example, I am moving to a Python-based build system so that I can do most of my build processing from a single process, yet still be able to easily spawn other processes when I must.
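In make specifically, one cheap win on process overhead is the flavor of variable assignment. A sketch of the difference:

```make
# Deferred (=) assignment re-runs the shell on every expansion of SRCS;
# in a large build that can mean thousands of extra fork/execs.
SRCS = $(shell find src -name '*.c')

# Simply-expanded (:=) assignment runs the shell once, at parse time.
SRCS := $(shell find src -name '*.c')

# Better still, make's built-in functions spawn no process at all:
SRCS := $(wildcard src/*.c)
```

Grepping 800 makefiles for `$(shell` used with `=` instead of `:=` is a quick, low-risk audit.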
Build tools often support skipping build steps when they can detect that the step would do nothing (e.g., don't recompile a file whose source has not changed since the last compile). However, that support is often not active by default--you may need to specifically enable it. For example, make handles this automatically for plain compiles via timestamps, but code-generation steps often lack the dependency information needed to be skipped. Of course, a corollary to this is to use incremental builds when you can (while developing on your workstation, as long as it "behaves").
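Generated code gets the same timestamp-based skipping as compiles once it's written as a rule with real prerequisites. A sketch (the `gen_tool` name and `.idl` extension are made up for illustration):

```make
# Regenerate a .c file only when its source or the generator changes.
# Listing gen_tool as a prerequisite forces regeneration after the
# tool itself is rebuilt.
%.c: %.idl gen_tool
	./gen_tool $< > $@
```

With that rule, `make foo.c` becomes a no-op when `foo.idl` and `gen_tool` are unchanged, instead of regenerating (and then recompiling) on every build.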
Build scripts can quickly and easily get very complex, so take the time to make them simple. First, separate the builds into separate projects that each build only one "artifact", such as a JAR, DLL, or EXE. This allows you to save lots of time by merely not invoking a long build that you don't need at the moment. Second, simplify each and every project build by NEVER reaching into another project--always make your project dependencies via the build artifact rather than the source. This will make each project stand-alone, and you can use "super" scripts to build various subsets of the projects at will.
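The artifact-only dependency rule above can be sketched as a "super" makefile (project names and paths are illustrative):

```make
.PHONY: all libcore app
all: app

libcore:
	$(MAKE) -C libcore libcore.a

# app depends on libcore's *artifact*, never on its source tree:
app: libcore
	$(MAKE) -C app APP_LIBS=../libcore/libcore.a
```

Because each project only sees the other's built artifact, you can build any subset independently, and a developer who hasn't touched libcore never pays for rebuilding it.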
Finally, treat your build infrastructure as a real project of its own--log bugs and feature requests against it, version it, perform official releases, and continue to refine it indefinitely. Treat it like an essential product rather than as an afterthought: your build is your lifeblood for any healthy project, it can save your buns if you care for it.
I'm not sure why symlinks are needed in your case. If you are building multiple targets from the same source, you might try putting your intermediate and target files in separate directories.
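A sketch of what that separation might look like, assuming the three targets differ only in toolchain flags (the `TARGET` values here are placeholders):

```make
# Each target's objects go into its own output directory, so no
# symlink juggling and no cross-target clobbering.
TARGET ?= arm
BUILDDIR := build/$(TARGET)

$(BUILDDIR)/%.o: %.c | $(BUILDDIR)
	$(CC) $(CFLAGS_$(TARGET)) -c $< -o $@

$(BUILDDIR):
	mkdir -p $@
```

As a bonus, the three targets can then build concurrently (`make TARGET=arm & make TARGET=x86 & ...`) since they no longer share output paths.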