Over the past year, software engineers have lived through the shock of infiltrated or intentionally broken NPM packages, supply chain attacks, long-unnoticed backdoors, the emergence of dependency confusion threats, and more. This has created a firestorm of activity around how to securely build software. Many organizations, from the Linux Foundation to the United States government, are calling for and building new practices and regulations, and one of the primary threads is around “reproducible builds.”
It’s one thing to talk about reproducible builds and how they strengthen software supply chain security, but it’s quite another to actually configure one. Concrete steps for specific languages are a far larger topic than can be covered in a single blog post, but today we’ll cover some guiding principles for designing reproducible builds.
What is a Reproducible Build?
Builds are reproducible when they give the exact same output no matter when or where they’re run. A reproducible build produces the same byte-for-byte output regardless of which computer you run it on, when you run it, or which external services are reachable from the network. This is great for both development (because reproducible builds are easy to share between different developer devices) and production (because it’s easy to verify that reproducible build artifacts have not been tampered with: just re-run the build on your own machine and check that the results match!).
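For example, verifying an artifact can be as simple as rebuilding it from the same source revision and comparing checksums. Here’s a minimal sketch; build.sh and the artifact paths are placeholders for whatever your project actually produces:

```bash
# Rebuild the artifact from the same source revision on your own machine.
./build.sh   # placeholder for your project's real build command

# Compare the freshly built artifact against the published one.
sha256sum dist/app.tar.gz published/app.tar.gz
# If the build is reproducible, both files hash to the same value.
```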
At FOSSA, we think about reproducible builds in the context of three guiding pillars:
- Repeatable builds
- Immutable environments
- Source availability
By looking at builds in light of these pillars, we’ll find that reproducible builds are more complicated than they might first seem, and we’ll see that common shortcuts like “just commit the lock file” or “just add a Dockerfile” don’t quite cut it.
Pillar 1: Repeatable Builds
Build repeatability is about what’s happening on the build machine itself. Given that our build inputs are available and the world doesn’t change at all around us, does our build produce the same output when repeated?
Deterministic Install Plans
The first, easiest, and most visible requirement in a repeatable build is deterministic dependency install plans.
In most languages, this is as simple as checking in a lock file. Modern build tools generally allow projects to express direct dependency requirements as constraints and then solve those constraints to produce an install plan (a list of dependency name-and-version pairs to install). Many of those tools also generate lock files that serialize the install plan. Developers can commit these lock files into version control so future builds will use the same dependency names and versions.
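In a Node project, for example, the workflow might look roughly like this (a sketch assuming npm 7 or later; other ecosystems have analogous commands):

```bash
# Resolve the constraints in package.json into an exact install plan and
# serialize it to package-lock.json, without installing anything.
npm install --package-lock-only

# Commit the lock file so future builds resolve the same names and versions.
git add package-lock.json
git commit -m "Pin dependency install plan"

# On other machines and in CI, install strictly from the lock file;
# npm ci fails if package.json and package-lock.json disagree.
npm ci
```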
Note that we also need the dependency builds themselves (not merely the version selections) to be deterministic, and deterministic install plans don’t quite get us there!
Deterministic Builds
Once we know what we’re going to build, our builds themselves (including both the builds of our own code and of our dependencies’ code) must actually be deterministic.
For projects without a compilation step, this might actually be a non-issue! As an example, a Node project whose dependencies are all pure JavaScript is effectively deterministic without additional work.
For projects that do have a compilation or transpilation (source-to-source compilation) step, ensuring determinism is by far the most difficult part of setting up a reproducible build. There are myriad ways in which compilation can secretly introduce non-determinism, including:
- Turing-complete programs-as-build-scripts that can arbitrarily alter compilation outputs.
- Dependency post-install scripts that can perform filesystem lookups or network calls.
- C bindings to system-installed packages, where bindings on different systems with different library headers may produce different outputs.
- Build steps that read files outside of version control.
- Build steps that use system time to generate timestamps.
- Build steps that download dependencies from the network that are not expressed in the install plan (for example, an NPM dependency that downloads a cached binary build of its C bindings from GitHub).
- Builds that change behavior based on currently set environment variables but don’t commit environment variable configurations.
Not all of these behaviors introduce non-determinism when they’re configured carefully, but getting that configuration right can be nuanced and difficult. For example, check out this blog post about non-determinism in Chromium builds. Many of these issues can be mitigated by controlling your local build environment, which we discuss in the next section.
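Before moving on, it’s worth noting that a few of these sources can be addressed directly at the command line. The sketch below shows some commonly used knobs (npm’s --ignore-scripts flag and the SOURCE_DATE_EPOCH convention are real; build.sh is a placeholder, and whether these measures are sufficient depends entirely on your build):

```bash
# Skip dependency post-install scripts, which can otherwise make network
# calls or filesystem lookups during installation.
npm ci --ignore-scripts

# Many compilers and archive tools honor SOURCE_DATE_EPOCH, substituting a
# fixed, committed value wherever they would embed the current time.
export SOURCE_DATE_EPOCH=1700000000

# Run the build with an explicit, minimal environment rather than whatever
# happens to be exported in the current shell.
env -i PATH=/usr/bin:/bin SOURCE_DATE_EPOCH="$SOURCE_DATE_EPOCH" ./build.sh
```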
Pillar 2: Immutable Environments
Even with repeatable builds, we need to ensure that our build inputs don’t change. Often, this means making sure that we’re running our builds with an immutable snapshot of the world around us.
Immutable Local Environments
We discussed above that a common source of build non-determinism is relying on “dependencies” that aren’t captured by the build tool. System libraries for C bindings are the most common example here, but other local environmental factors such as environment variable settings and files outside of version control can also impact the build.
An easy way to mitigate this issue is to run your builds inside of a known, immutable container. Container runtimes like Docker, for example, help ensure that everyone is using the same set of system dependencies, the same set of environment variables, and running on the same filesystem. It’s also easy to verify whether a container’s contents match a known-good build container, and it’s easy to completely delete and recreate containers from a known-good image if needed.
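In practice, this usually means pinning the build container to an exact image digest rather than a floating tag (a sketch; the digest below is an illustrative placeholder):

```bash
# Pull the build image by digest, not by a mutable tag like "latest".
docker pull node@sha256:0123456789abcdef...   # placeholder digest

# Run the build inside that exact image, mounting only the source tree.
docker run --rm -v "$PWD":/src -w /src \
  node@sha256:0123456789abcdef... \
  sh -c "npm ci && npm run build"
```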
Notice that we’ve been very specific about having a known container or a known container image. Just committing a Dockerfile is not enough! Why not? Because a Dockerfile by itself doesn’t describe a fully reproducible build process for a Docker image: it isn’t run against an immutable global environment.
Immutable Global Environments
Build systems often talk to external services for tasks like version resolution and dependency downloading. But external services change often.
Running apt install nodejs today will give you a different answer than it did last year, and it will probably give you a different answer next year. This is why Dockerfiles themselves don’t describe a reproducible build: running the same Dockerfile at different points in time will produce different build outputs!
The simple mitigation here is to configure your builds to specify exact versions (and ideally, exact content hashes) whenever possible so that a build in the future will use the same version as a build today. But external services can also change their behaviors unexpectedly — a truly pessimistic reproducible build will run internal mirrors for as many of its network resources as it can.
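As a concrete illustration, compare an unpinned package installation with a pinned one (the version string and hash below are illustrative placeholders):

```bash
# Unpinned: the result depends on when you run it.
apt-get install -y nodejs

# Pinned: request an exact package version.
apt-get install -y nodejs=18.19.0-1nodesource1   # illustrative version string

# Better still, pin by content: verify a downloaded artifact against a
# hash committed to version control.
echo "2f4c0...placeholder-hash...  node-v18.19.0-linux-x64.tar.gz" | sha256sum --check
```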
Pillar 3: Source Availability
Let’s assume that our builds are repeatable and the world isn’t changing underneath us. All we need now is access to our build inputs. That seems simple enough, right? Well…
Registries Sometimes Go Down
Most Node developers have now experienced at least one NPM outage, during which build pipelines without cached or mirrored NPM packages were broken. Many Node developers have also seen the left-pad and faker deletions, which broke enough of the NPM ecosystem to be effectively outages.
The only reliable way to mitigate these sorts of build breakages is to run your own package registry mirrors. Mirrors can stay online when external services become unavailable, and mirrors can continue to serve old packages when official registries have deleted them. The same principle applies to other remote services: unless you have your own mirror running, your build pipeline’s availability is only as good as the availability of that external service.
Choosing to run a service mirror is always a nuanced trade-off. On one hand, registries like NPM have dedicated engineering and operations teams with expertise in keeping these systems online. On the other hand, running a small mirror for a small set of dependencies is a dramatically easier task than running all of NPM. You should make your mirroring decisions on a service-by-service basis, taking into account historical external service reliability and your team’s build availability and staffing needs.
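For NPM specifically, pointing builds at a mirror is typically a one-line configuration change (the registry URL below is a placeholder for your internal mirror, such as a Verdaccio or Artifactory instance):

```bash
# Point npm at the internal mirror instead of the public registry...
npm config set registry https://npm-mirror.internal.example.com/

# ...or commit the setting with the project so every build picks it up.
echo "registry=https://npm-mirror.internal.example.com/" >> .npmrc
```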
Vendoring Ensures Maximal Availability
One simple way to ensure maximal availability for your project dependencies is to vendor them. Most package managers support some form of “vendoring,” which means that instead of relying on downloads from external services, we store dependency source code in version control next to our source code. For example, in Node, this might look like committing node_modules to source control.
While this solution isn’t perfect (depending on how you vendor and how your project is set up, this might add significant bloat to your version control), it’s often the simplest and easiest solution for obtaining maximal availability.
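In Node, the naive version of this really is just committing the installed tree (a minimal sketch; most project templates gitignore node_modules by default, so you’d remove that entry first):

```bash
# Install exactly what the lock file specifies...
npm ci

# ...then commit the installed dependency tree next to your own source.
git add -f node_modules
git commit -m "Vendor dependencies"
```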
Tying it All Together for a Stable and Reproducible Build
Together, these pillars ensure that:
- Our builds are repeatable.
- Our build environments are immutable.
- Our build inputs are always available.
We’re now at a really strong starting point for securely and reliably maintaining our project going forward. But it doesn’t stop there!
Given the complex and rapidly evolving nature of today's software supply chain threats, there's no single, foolproof strategy that will work 100% of the time. Reproducible builds help secure your supply chain, and tools like software composition analysis and SBOMs help you audit and monitor your build results as they run. Together, they help strengthen your defenses against supply chain threats.
I hope you found this post to be a solid foundation for your engineering organization's journey to designing more reproducible builds. Of course, at this point, you might be thinking: this seems like a lot of work; when is it worth doing?
That’s an exciting topic, and it’s one I intend to answer in a follow-up blog post within the next few weeks. In the interim, we write a lot about securely building software on the FOSSA blog and encourage you to check out these related posts:
- Log4J Vulnerability Resource Center
- Container Image Security and Vulnerability Scanning
- Defending Against Software Supply Chain Attacks
About the Author
Jessica Black is a software engineer at FOSSA. She specializes in relational databases, server software, and CLIs primarily using languages like Go, Haskell, and Rust. When not programming, she’s usually curled up with a dark fantasy book or playing an MMO.