Spend any time working on complex systems and you learn that the first line of defense against hard-to-debug problems is a reproducible process. You'd think most of the software we write would be reproducible – but in many places it isn't.
Files get overwritten. Different machines have slightly different configurations. One step of a process gets skipped. The same code compiled on two different machines produces two different binaries.
Reproducibility (in software) is the confidence that an application will behave similarly in development, test, and production.
While reproducibility would seem like a binary property – given the same input, the output is either the same or different – it is more of a spectrum. I like to think of it as a confidence interval. It's a tradeoff between eliminating certain classes of "heisenbugs" and developer velocity.
Take, for instance, a declarative build system like Bazel, the open-source version of Google's internal build system. It hasn't found significant adoption outside Google. Why? It's the trade-off between optimization and antifragility: the more precisely you specify the build, the more brittle the configuration becomes – small code changes require constant corresponding changes to your build configuration. At a large company like Google, with complex applications, the trade-off may make sense: what you lose in configuration time you make up in shared build infrastructure and reduced debugging time. But for most teams, it isn't worth it. For what it's worth, Kubernetes adopted Bazel and later removed it; TensorFlow continues to use it.
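To make the trade-off concrete, here is a minimal sketch of what Bazel's declarative style looks like (the target and file names are hypothetical). Every source file and dependency must be enumerated, which is exactly why small code changes ripple into configuration changes:

```python
# BUILD – hypothetical example of Bazel's declarative configuration.
# Every source file and dependency is listed explicitly; adding a new
# file or a new import means editing this file too.
cc_library(
    name = "parser",
    srcs = ["parser.cc"],
    hdrs = ["parser.h"],
    deps = [":lexer"],  # internal dependencies are declared, not discovered
)

cc_binary(
    name = "app",
    srcs = ["main.cc"],
    deps = [":parser"],
)
```

The payoff for this verbosity is that Bazel can cache and reproduce builds hermetically; the cost is the constant upkeep described above.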
Looking at reproducibility as a spectrum explains why all-or-nothing projects like Nix have failed to gain mainstream adoption, despite their technical strengths. Nix has a fully declarative package system – at the cost of learning a bespoke configuration language and recompiling every program. The learning curve is steep. But for many, a Makefile is "reproducible enough" for day-to-day work.
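For contrast, here is a sketch of the "reproducible enough" end of the spectrum – a plain Makefile (file names hypothetical). It pins nothing about the compiler version or system libraries, so two machines can still produce different binaries, but it reliably rebuilds only what changed:

```make
# Hypothetical Makefile: reproducible enough for day-to-day work.
# Nothing pins the toolchain or system headers, so builds on different
# machines may differ – a trade-off most teams accept for simplicity.
CC     = cc
CFLAGS = -O2 -Wall

app: main.o parser.o
	$(CC) $(CFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

clean:
	rm -f app *.o
```

Compared to the Bazel or Nix approach, almost everything here is implicit – and that implicitness is what keeps the maintenance cost near zero.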
I imagine the next class of reproducible systems will be just reproducible enough to get the job done, and no more. Antifragile but reproducible.