The Nix Package Manager
by Jonathan Ringer
This book assumes usage of the nix 3.0 (flakes) cli.
This can be enabled in nix 2.4+ by following these instructions.
This book is available on the web at https://ekala-project.github.io/nix-book/.
For changes to the book, please see https://github.com/ekala-project/nix-book.
Preface
My journey to learn Nix was only made possible by my extreme desire to master it. The path was anything but easy and predictable. And it is still a considerable hurdle for many who try to learn Nix.
The goal of this book is to provide newcomers with a more approachable document than the nix-pills series. Although nix-pills is a very good resource with many years of refinement, it is extensive and hard to follow without some prior knowledge of Nix. Pills attempts to teach a deep understanding of Nix + Nixpkgs, while this book attempts to teach relevance: the 20% of Nix you will need 80% of the time. This book hopes to provide a more recent account of Nix, with more of a focus on giving the reader intuition about what Nix is doing than on exhaustive detail.
The motivation for writing this is to provide a “Nix equivalent of the rust-lang book”: one resource that can be read end-to-end in an afternoon that is able to equip the reader with the knowledge necessary to thrive in the Nix ecosystem. The goal is not to replace any existing Nix guides or documentation, but rather provide a good starting place for new users.
Introduction
The Nix Package Manager
Nix is a package manager that focuses on capturing all inputs which contribute to building software. The result of capturing all of this information about building the software is called a derivation. This information includes where the source code is downloaded from, configuration flags, patches, dependencies, build steps, installation steps, and many other potential inputs.
This information is hashed, which allows Nix to describe and reference the exact software which it intends to use. This enables Nix to be used on any system because its assumptions do not collide with the assumptions of a host system. This means that Nix does not adhere to the traditional Filesystem Hierarchy Standard (FHS), but it also means that it's not limited to the FHS restriction of only having a single variant of a piece of software: you can have multiple versions of the same software installed, or the same version installed twice compiled with two different sets of compile flags, without conflict.
Who is Nix For
Teams of Developers
Each developer on a team needs access to the same development environment.
Development environments need to reflect the build and production environments.
When these diverge, software may fail to build or run properly after it is delivered. Having divergent development, build, and production environments is a major cause of regressions in software development. Nix can help mitigate this by allowing environments to be version controlled and maintained along with a project. Nix can also lower the onboarding time of new developers by automating installation instructions.
DevOps
Nix allows you to precisely describe the software you intend to use. Nix packages are defined by their dependencies, so they inherently retain their SBOM (Software Bill of Materials) by default. By leveraging NixOS modules, one can also create configurable services and compose them into coherent systems. The combination of Nix + NixOS allows you to have declarative configuration of both services and systems of multiple machines and architectures.
System Administrators (home to enterprise)
Nix allows you to maintain dozens to hundreds of systems: Placing a system’s configuration as code in version control enables a new paradigm of system configuration management. Atomically apply or rollback system updates for each system. Nixpkgs can be freely extended to include private additions to software.
Nix largely replaces the need for Docker. However, Nix can also be used to produce Docker images if there is a downstream technology which consumes OCI images as an interface (e.g. Kubernetes).
Power Users
Nix allows for incredibly specific or opinionated environments. Nix allows you to declare projects (flakes), user directory configuration (home-manager), or system environments (NixOS) with the exact same software. Whether you’re a software developer, or you’re tweaking the appearance of your desktop system, Nix will allow you to control and specify configuration exactly as you intend, and persist this across multiple machines.
The Nix Ecosystem
There are roughly four layers of abstraction in the official Nix ecosystem; these are:
- Nix - The domain-specific language used to write Nix expressions
- Nix - The package manager
- Nixpkgs - The official Nix package repository
- NixOS - A Linux distribution built on Nixpkgs
There are also several unofficial projects commonly used within the community. Some of these are:
- home-manager - NixOS-like user configuration for Linux or MacOS built on Nixpkgs
- Nix-darwin - NixOS-like configuration, but for MacOS
These topics will be discussed in greater detail in later sections, but a quick summary of the official projects is provided below.
Nix: The Language
The Nix language is a Domain-Specific Language (DSL) which is designed to handle package configuration. Nix can be thought of as JSON + functions + imports + some syntax sugar. Its main goal is to provide effect-free evaluation of package configuration. For that reason, Nix is restricted in many ways and lacks many features of generic programming languages. There is very limited input and output to the system; there are no loops, no concurrency primitives, and no types. What is left is a small functional programming language. After all, Nix's goal is to take a few inputs, such as a system platform, and produce a build graph which can be used as a recipe to build software.
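As a rough illustration of “JSON + functions”, consider the following small expression (all names are purely illustrative); it evaluates to a plain attribute set that could just as easily be written as JSON:

let
  # a function from a name to a greeting string
  greet = name: "Hello, ${name}!";
in {
  message = greet "Nix";
  numbers = [ 1 2 3 ];
}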
Nix: The Package Manager
The Nix Package Manager began its life as the PhD thesis work of Eelco Dolstra. The goal was to bring discipline to the software landscape. Similar to how structured programming helped tame the complexity of goto by introducing constructs such as loops and logic flow, so too does Nix attempt to tame the chaos of package management through explicit descriptions of software and their dependencies. The truly novel idea of Nix is that of the derivation. It encapsulates everything about a piece of software, and these derivations can be referenced from other derivations, constituting a directed acyclic graph (DAG) of how to build that software from source.
Nixpkgs: The Package Repository
Nixpkgs is the official package repository for the Nix community. It contains the logic on how to build over 100,000 software packages. Nixpkgs can be thought of as an expert body of knowledge on the subject of how to build software. When a user asks for the “firefox” package, the Nix package manager is able to query Nixpkgs and produce a build graph for how to build Firefox and all of its dependencies, down to the C compiler, for that user's platform. This allows for a great deal of freedom: Nix can be used on any Linux distribution and MacOS as first-class supported operating systems, and to a lesser degree on many other UNIX-like systems.
Nixpkgs is also supported by Hydra, which provides pre-built binaries of libre software for Linux and MacOS.
NixOS: The Operating System
NixOS is a non-FHS Linux distribution which leverages Nixpkgs to provide a wealth of software ready to be combined into a system environment. The concept of a Nix derivation is extended here to include service configuration and system creation. The entirety of the system is represented as a derivation which gives it many of its defining qualities such as atomic rollbacks, system-as-a-configuration-file, and extensive user configuration potential.
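To make this concrete, a minimal sketch of what a NixOS configuration module can look like is shown below (the package and service chosen here are illustrative only):

{ pkgs, ... }:
{
  # packages installed system-wide
  environment.systemPackages = [ pkgs.hello ];
  # enable a service declaratively
  services.openssh.enable = true;
}

Rebuilding the system from such a file produces a new derivation for the whole system, which is what enables the atomic rollbacks mentioned above.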
Installation
Linux and MacOS
The guided installer is the preferred way to install nix; please run the following in a shell:
sh <(curl -L https://nixos.org/nix/install) --daemon
Other Installation Methods
There are many ways to leverage nix, for more installation options, please visit the official download page.
Derivations
Derivations are the defining feature of nix. Derivations attempt to capture everything that it would take to build a package. This includes, but is not limited to: source code, dependencies, build flags, build and installation steps, tests, and environment variables. The culmination of all direct and transitive build dependencies is commonly referred to as the derivation's “build closure”. The more dependencies a package refers to, the more packages will need to be created in order to attempt a build. Generally, dependencies of a derivation are other derivations.
Types of Derivations
Fixed Output Derivations (FODs)
These are the “leaves” of any build closure, in that they do not refer to other derivations. These derivations are defined by their content. They are easily differentiated because they will contain a sha256 (or other hash) which is used to enforce that an artifact is reproducible.
One critical difference from evaluated derivations is that Fixed-Output derivations are able to have access to the network while fetching contents. This “impurity” is offset by enforcing that the hash matches, and reproducibility is delegated to the process which fetches the assets.
Many of the fetch* utilities in nixpkgs and nix’s builtins will create FODs.
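For example, a sketch of such a fixed-output fetch could look like the following (the URL is illustrative and the hash is elided):

fetchurl {
  url = "https://example.com/source-1.0.tar.gz";
  # the fetch may access the network, but the result must match this hash
  sha256 = "...";
}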
Input-Addressed Derivations
Input-Addressed derivations are generally what are referred to when the term derivation is used. These derivations are defined by all of the dependencies, build phases, and flags present during a build. Nix captures all of the variables which constitute a derivation and uses a cryptographic hash to give each derivation a unique name.
stdenv.mkDerivation and related build* helpers will create an input-addressed derivation.
Content-Addressable Derivations (CA Derivations)
NOTE: CA Derivations are still considered experimental at the time of writing
Content-Addressable (CA) derivations are a hybrid of both FOD and IA derivations. The problem which CA derivations address is rebuilds. In the IA derivation model, a patch to openssl will cause all downstream packages to rebuild, since that derivation will propagate the patch change across all consumers. Under CA derivations, nix can determine that a consuming package which was built before the openssl patch has remained unchanged, with the only exception being where openssl is located in the nix store. In this case the package which uses openssl is “the same” in usage; the only thing which has changed is which variant of openssl it uses. Nix is then free to assert an equivalence of the package before and after the openssl patch; thus, it doesn't need to rebuild all packages, just update the references to openssl.
The name Content-Addressable comes from the fact that the implementation will stub out nix store paths and use this normalized content to compare against other builds. Now nix can deduplicate builds which were done previously. In the openssl example, the build of curl will likely be exactly the same; thus any package which just consumes curl will not have to be rebuilt. Only the references to the new variant of curl need to be updated.
CA derivations are an opt-in experimental feature, but don’t require the user to alter their existing workflows.
Create a Derivation
Before a package is built, a derivation must be created. The derivation can be thought of as the unambiguous definition of how to build a package. The process of creating a derivation is called “instantiation”, or sometimes also referred to as evaluation (although this is more general). Every package in nixpkgs has a corresponding derivation. This means that we can create and inspect the derivation for anything exposed in nixpkgs. An example would be:
$ nix-instantiate '<nixpkgs>' -A hello
/nix/store/byqskk0549v1zz1b2a61lb7llfn4h5bw-hello-2.10.drv
# or using flakes, nix>=2.4
$ nix eval nixpkgs#hello.drvPath
"/nix/store/byqskk0549v1zz1b2a61lb7llfn4h5bw-hello-2.10.drv"
Inspect the contents of a derivation
To inspect the contents of the drv, one can use the nix derivation show (previously, nix show-derivation) utility.
$ nix derivation show /nix/store/byqskk0549v1zz1b2a61lb7llfn4h5bw-hello-2.10.drv
{
"/nix/store/byqskk0549v1zz1b2a61lb7llfn4h5bw-hello-2.10.drv": {
"outputs": {
"out": {
"path": "/nix/store/f4bywv8hjwl0ckv7l077pnap81h6qxw4-hello-2.10"
}
...
Defining characteristics of a derivation
There are a few important features of a derivation:
- It’s a description of how to build the package, generally from source
- The output paths are determined before the build begins
- All dependencies are resolved as part of instantiation, and may have similar derivation descriptions of their own dependencies
- Any additional flags (makeFlags, configuration flags, cflags, or ld flags) are explicitly stated
- There’s no ambiguity. The system, architecture, and other options have been resolved.
- It’s immutable. If you want to change a derivation, you need to evaluate a new one.
- It’s unique. The hashing scheme ensures that a given set of inputs produces exactly one derivation; if two derivations have the same hash, then they are exactly the same in every way.
Realise a derivation
Building a derivation is referred to as “realisation”. A derivation is just an abstract description of a package, based upon what it requires to build. Creating a derivation can be thought of as drafting a blueprint, while realisation is the construction of the desired object. Taking from the previous example, one can build a derivation like so:
$ nix-store --realise /nix/store/byqskk0549v1zz1b2a61lb7llfn4h5bw-hello-2.10.drv
...
/nix/store/f4bywv8hjwl0ckv7l077pnap81h6qxw4-hello-2.10
# or in nix flakes:
$ nix build nixpkgs#hello
...
/nix/store/f4bywv8hjwl0ckv7l077pnap81h6qxw4-hello-2.10
Here, the GNU hello project was built and installed at the output path. This includes: the executable binary, documentation, and locale info.
$ tree -L 2 /nix/store/f4bywv8hjwl0ckv7l077pnap81h6qxw4-hello-2.10
/nix/store/f4bywv8hjwl0ckv7l077pnap81h6qxw4-hello-2.10
├── bin
│ └── hello
└── share
├── info
├── locale
└── man
The nix-build and nix build commands will perform both instantiation and realisation.
These are the most common commands used when iterating on packages. One could also do:
$ nix-build '<nixpkgs>' -A hello
# these are the same, nix build is just much more concise
$ nix-store --realise $(nix-instantiate '<nixpkgs>' -A hello)
Note: Many other commands will also realise a derivation
as part of a workflow. Some examples are:
nix-shell, nix shell, nix-env, nix run, and nix profile.
These commands are very goal-oriented and differ
significantly in how they leverage nix; often realisation
is a side-effect of achieving that goal.
Using Derivations from Nix
The derivation is the main abstraction of nix. All of Nixpkgs and NixOS is created by leveraging derivations to create new derivations, scripts, services, and even entire Linux distributions. The ability to compose these use cases with uniquely named packages gives nix the freedom to aggressively share common dependencies, while allowing the flexibility to have potentially incompatible packages available on the same system.
The nix language allows for consumption of derivations to be quite transparent. For example:
$ cat hello.nix
let
pkgs = import <nixpkgs> { };
in
pkgs.writeScriptBin "greet.sh" ''
${pkgs.hello}/bin/hello -g "Hello $USER!"
''
$ nix-build hello.nix
this derivation will be built:
/nix/store/xd9qpwnvybm9p8k2szhkvpd2ym85is9p-greet.sh.drv
building '/nix/store/xd9qpwnvybm9p8k2szhkvpd2ym85is9p-greet.sh.drv'...
/nix/store/h8yxaciazc8basn9l335bmdrpfak0aqk-greet.sh
$ cat ./result/bin/greet.sh
/nix/store/mg35qkhk7wqbhhykpakds4fsm1riy8ga-hello-2.12.1/bin/hello -g "Hello $USER!"
$ ./result/bin/greet.sh
Hello jon!
We created a greet.sh script which will greet the user.
Nix first created the “derivation” (build plan) of our script at
/nix/store/<hash>-greet.sh.drv, and then realised (built) the derivation as
/nix/store/<hash>-greet.sh.
We can see from the contents of the resulting file that
pkgs.hello was substituted for the realised output path.
This allows us not to worry about what the unique name
of the derivation will be, but rather about the
contents post-realisation.
Although this may not seem markedly better than other package management workflows (“please install these tools, then run this script”), there is quite a lot of benefit to leveraging nix, whether it's to create scripts or to build more software:
- Use of exact versions which you control
- For example, which version of python or node do you have?
- No longer dependent on the state of the consuming system
- For example, do you have python installed?
- Use of multiple versions of the same software
- Want to use NodeJS v14 in one script, but NodeJS v16 in another? No problem.
Although many ecosystems have ecosystem-specific solutions to these problems (e.g. tox for python, nvm for node), nix provides a universal abstraction for native dependencies and any downstream dependencies.
Use of “outPath” as a toString
This is one of the oddities of nix: stringification of an attr set which contains a key “outPath” will return the contents of the “outPath” key. Since all derivations have an outPath, any usage of them in a string will yield the store path that they create.
nix-repl> a = { outPath = "foo"; }
nix-repl> "${a} bar"
"foo bar"
The Nix Language
Nix is a pure, lazy, functional language which serves as a domain-specific language (DSL) for writing nix derivations and expressions. In general, nix can be thought of as JSON with functions.
The goal of Nix is to facilitate the creation of a derivation. In most situations
nix is given a small amount of input and is expected to produce a result (usually a derivation).
In the case of nixpkgs, the workflow is generally: “given that the user has an x86_64-linux
device and the information held within nixpkgs, the desired package will be
/nix/store/<some hash>-<package>”. This is also why the word evaluation is commonly used
when referring to nix packages; the nix expression which describes how to build
software can evaluate to its final reduced state given just a system platform.
Nix Language Basics
Primitive Values
These values are mostly similar to JSON:
| Type | Description | Example |
|---|---|---|
| integer | Whole number | 1 |
| float | Floating point number | 1.054 |
| string | UTF-8 string | “hello!” |
| path | File or URL | ./default.nix |
NOTE: Paths are special. They will be resolved relative to the file. The value must contain a “/” to be considered a path, however, it’s common to construct the value starting with “.” to avoid confusion (e.g. ./foo/bar vs foo/bar). If a path is referenced as part of a package, that path will be added to the nix store, and all references to that path will be substituted with the nix store path.
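For example, interpolating a path into a string copies the file into the nix store and substitutes the resulting store path (the hash is shortened here for readability):

nix-repl> "config lives at ${./default.nix}"
"config lives at /nix/store/<hash>-default.nix"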
Strings
Nix exposes two ways to express strings. Strings are enclosed with double quotes: "hello".
This works well for small strings, such as simple flags. However, it’s common to write a
block of commands which need to be executed; for this, nix also has multi-line support with
the “lines” construct. “lines” are denoted by two single quotes.
Example usage of lines:
postPatch = ''
./autogen.sh
mkdir build
cd build
'';
Here, we have postPatch being assigned a series of commands to be run as part of a build.
Another quality of lines is that all shared leading whitespace will be stripped. This allows lines blocks to be adjusted to the indentation of the parent nix expression without influencing the contents of the string.
$ cat lines.nix
''
  2 spaces
   3 spaces
    4 spaces
''
$ nix eval -f lines.nix
"2 spaces\n 3 spaces\n 4 spaces\n"
Lists
Lists work similarly to most other languages, but are whitespace delimited. [ 1 2 ] is an
array with elements 1 and 2.
Note: For oddities around lists and elements which use whitespace, please see list common mistakes.
Attribute Set (Attr set)
This can be thought of as a dictionary or map in most other languages. The important distinction is that the
keys are always ordered, so that key order doesn't influence the hash a derivation will produce. Attr set
values do not need to be of the same type. Attr sets are constructed using an = sign which denotes key-value
pairs, each terminated with a semicolon ;, and the attr set is enclosed with curly braces { }. Selection
of an attribute is done through dot-notation <set>.<key>.
nix-repl> a = { foo = "bar"; count = 5; flags = ''-g -O3''; }
nix-repl> a.count
5
# Shorthand for nested attribute sets
nix-repl> :p { foo.bar.baz = 1; foo.bar.buzz = 2; }
{ foo = { bar = { baz = 1; buzz = 2; }; }; }
You will commonly see empty attr sets in nixpkgs, an example being:
hello = callPackage ../applications/misc/hello { };
Derivations
Technically, a derivation is just an attr set which has a few special attributes
set to valid values, which nix can later realise into a build. Promotion
from an attr set to a derivation is facilitated through the builtins.derivation
function. However, directly calling the builtin is highly discouraged within
nixpkgs; instead, people are encouraged to use stdenv.mkDerivation and other
established builders, which provide many good defaults to achieve their packaging goals.
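For illustration only, a raw builtins.derivation call might look like the sketch below (on Linux the build sandbox provides a minimal /bin/sh; real packages should reach for stdenv.mkDerivation instead):

builtins.derivation {
  name = "raw-example";
  system = builtins.currentSystem;
  builder = "/bin/sh";
  # $out is the store path nix allocates for this derivation's output
  args = [ "-c" "echo hello > $out" ];
}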
If / Else logic
As in many other functional programming languages, you cannot
use if without an accompanying else clause. This is because
the expression needs to return a value, not just follow a code
path.
extension = if stdenv.isDarwin then
".dylib"
else
".so";
Note: The proper way to find the shared library extension
within nixpkgs is hostPlatform.extensions.sharedLibrary.
Let expressions
Let expressions are a way to define values to be used later in a given ‘in’ scope. Generally these are used to alter a given value to conform to a slightly different format. Let expressions can refer to other values defined in the same let scope. For Haskell users, let expressions work much as they do in Haskell.
src = let
# e.g. 3.1-2 -> 3_1_2
srcVersion = lib.strings.replaceStrings [ "." "-" ] [ "_" "_"] version;
srcUrl = "https://example.com/download/${pname}-${srcVersion}.tar.gz";
in fetchurl {
url = srcUrl;
sha256 = "...";
};
With expressions
A with expression allows the values of an attr set to be exposed by their key names.
# before
meta = {
license = lib.licenses.cc0;
maintainers = [ lib.maintainers.jane lib.maintainers.joe ];
platforms = lib.platforms.unix;
};
# after
meta = with lib; {
license = licenses.cc0;
maintainers = with maintainers; [ jane joe ];
platforms = platforms.unix;
};
Laziness
Many pure functional programming languages share the feature that the evaluation model of the language is lazy. This means that the values of a data structure aren't computed until needed. The benefit for nix is that evaluating a package doesn't mean computing all packages, but only computing the dependency graph for the packages requested. In practice this means limiting the scope of an action from 80,000+ possible dependencies to just the dependencies explicitly mentioned by the nix expressions.
Laziness isn't a hard requirement for nix to work; the purity model of nix makes laziness more of a natural consequence than an explicit design goal. However, it does enable many implicit benefits such as memoization.
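A small repl example illustrates the laziness: the unused binding below is never forced, so its throw never fires:

nix-repl> let x = builtins.throw "never evaluated"; in "lazy"
"lazy"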
Functions
Nix only has unary functions: unary functions are functions which accept only one parameter. However, in combination with currying, you can create functions which take an arbitrary number of parameters.
Functions can be treated as values, and freely passed to other functions as such. To name a function, it just needs to be assigned to a variable, much as you would do with a literal.
Function examples:
# creation, and immediate application of a nameless function
nix-repl> (x: x + 2) 3
5
# assigning a function to a variable, then later applying it
nix-repl> addTwo = x: x + 2
nix-repl> addTwo 3
5
# two parameters
nix-repl> sumBoth = x: y: x + y
nix-repl> sumBoth 2 5
7
Attr sets as inputs
Nix also heavily uses attr sets to pass around many arguments. In nixpkgs, this is most commonly used to express what subset of packages and utilities should be used for a nix expression. It’s also useful when a large context for a function is needed, and an ordered list of parameters is a poor fit.
Attr sets as inputs are also particularly good when the function can provide good defaults, and only a small subset of inputs is expected to be overridden.
Function examples:
# function which takes an attr set
nix-repl> addTwo = { x }: x + 2
nix-repl> addTwo { x = 3; }
5
# function which takes optional attr set values
nix-repl> addTwoOptional = { x ? 4 }: x + 2
nix-repl> addTwoOptional { }
6
nix-repl> addTwoOptional { x = 5; }
7
# same as above, but binding the entire attr set to another variable
nix-repl> addTwoOptional = { x ? 4 }@args: args.x + 2
nix-repl> addTwoOptional { x = 6; }
8
Note: The @ syntax is not very common for most nix expressions.
Its most common use case is “helpers”, which only care about a
subset of arguments, and will then call another function with some
of the inputs pruned. A good example of this is the pkgs.fetchFromGitHub
fetcher, which knows how to translate owner, repo, rev, and
other options into a call to builtins.fetchzip or builtins.fetchgit.
Imports and callPackage
Import
import is one of the few keywords in nix. It allows for a file to be read and evaluated. If a directory is passed to import,
then it will assume <directory>/default.nix is the desired file.
$ cat data.nix
{ a = "foo"; b = "bar"; }
nix-repl> :p import ./data.nix
{ a = "foo"; b = "bar"; }
$ cat expression.nix
5+2
nix-repl> :p import ./expression.nix
7
This still extends to functions:
$ cat function.nix
{ x, y }: x + y
nix-repl> :p import ./function.nix { x = 2; y = 9; }
11
Imports for packages
In nixpkgs, each package usually has a corresponding
file associated with the packaging and related concerns of just that package.
Early in nix’s history, import was used to integrate the files with other expressions and allow for greater organization of code.
However, the import model is quite explicit, and requires users
to declare the dependencies twice.
Below is an example expression for openssl:
# pkgs/libraries/openssl/default.nix
{ lib, stdenv, fetchurl, perl }:
stdenv.mkDerivation {
...
}
In the import paradigm, the calling site would look like:
openssl = import ../libraries/openssl {
inherit lib stdenv fetchurl perl;
};
Obviously this isn't ideal. The dependencies need to be referred to
three times: once at the call site, once as inputs to the expression,
and then within the expression where they are used.
The tediousness of passing the values will be solved by callPackage.
CallPackage
callPackage is a function which will call a function with the
appropriate dependencies. The package set will generally expose
a callPackage function with the current package set already bound.
A minimal callPackage implementation can be thought of as:
# <nixpkgs>/lib/customisation.nix
# callPackageWith :: Attr Set -> (Attr Set -> drv) -> Attr Set -> drv
callPackageWith = autoArgs: fn: args:
# autoArgs - Attr set of "defaults", for nixpkgs this would be all top-level packages
# fn - A nix expression which uses an attr set as an input.
# args - Overrides to the defaults in autoArgs
let
# if a file is passed, import it
f = if lib.isFunction fn then fn else import fn;
# find what attrs are shared from expression and package set
# then override the values by anything passed explicitly through args
fargs = builtins.intersectAttrs (lib.functionArgs f) autoArgs // args;
in
f fargs; # With nix, creation of a derivation is just function application
Usage of callPackage would look something like this:
# <nixpkgs>/pkgs/top-level/all-packages.nix
{ lib, ... }:
let
self = with self; {
...
callPackage = lib.callPackageWith self;
openssl = callPackage ../libraries/openssl { };
};
in self
With callPackage we only need to explicitly pass an attr set
if we need to override the default values that would have been
present in the package set.
In nixpkgs, callPackage has been extended to include helpful
package hints, and thus the complexity has grown, but the
underlying intuition has remained the same.
In JavaScript terms, callPackage would be an example of a partially-applied (curried) function,
where there's an implicit package set bound to it.
Best Practices
Avoid excessive with usage
Although with can be useful in small scopes, doing something such as
with pkgs; is usually discouraged. This is most dramatic with pkgs,
in which you will introduce 15,000+ variables into your namespace. Although
you may be aware of what is coming from where when you first write the code,
this implicit context is much harder to re-create each time the expression
is visited in the future. This is compounded with multiple with expressions,
as later with’s will shadow previously defined values.
This is not to say that all usage of with is discouraged, it’s often
encouraged with certain tasks such as defining the meta section of
a package; as most attributes of a meta section will be pulling from
lib. So a meta = with lib; { ... } can dramatically reduce how
many lib. prefixes need to be explicitly added. Also, it's very common for
NixOS modules to use with lib; for the whole file as many of the
module building blocks are exposed through lib.
In general, with should be scoped as much as possible:
# good
stdenv.mkDerivation {
...
buildInputs = [ openssl ]
++ (with xorg; [ libX11 libXrandr xinput ]);
}
# also good, just repetitive
stdenv.mkDerivation {
...
buildInputs = [ openssl xorg.libX11 xorg.libXrandr xorg.xinput ];
}
# discouraged, now all of xorg is exposed everywhere
with xorg;
stdenv.mkDerivation {
...
buildInputs = [ openssl libX11 libXrandr xinput ];
}
Common Mistakes
Functions
The space after : is required. Without a space, nix will
parse the value as a URL and represent it as a string.
nix-repl> :t x: x
a function
nix-repl> :t x:x
a string
Lists
Function application and list elements both use whitespace as a separator; however, list element delimitation takes precedence over function application.
For example, if someone were to try and use optional python integration on a package, they may write something like:
extraPackages = [
somePackage.override { withPython = true; }
];
In this example, the list has two elements: somePackage.override, which is a function, and an attr set.
This is more accurately represented as:
extraPackages = [
(somePackage.override) # type: Attr -> drv
({ withPython = true; }) # type: Attr
];
The correct usage of this would be:
extraPackages = [
(somePackage.override { withPython = true; }) # type: drv
];
Building a Nix Package
Building a package for nix can range from trivial to near impossible. Generally the difference between the two experiences is determined by how many assumptions the build process makes. Toolchains which have strong integrity guarantees (e.g. lock files) and allow for offline builds are generally more nix-compatible.
Nix is language and toolchain agnostic. Support for many
toolchains has been added to nixpkgs, but the nix build
environment is very constrained, so many <toolchain>2nix tools have
arisen to try and bridge the gap in expectations.
Simple C program
Many fundamental unix tools are written in C, as it provides many benefits to system programmers. In this section we will cover how to compile and package a simple C application to demonstrate how the nix build process works.
Impure build and install
Given the example C program:
$ cat simple.c
#include <stdio.h>
int main() {
printf("Hello from Nix!");
}
The build and installation of this on a traditional FHS system may look like:
# build
$ gcc simple.c -o hello_nix
# install
$ sudo cp hello_nix /usr/bin/hello_nix
However, let's see how this would be done in nix.
Nix build
Implicit in the previous workflow was the availability of the GNU C Compiler and
the usage of the cp command. In many package repositories, usage of these tools
is near universal and forms the foundation for how to build most other software.
Although C compilers and GNU's coreutils (where cp comes from) have their
own specific packages in nixpkgs, they are generally aggregated into a pseudo-package
called stdenv. The function stdenv.mkDerivation provides:
- A nixpkgs-compatible wrapped C compiler (GCC on linux, Clang on MacOS)
- GNU coreutils
- A default “builder” script
stdenv will be covered in more detail in the next section.
A nixified version of the build would look like:
# simple.nix
let
pkgs = import <nixpkgs> { };
in
pkgs.stdenv.mkDerivation {
name = "hello-nix";
src = ./.;
# Use $CC as it allows for stdenv to reference the correct C compiler
buildPhase = ''
$CC simple.c -o hello_nix
'';
}
Nix defaults to a Makefile workflow unless specified otherwise,
so stdenv will default to calling make install for the installPhase. This will
fail with No rule to make target 'install', so we also need
to fix how nix will install the package.
$ nix-build simple.nix
this derivation will be built:
/nix/store/dbavzdq1idb0hvwdh7r9gfn2l52kvycf-hello-nix.drv
...
install flags: SHELL=/nix/store/3j918i1nbwhby0y38bn2r438rjhh8f4d-bash-5.1-p16/bin/bash install
make: *** No rule to make target 'install'. Stop.
error: builder for '/nix/store/dbavzdq1idb0hvwdh7r9gfn2l52kvycf-hello-nix.drv' failed with exit code 2;
Nix install
The second glaring problem in the old workflow is that we relied on a convention of installing the executable into /usr/bin/. But installing software in a central location is one of the issues that nix is trying to solve. Instead, nix installs files on a per-package basis, so the installation location changes for every package. So how do we know where to install files with nix?
Nix will bind the values defined in the derivation to environment variables
inside of the nix build. The default “output” of a package is out, which will be bound
to the hashed nix store path mentioned in the derivation section.
So an adjusted workflow would be:
# build
$ gcc simple.c -o hello_nix
# install
$ mkdir -p $out/bin
$ cp hello_nix $out/bin/hello_nix
Extending the example above, the easiest solution would be to write our own installPhase. The
resulting expression would be:
# simple.nix
let
pkgs = import <nixpkgs> { };
in
pkgs.stdenv.mkDerivation {
name = "hello-nix";
src = ./.;
buildPhase = ''
$CC simple.c -o hello_nix
'';
installPhase = ''
mkdir -p $out/bin
cp hello_nix $out/bin/hello_nix
'';
}
Now when we build the package, nix is able to realise it, after which we can use the executable:
$ nix-build simple.nix
this derivation will be built:
/nix/store/9j274i4wckn0ksxpj7asd8vbk67kfz4p-hello-nix.drv
...
/nix/store/giwy9rwzwsdvh86pvdpv37lkwms7xcx9-hello-nix
$ ./result/bin/hello_nix
Hello from Nix!
Stdenv
stdenv provides a foundation for building C/C++ software with nixpkgs. It includes, but is not limited to,
tools such as: a C compiler and related tools, GNU coreutils, GNU awk, GNU sed, findutils, strip, bash, GNU make, bzip2, gzip,
and many more. Stdenv also provides a default “builder.sh” script which will perform the build of a package. The default builder script
is composed of many smaller “phases” which package maintainers can alter slightly as needed. The goal of stdenv is to enable most C/C++ + Makefile workflows; in theory, if a software
package has these installation steps:
./configure # configurePhase, optional
make # buildPhase
make install # installPhase
Then the only necessary changes for it to work with stdenv.mkDerivation would be the inclusion of
installFlags = [ "PREFIX=$(out)" ]; to communicate where the package should be installed with nix.
Unique qualities of Nixpkgs’ Stdenv
Wrapped C Compiler
stdenv exposes a wrapped compiler to help communicate nix-specific information to the compiler without
having to rely on the upstream maintainer to expose such allowances in configuration. For example,
let's assume that a package doesn't officially support MacOS, so all testing and building
occurs with Linux + GCC. Trying to package this for MacOS might be difficult because the build logic
may call gcc directly and assume GCC-specific behavior. The wrapped compiler helps smooth over such differences:
- Nix differences, such as injecting the store paths of dependencies into compiler and linker search paths
- A wrapped compiler entry point (cc) that behaves the same regardless of the underlying toolchain
- stdenv shell functions and hooks that interact with the wrapper
Phases
As mentioned in the stdenv section, stdenv.mkDerivation provides
a default builder script that is composed of smaller units of work called “phases”. Each phase
handles a specific part of the build process: unpacking source code, configuring the build,
compiling, running tests, and installing the final artifacts.
Understanding phases is crucial to packaging software with nix, as they provide standardized extension points where package-specific logic can be injected.
Standard Phases
The standard phases executed by stdenv.mkDerivation, in order, are:
- unpackPhase - Unpacks source archives
- patchPhase - Applies patches to source code
- configurePhase - Runs configuration scripts (e.g., ./configure)
- buildPhase - Compiles the software (e.g., make)
- checkPhase - Runs test suites (e.g., make check)
- installPhase - Installs files to $out (e.g., make install)
- fixupPhase - Post-processes installed files (e.g., stripping binaries)
- installCheckPhase - Runs tests on installed outputs
- distPhase - Creates distribution artifacts (rarely used)
Not all phases run by default. For example, checkPhase only runs if doCheck = true;
is set, and installCheckPhase only runs if doInstallCheck = true; is set.
Customizing Phases
There are several ways to customize the build process by modifying phases:
Overriding an Entire Phase
You can completely replace a phase by setting the corresponding attribute:
stdenv.mkDerivation {
name = "example";
src = ./.;
buildPhase = ''
$CC simple.c -o program
'';
installPhase = ''
mkdir -p $out/bin
cp program $out/bin/
'';
}
When you override a phase, you are responsible for implementing all the logic for that phase. The default implementation is no longer used.
Extending Phases with Hooks
Rather than replacing an entire phase, you can inject additional commands before or after a phase using pre/post hooks:
stdenv.mkDerivation {
name = "example";
src = ./.;
# Run before the configure phase
preConfigure = ''
./autogen.sh
'';
# Run after the build phase
postBuild = ''
echo "Build completed successfully!"
'';
# Run after the install phase
postInstall = ''
# Remove unnecessary files
rm -rf $out/share/doc
'';
}
Every phase supports both pre<Phase> and post<Phase> hooks. This approach
is preferred when you only need to add supplementary steps rather than
completely changing how a phase works.
Disabling Phases
Some phases can be disabled by setting them to an empty string or by using control variables:
stdenv.mkDerivation {
name = "example";
src = ./.;
# Disable the configure phase
configurePhase = ":"; # ":" is a shell no-op
# Disable the check phase
doCheck = false;
}
Common Phase Patterns
The unpackPhase
By default, unpackPhase automatically detects and unpacks common archive formats
(.tar.gz, .tar.bz2, .zip, etc.). The source archive is specified via the src attribute.
If you need custom unpacking logic:
stdenv.mkDerivation {
name = "example";
src = fetchurl {
url = "https://example.com/source.custom";
sha256 = "...";
};
unpackPhase = ''
runHook preUnpack
mkdir source
cd source
custom-unpack-tool $src
runHook postUnpack
'';
}
Note: runHook calls are important to preserve pre/post hook functionality
when overriding phases.
The patchPhase
The patchPhase applies patches specified in the patches attribute. See
the patching section for more details.
If you need to modify source files before building:
stdenv.mkDerivation {
name = "example";
src = ./.;
postPatch = ''
# Fix hardcoded paths
substituteInPlace Makefile \
--replace /usr/bin $out/bin
'';
}
The configurePhase
The default configurePhase runs ./configure with appropriate flags if a
configure script is found. You can customize it with:
stdenv.mkDerivation {
name = "example";
src = ./.;
# Additional flags for ./configure
configureFlags = [
"--enable-feature-x"
"--disable-feature-y"
];
# Or override entirely
configurePhase = ''
runHook preConfigure
cmake -DCMAKE_INSTALL_PREFIX=$out .
runHook postConfigure
'';
}
The buildPhase
The default buildPhase runs make if a Makefile is present. Common customizations:
stdenv.mkDerivation {
name = "example";
src = ./.;
# Additional flags for make
makeFlags = [
"VERBOSE=1"
"PREFIX=$(out)"
];
# Set number of parallel jobs
enableParallelBuilding = true;
# Or override entirely for non-make builds
buildPhase = ''
runHook preBuild
python setup.py build
runHook postBuild
'';
}
The checkPhase
Tests are not run by default. Enable them with:
stdenv.mkDerivation {
name = "example";
src = ./.;
doCheck = true;
# The default runs "make check" or "make test"
# Override if needed:
checkPhase = ''
runHook preCheck
python -m pytest tests/
runHook postCheck
'';
}
The installPhase
The default installPhase runs make install. If the package doesn’t support
this, you’ll need to implement it manually:
stdenv.mkDerivation {
name = "example";
src = ./.;
installPhase = ''
runHook preInstall
mkdir -p $out/bin
cp my-program $out/bin/
mkdir -p $out/share/man/man1
cp docs/my-program.1 $out/share/man/man1/
runHook postInstall
'';
}
Advanced Phase Control
Running Commands in All Phases
Sometimes you need to ensure certain environment variables or setup occurs in every phase:
stdenv.mkDerivation {
name = "example";
src = ./.;
# Set environment variables
NIX_CFLAGS_COMPILE = "-DSPECIAL_FLAG";
# Or use setupHook for more complex logic
}
Phase Debugging
When a build fails, it can be helpful to understand which phase failed and what commands were executed. Nix provides some debugging aids:
# Build with verbose output
$ nix-build --verbose simple.nix
# Enter a build environment to debug interactively
$ nix-shell simple.nix
# Inside nix-shell, run phases manually:
$ unpackPhase
$ cd $sourceRoot
$ patchPhase
$ configurePhase
$ buildPhase
Skipping Phases
While it’s possible to skip phases by setting them to ":", it’s generally
better to be explicit:
stdenv.mkDerivation {
name = "example";
src = ./.;
# Preferred way to skip configure
dontConfigure = true;
# Preferred way to skip build
dontBuild = true;
# Preferred way to skip fixup
dontFixup = true;
}
Alternatively, you can also specify the phase explicitly. However, this practice is generally frowned upon.
stdenv.mkDerivation {
name = "example";
src = ./.;
phases = [ "installPhase" ];
installPhase = ''
install -D -m 755 ${./script.sh} $out/bin/script.sh
'';
}
For trivial builds, generally pkgs.runCommand is preferred.
runCommand "example" { } ''
install -D -m 755 ${./script.sh} $out/bin/script.sh
'';
Best Practices
- Use hooks when possible: Prefer postPatch, preBuild, etc. over completely overriding phases. This preserves default behavior and makes your package more maintainable.
- Include runHook calls: When overriding phases, always include runHook pre<Phase> and runHook post<Phase> to preserve extensibility.
- Use phase-specific attributes: Prefer configureFlags, makeFlags, installTargets over custom phase implementations when possible.
- Keep phases focused: Each phase should do one thing. Don't put build logic in installPhase or installation logic in buildPhase.
- Test your package: Enable doCheck = true; when possible to ensure the package builds correctly and passes its test suite.
Example: Complete Package with Multiple Phase Customizations
Here’s a practical example showing multiple phase customizations:
{ stdenv, fetchFromGitHub, cmake, pkg-config, openssl }:
stdenv.mkDerivation rec {
pname = "example-tool";
version = "1.2.3";
src = fetchFromGitHub {
owner = "example";
repo = "tool";
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ cmake pkg-config ];
buildInputs = [ openssl ];
# Patch phase customization
postPatch = ''
# Fix hardcoded paths in the source
substituteInPlace src/config.h \
--replace /usr/share/example $out/share/example
'';
# Configure phase customization
cmakeFlags = [
"-DENABLE_TESTS=ON"
"-DUSE_SYSTEM_OPENSSL=ON"
];
# Build phase customization
enableParallelBuilding = true;
preBuild = ''
# Generate version file
echo "${version}" > version.txt
'';
# Check phase customization. For cmake builds, this will run CTest.
doCheck = true;
# Install phase customization
postInstall = ''
# Install additional documentation
mkdir -p $out/share/doc/example-tool
cp -r docs/* $out/share/doc/example-tool/
# Remove unnecessary files
rm -rf $out/share/example/tests
'';
meta = {
description = "An example tool demonstrating phase usage";
homepage = "https://example.com/tool";
};
}
Build Dependencies
Most software depends on other libraries, tools, or frameworks to build successfully. In traditional package managers, dependencies are often implicit or managed through system-wide package databases. Nix takes a different approach: all dependencies must be explicitly declared and are isolated to each build.
This chapter covers how to specify build dependencies in nix packages, the different types of dependency inputs, and how nix makes these dependencies available during the build process.
Adding Build Dependencies
Dependencies in stdenv.mkDerivation are specified through input attributes. The most
common are buildInputs and nativeBuildInputs:
{ stdenv, fetchurl, openssl, zlib, pkg-config }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = fetchurl {
url = "https://example.com/example-1.0.tar.gz";
sha256 = "...";
};
nativeBuildInputs = [ pkg-config ];
buildInputs = [ openssl zlib ];
}
In this example:
- pkg-config is a build tool needed to find other libraries
- openssl and zlib are libraries the software links against
nativeBuildInputs vs buildInputs
The distinction between nativeBuildInputs and buildInputs becomes important
when cross-compiling, but the rule of thumb applies even for native builds:
nativeBuildInputs
These are dependencies that run on the build platform (the machine doing the compilation). They are tools used during the build process itself.
Common examples:
- Build tools: cmake, meson, autoconf, automake
- Code generators: bison, flex, protobuf
- Package discovery tools: pkg-config
- Documentation generators: doxygen, sphinx
- Compilers and interpreters: python3, perl, rustc
stdenv.mkDerivation {
pname = "app";
version = "1.0";
src = ./.;
nativeBuildInputs = [
cmake # Build system
pkg-config # Finds libraries
makeWrapper # Wrapper script generator
];
}
buildInputs
These are dependencies for the host platform (the machine that will run the built software). They are libraries and runtime dependencies that the built program will use.
Common examples:
- Libraries: openssl, zlib, sqlite
- GUI frameworks: gtk3, qt5
- Language runtimes: python3 (when the built program uses it)
stdenv.mkDerivation {
pname = "app";
version = "1.0";
src = ./.;
buildInputs = [
openssl # Library linked into the binary
zlib # Library linked into the binary
sqlite # Database library
];
}
Rule of Thumb
Ask yourself: “Does this dependency produce code that runs in my final binary, or is it just a tool used during compilation?”
- If it's linked into your binary → buildInputs
- If it's a tool used during build → nativeBuildInputs
For native (non-cross) compilation, both lists work similarly, but using the correct attribute makes your package more likely to work in all intended cases.
Propagated Dependencies
Sometimes a package needs to ensure that its dependencies are also available to anything that depends on it. This is where propagated inputs come in.
propagatedBuildInputs
When package A depends on package B, and B’s headers or modules must be available
when building anything that uses A, B should be in A’s propagatedBuildInputs.
Common scenarios:
- Header-only C++ libraries
- Libraries whose headers include other library headers
- Python/Perl modules that import other modules
- Libraries that expose their dependencies through their API
# package: mylib
{ stdenv, boost, eigen }:
stdenv.mkDerivation {
pname = "mylib";
version = "1.0";
src = ./.;
# eigen is header-only and boost headers are exposed in mylib's API
propagatedBuildInputs = [ boost eigen ];
# Users of mylib will automatically get boost and eigen
}
Now when another package uses mylib:
# package: myapp
{ stdenv, mylib }:
stdenv.mkDerivation {
pname = "myapp";
version = "1.0";
src = ./.;
buildInputs = [ mylib ];
# boost and eigen are automatically available during build
# because mylib propagated them
}
propagatedNativeBuildInputs
Similar to propagatedBuildInputs, but for build-time tools. This is less common,
but useful when a package provides build helpers that depend on other build tools.
{ stdenv, cmake, pkg-config, custom-cmake-modules }:
stdenv.mkDerivation {
pname = "my-build-helper";
version = "1.0";
src = ./.;
propagatedNativeBuildInputs = [ cmake custom-cmake-modules ];
# Packages using my-build-helper will get cmake and custom-cmake-modules
}
When to Propagate
Propagation should be used sparingly, as it increases the closure size of dependent packages. Only propagate when:
- The dependency’s headers/modules are exposed in your public API
- The dependency is required for anyone using your package
- Not propagating would cause build failures in dependent packages
Don’t propagate if the dependency is only used internally and not exposed to users of your package.
How Build Dependencies Are Made Available
Nix doesn’t rely on a global package database or system paths. Instead, it sets up environment variables that build tools use for dependency discovery.
pkg-config
For libraries that provide .pc files, the pkg-config setup hook sets PKG_CONFIG_PATH:
# During build, nix automatically sets:
export PKG_CONFIG_PATH="/nix/store/xxx-openssl/lib/pkgconfig:/nix/store/yyy-zlib/lib/pkgconfig:..."
# This allows pkg-config to find dependencies:
$ pkg-config --cflags openssl
-I/nix/store/xxx-openssl/include
$ pkg-config --libs openssl
-L/nix/store/xxx-openssl/lib -lssl -lcrypto
Many build systems (autoconf, cmake, meson) use pkg-config automatically,
so this works transparently.
CMake
For CMake packages, the cmake setup hook sets CMAKE_PREFIX_PATH:
export CMAKE_PREFIX_PATH="/nix/store/xxx-openssl:/nix/store/yyy-zlib:..."
This allows CMake’s find_package() to locate dependencies:
find_package(OpenSSL REQUIRED)
# CMake finds OpenSSL in CMAKE_PREFIX_PATH
Note: NIXPKGS_CMAKE_PREFIX_PATH was patched into Nixpkgs' CMake search path
traversal and is now the preferred way to communicate nix dependencies. This avoids
clobbering CMAKE_PREFIX_PATH, which can be altered during a build.
Other Build Systems
Different ecosystems have their own conventions:
- Python: PYTHONPATH is set when using buildPythonPackage
- Perl: PERL5LIB is set when using buildPerlPackage
- Go: GOPATH is managed automatically
The key principle is that nix sets environment variables that toolchains already understand, making the nix store paths discoverable.
Common Dependency Patterns
Libraries with Multiple Outputs
Many packages split their outputs into multiple parts to reduce closure sizes. Common outputs include:
- out - The default output (binaries, non-development files)
- dev - Development files (headers, .pc files, CMake configs, propagated build inputs)
- lib - Shared libraries (sometimes separated from out)
- bin - Executables (sometimes separated from out)
- man - Man pages
- doc - Documentation
See the multiple outputs section for more details.
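For reference, a package opts into split outputs by declaring them on its derivation. A minimal sketch (the output names follow common conventions; they are not required):

stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;
  # "out" is the default; nixpkgs' setup hooks sort conventional
  # directories (headers, man pages, etc.) into the matching outputs
  outputs = [ "out" "dev" "man" ];
}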
Referencing Specific Outputs
When a dependency has multiple outputs, you can specify which one you need:
{ stdenv, pkg-config, openssl, zlib }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
# Reference specific outputs explicitly
nativeBuildInputs = [ pkg-config ];
# Use dev output for headers during build
buildInputs = [
openssl.dev # Headers and pkg-config files
zlib.dev # Headers
];
# The lib/out outputs are automatically included in runtime dependencies
}
By default, when you reference a package in buildInputs, nix uses the dev output
if available (for build-time) and automatically tracks runtime references to lib or out.
Helper Functions for Outputs
Nixpkgs provides helper functions in lib for working with outputs:
{ stdenv, lib, openssl, zlib }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
buildInputs = [
(lib.getDev openssl) # Get dev output
(lib.getLib zlib) # Get lib output
];
# Other helpers:
# lib.getBin pkg - Get bin output
# lib.getMan pkg - Get man output
}
These helpers are more explicit than relying on default output selection and make the intent clear.
Conditional Dependencies
Dependencies can be conditional based on the platform or features:
{ stdenv, lib, openssl, util-linux }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
buildInputs = [ openssl ]
++ lib.optionals stdenv.isLinux [
util-linux
];
}
The lib.optionals function only includes the list if the condition is true.
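A quick repl check shows the behavior (assuming lib is in scope, e.g. after :l <nixpkgs> in nix repl):

nix-repl> lib.optionals false [ 1 2 ]
[ ]
nix-repl> lib.optionals true [ 1 2 ]
[ 1 2 ]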
Debugging Dependency Issues
When a build fails due to missing dependencies, here are some debugging techniques:
Check What’s Available
Enter a build environment to inspect what’s available:
$ nix-shell '<nixpkgs>' -A your-package
# Check if a library is found
[nix-shell]$ pkg-config --exists openssl && echo "Found"
# Check available paths
[nix-shell]$ echo $PKG_CONFIG_PATH
[nix-shell]$ echo $CMAKE_PREFIX_PATH
# Try to find headers
[nix-shell]$ find -L $CMAKE_PREFIX_PATH -name "openssl.h" 2>/dev/null
Verify Dependency Outputs
Check what outputs a dependency provides:
$ nix-instantiate --eval -E 'with import <nixpkgs> {}; openssl.outputs'
[ "bin" "dev" "out" "man" "doc" ]
If your build needs headers but you’re only getting the out output, you may need
to explicitly request the dev output.
Check Build Logs
Build logs show all environment variables set during the build:
$ nix-build your-package.nix 2>&1 | grep -A 10 "environment variables"
# Or build with verbose output
$ nix-build --verbose your-package.nix
Example: Complete Package with Dependencies
Here’s a realistic example showing various dependency patterns:
{ stdenv
, lib
, fetchFromGitHub
, cmake
, pkg-config
, openssl
, zlib
, curl
, sqlite
, boost
, qtbase
, wrapQtAppsHook
, enableGui ? true
}:
stdenv.mkDerivation rec {
pname = "myapp";
version = "2.1.0";
src = fetchFromGitHub {
owner = "example";
repo = "myapp";
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [
cmake
pkg-config
] ++ lib.optionals enableGui [
wrapQtAppsHook
];
buildInputs = [
openssl.dev
zlib
curl
sqlite
boost
] ++ lib.optionals enableGui [
qtbase
];
cmakeFlags = [
"-DENABLE_GUI=${if enableGui then "ON" else "OFF"}"
];
meta = with lib; {
description = "Example application with various dependencies";
homepage = "https://example.com/myapp";
license = licenses.mit;
platforms = platforms.unix;
};
}
This example demonstrates:
- Separating native and build inputs
- Using .dev output for headers
- Conditional dependencies based on features
- Using both build tools and libraries
- Proper use of helper hooks (wrapQtAppsHook)
Summary
Understanding dependency management in nix requires grasping a few key concepts:
- Explicit dependencies: All dependencies must be declared
- Build vs host: nativeBuildInputs vs buildInputs
- Propagation: When to use propagatedBuildInputs
- Environment variables: How nix makes dependencies discoverable
- Multiple outputs: Referencing dev, lib, or other outputs
- Helper functions: Using lib.getDev, lib.getLib, etc.
With these tools, you can accurately express your package’s dependency requirements and create reproducible builds that work across different platforms and configurations.
Runtime Dependencies
While build dependencies are needed to compile software, runtime dependencies are the libraries and programs that must be available when the software actually runs. In traditional package managers, these are often tracked separately through complex metadata. Nix takes a simpler approach: it automatically detects runtime dependencies by scanning the built artifacts for references to the nix store.
This chapter explains how nix identifies runtime dependencies, how to inspect them, and how to ensure programs can find their dependencies at runtime.
What Are Runtime Dependencies?
Runtime dependencies are any build inputs that remain referenced in the final installed output. When you build a package, nix scans the output files looking for references to store paths. Any store path found becomes a runtime dependency.
For example, when compiling a C program that links against OpenSSL:
{ stdenv, openssl }:
stdenv.mkDerivation {
pname = "myapp";
version = "1.0";
src = ./.;
buildInputs = [ openssl ];
buildPhase = ''
$CC main.c -o myapp -lssl -lcrypto
'';
installPhase = ''
mkdir -p $out/bin
cp myapp $out/bin/
'';
}
The compiled binary myapp will contain hardcoded paths to OpenSSL’s shared libraries:
$ ldd /nix/store/xxx-myapp/bin/myapp
libssl.so.3 => /nix/store/yyy-openssl/lib/libssl.so.3
libcrypto.so.3 => /nix/store/yyy-openssl/lib/libcrypto.so.3
libc.so.6 => /nix/store/zzz-glibc/lib/libc.so.6
Because these store paths appear in the binary, nix knows OpenSSL and glibc are runtime dependencies.
How Nix Detects Runtime Dependencies
Nix uses a straightforward but effective method to detect runtime dependencies: it scans the output files for strings that look like nix store paths.
Specifically, nix looks for the hash portion of store paths. A store path has the format:
/nix/store/<32-character-hash>-<name>
After the build completes, nix scans all files in each output for any occurrence of the 32-character hashes of the dependencies that were available during the build. If a hash is found, that dependency is marked as a runtime dependency of the output.
This works because:
- Compiled binaries contain full paths to shared libraries
- Scripts often have shebangs like #!/nix/store/xxx-bash/bin/bash
- Configuration files may reference other programs by full path
- Any file that embeds a store path will cause that path to be retained
Note: Certain formats such as .jar files are compressed, and the naive
dependency scanning will not be able to decompress the contents to search for
retained dependencies. In this case, needed dependencies will need to be added
through other means; the most common of which is to create a file in
<output>/nix-support/<any file name> with the paths needed.
The Runtime Closure
The complete set of runtime dependencies, including transitive dependencies (dependencies of dependencies), is called the “runtime closure” or just “closure”.
For instance, if your program depends on OpenSSL, and OpenSSL depends on zlib, then your program’s closure includes both OpenSSL and zlib, even though your program never directly references zlib.
Inspecting Runtime Dependencies
Nix provides several commands to inspect runtime dependencies:
Immediate Dependencies
To see direct runtime dependencies of a package:
$ nix-store -q --references /nix/store/xxx-myapp
/nix/store/yyy-openssl
/nix/store/zzz-glibc
/nix/store/aaa-gcc-libs
This shows only the packages that are directly referenced in the output.
Complete Closure
To see all runtime dependencies, including transitive ones:
$ nix-store -q --requisites /nix/store/xxx-myapp
/nix/store/yyy-openssl
/nix/store/zzz-glibc
/nix/store/aaa-gcc-libs
/nix/store/bbb-zlib
/nix/store/ccc-linux-headers
...
This lists every package in the runtime closure.
Closure Size
To see how much disk space the closure requires:
$ nix path-info -Sh /nix/store/xxx-myapp
/nix/store/xxx-myapp 256.3M
Understanding closure size is important because:
- It affects download time when using binary caches
- It determines disk usage for deployed systems
- Smaller closures mean faster container images and updates
Dependency Tree
To visualize the dependency tree:
$ nix-store -q --tree /nix/store/xxx-myapp
/nix/store/xxx-myapp
├───/nix/store/yyy-openssl
│ ├───/nix/store/bbb-zlib
│ │ └───/nix/store/zzz-glibc
│ │ └───...
│ └───/nix/store/zzz-glibc
│ └───...
└───/nix/store/zzz-glibc
└───...
The nix-tree tool is a TUI which is immensely helpful in finding dependencies in a closure.
Finding Why a Package Is in the Closure
To find out why a specific package is in your closure:
$ nix why-depends /nix/store/xxx-myapp /nix/store/bbb-zlib
/nix/store/xxx-myapp
└───/nix/store/yyy-openssl: …/openssl/lib/libssl.so.3
└───/nix/store/bbb-zlib: …/zlib/lib/libz.so.1
This shows the dependency chain that causes zlib to be included.
Note: The --precise argument will also tell you which file contains the reference and
where in that file it was found, which is useful for pinpointing how the reference
was retained.
Making Runtime Dependencies Available
Sometimes dependencies need to be available on PATH or through environment
variables when a program runs. This is common for programs that execute other
programs, or for interpreted languages.
The Problem
Consider a shell script that uses common utilities:
#!/usr/bin/env bash
# myscript.sh
grep "pattern" file.txt | sed 's/old/new/' | sort
This script assumes grep, sed, and sort are available on PATH. But in nix,
programs don’t have access to a global PATH. They need to be explicitly provided.
Solution: wrapProgram
Nix provides wrapProgram (from makeWrapper) to solve this. It creates a wrapper
script that sets up the environment before executing the actual program.
{ stdenv, lib, makeWrapper, gnugrep, gnused, coreutils }:
stdenv.mkDerivation {
pname = "myscript";
version = "1.0";
src = ./.;
nativeBuildInputs = [ makeWrapper ];
installPhase = ''
mkdir -p $out/bin
cp myscript.sh $out/bin/myscript
# Wrap the script to provide dependencies on PATH
wrapProgram $out/bin/myscript \
--prefix PATH : ${lib.makeBinPath [ gnugrep gnused coreutils ]}
'';
}
Now when myscript runs, the wrapper automatically adds grep, sed, and sort to PATH.
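Under the hood, wrapProgram moves the original file aside to a hidden .myscript-wrapped file and puts a small shell script in its place, which you can inspect when debugging (paths abbreviated, output illustrative):
$ ls -a ./result/bin/
.  ..  .myscript-wrapped  myscript
$ head -1 ./result/bin/myscript
#! /nix/store/...-bash-.../bin/bash -e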
Example: Python Script with Dependencies
A Python script that imports external modules:
{ stdenv, makeWrapper, python3, python3Packages }:
stdenv.mkDerivation {
pname = "my-python-tool";
version = "1.0";
src = ./.;
nativeBuildInputs = [ makeWrapper ];
buildInputs = [ python3 ];
installPhase = ''
mkdir -p $out/bin
cp tool.py $out/bin/my-python-tool
wrapProgram $out/bin/my-python-tool \
--prefix PYTHONPATH : "${python3Packages.requests}/${python3.sitePackages}" \
--prefix PYTHONPATH : "${python3Packages.click}/${python3.sitePackages}"
'';
}
Now the Python script can import requests and click at runtime.
Note: For Python packages, it’s better to use python3Packages.buildPythonApplication
or buildPythonPackage, which handle this automatically. Many mkDerivation helpers
which automatically wrap programs will generally also support makeWrapperArgs
which allows you to add more args outside of the defaults.
Example: Application Needing Qt Plugins
Qt applications need to find their plugins at runtime:
{ stdenv, qt5, makeWrapper }:
stdenv.mkDerivation {
pname = "qt-app";
version = "1.0";
src = ./.;
nativeBuildInputs = [ makeWrapper qt5.wrapQtAppsHook ];
buildInputs = [ qt5.qtbase ];
# wrapQtAppsHook automatically wraps Qt apps in postFixup
# But if you need manual control:
postInstall = ''
wrapProgram $out/bin/qt-app \
--prefix QT_PLUGIN_PATH : "${qt5.qtbase.bin}/${qt5.qtbase.qtPluginPrefix}"
'';
}
The wrapQtAppsHook handles this automatically for most cases.
Ensuring Hidden Dependencies Are Retained
Sometimes a dependency is needed at runtime but won’t be automatically detected because it’s not referenced in the output files. Common cases include:
Plugins Loaded Dynamically
If your program loads plugins by name at runtime, nix won’t detect them:
// main.c - won't reference libplugin.so in source
void *handle = dlopen("libplugin.so", RTLD_LAZY);
In this case, try using autoPatchelfHook and reference the libraries as
runtimeDependencies which will attempt to add the necessary DT_RUNPATH entries
which will point to the library location.
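A minimal sketch of that approach (libplugin is a hypothetical package providing the dlopen'd library):
{ stdenv, autoPatchelfHook, libplugin }:
stdenv.mkDerivation {
  pname = "plugin-host";
  version = "1.0";
  src = ./.;
  nativeBuildInputs = [ autoPatchelfHook ];
  # Not linked at build time, but added to DT_RUNPATH so dlopen can find it
  runtimeDependencies = [ libplugin ];
}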
Compressed Archives
JAR files, ZIP archives, or other compressed formats may contain references to store paths, but nix’s scanner will not attempt to decompress them to search for references.
Solution: Explicit Runtime Dependencies
You can force nix to retain dependencies by writing references to $out/nix-support:
{ stdenv, plugin1, plugin2, jdk }:
stdenv.mkDerivation {
pname = "app-with-plugins";
version = "1.0";
src = ./.;
buildInputs = [ plugin1 plugin2 jdk ];
installPhase = ''
mkdir -p $out/bin
cp app.jar $out/bin/
# Ensure plugins are retained in closure
mkdir -p $out/nix-support
echo ${plugin1} >> $out/nix-support/propagated-user-env-packages
echo ${plugin2} >> $out/nix-support/propagated-user-env-packages
# Or just create a reference anywhere in $out:
echo "Plugins: ${plugin1} ${plugin2}" > $out/nix-support/plugins.txt
'';
}
By writing the store paths to any file in $out, nix’s scanner will find them
and include those packages in the runtime closure.
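After building, you can verify that the references were picked up (assuming the plugin store paths contain "plugin" in their names; output illustrative):
$ nix-store -q --references ./result | grep plugin
/nix/store/...-plugin1-...
/nix/store/...-plugin2-...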
Patching for Runtime Dependencies
Some programs hardcode paths that need to be fixed to work with nix. This is especially common with:
- Scripts with hardcoded shebangs
- Programs looking for libraries in /lib or /usr/lib
- Python/Ruby/Perl scripts importing system modules
Patching Shebangs
Nix automatically patches shebangs during fixupPhase:
# Before:
#!/usr/bin/python3
# After:
#!/nix/store/xxx-python3/bin/python3
This happens automatically for any file with execute permissions.
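If a script lacks the execute bit, or lives in a directory the automatic fixup doesn't cover, the same machinery can be invoked manually with the patchShebangs helper from stdenv:
postPatch = ''
  # Rewrite shebangs of all scripts under scripts/ to point into the nix store
  patchShebangs scripts/
'';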
Patching Hardcoded Paths
For programs that hardcode system paths:
{ stdenv, python3, curl }:
stdenv.mkDerivation {
pname = "my-tool";
version = "1.0";
src = ./.;
buildInputs = [ python3 curl ];
postPatch = ''
# Fix hardcoded paths in source
substituteInPlace tool.py \
--replace '/usr/bin/python3' '${python3}/bin/python3' \
--replace '/usr/bin/curl' '${curl}/bin/curl'
'';
}
The substituteInPlace function (from stdenv) performs in-place replacements.
Python Runtime Imports
Python programs that import other modules need those modules in PYTHONPATH.
However, modifying PYTHONPATH globally isn’t always reliable, especially when
the Python program executes other programs.
For Python packages, use buildPythonPackage:
{ python3Packages }:
python3Packages.buildPythonPackage {
pname = "my-python-app";
version = "1.0";
src = ./.;
propagatedBuildInputs = with python3Packages; [
requests
click
pyyaml
];
# buildPythonPackage automatically:
# - Sets up PYTHONPATH
# - Creates wrapper scripts
# - Handles site-packages layout
}
This ensures Python can find its dependencies reliably.
Perl and Ruby
Similar patterns apply to Perl and Ruby:
{ perlPackages }:
perlPackages.buildPerlPackage {
pname = "my-perl-script";
version = "1.0";
src = ./.;
propagatedBuildInputs = with perlPackages; [
LWP
JSON
];
}
Reducing Runtime Closure Size
Large closures can be problematic for deployments. Here are strategies to reduce them:
Use Multiple Outputs
Split development files from runtime files:
stdenv.mkDerivation {
pname = "mylib";
version = "1.0";
src = ./.;
outputs = [ "out" "dev" "doc" ];
postInstall = ''
# Headers go to dev
moveToOutput "include" "$dev"
# Docs go to doc
moveToOutput "share/doc" "$doc"
'';
}
See the multiple outputs section for details.
Avoid Unnecessary Propagation
Don’t use propagatedBuildInputs unless necessary. It increases downstream closures.
Use Static Linking Selectively
Static linking can reduce runtime dependencies but increases binary size and circumvents the principle of “maximal sharing” of dependencies between packages.
Inspect and Eliminate Unwanted References
Sometimes build artifacts accidentally reference build-time dependencies:
# Find why a build tool is in the closure
$ nix why-depends ./result /nix/store/xxx-gcc
# The culprit might be a debug symbol or metadata file
Use removeReferencesTo to strip unwanted references:
{ stdenv, removeReferencesTo, gcc }:
stdenv.mkDerivation {
pname = "app";
version = "1.0";
src = ./.;
nativeBuildInputs = [ removeReferencesTo ];
postInstall = ''
# Remove reference to gcc from the binary
remove-references-to -t ${gcc} $out/bin/app
'';
}
Warning: Only remove references you’re certain are not needed at runtime.
Common Runtime Dependency Patterns
GUI Applications
Desktop applications need many runtime components:
{ stdenv, lib, makeWrapper, gtk3, dconf, hicolor-icon-theme, shared-mime-info }:
stdenv.mkDerivation {
pname = "gui-app";
version = "1.0";
src = ./.;
nativeBuildInputs = [ makeWrapper ];
buildInputs = [ gtk3 ];
postInstall = ''
wrapProgram $out/bin/gui-app \
--prefix XDG_DATA_DIRS : "$XDG_ICON_DIRS:$GSETTINGS_SCHEMAS_PATH" \
--prefix GIO_EXTRA_MODULES : "${lib.getLib dconf}/lib/gio/modules"
'';
# Ensure icon and mime databases are in closure
preFixup = ''
mkdir -p $out/nix-support
echo ${hicolor-icon-theme} >> $out/nix-support/runtime-deps
echo ${shared-mime-info} >> $out/nix-support/runtime-deps
'';
}
Electron Applications
Electron apps bundle their runtime but may need system libraries:
{ stdenv, lib, makeWrapper, electron, libpulseaudio, libnotify }:
stdenv.mkDerivation {
pname = "electron-app";
version = "1.0";
src = ./.;
nativeBuildInputs = [ makeWrapper ];
installPhase = ''
mkdir -p $out/bin $out/share/app
cp -r app/* $out/share/app/
makeWrapper ${electron}/bin/electron $out/bin/electron-app \
--add-flags "$out/share/app" \
--prefix LD_LIBRARY_PATH : "${lib.makeLibraryPath [ libpulseaudio libnotify ]}"
'';
}
Shell Scripts with Many Tools
Scripts using many utilities:
{ stdenv, lib, makeWrapper, coreutils, findutils, gnugrep, gnused, gawk }:
stdenv.mkDerivation {
pname = "toolbox-script";
version = "1.0";
src = ./.;
nativeBuildInputs = [ makeWrapper ];
installPhase = ''
mkdir -p $out/bin
cp script.sh $out/bin/toolbox
wrapProgram $out/bin/toolbox \
--prefix PATH : ${lib.makeBinPath [
coreutils
findutils
gnugrep
gnused
gawk
]}
'';
}
Debugging Runtime Dependency Issues
Program Can’t Find Shared Library
$ ./result/bin/myapp
error while loading shared libraries: libfoo.so.1: cannot open shared object file
This means that the binary didn’t have a library search path which included the library in question.
Specifically, there was a DT_NEEDED entry for libfoo.so.1 but there wasn’t a
path in DT_RUNPATH which contained such a library. See the binary patching section
for how to fix binary files to work with nix.
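To compare what the binary asks for against where it looks, patchelf can print both fields (output shown is illustrative):
$ patchelf --print-needed ./result/bin/myapp
libfoo.so.1
libc.so.6
$ patchelf --print-rpath ./result/bin/myapp
/nix/store/...-glibc-.../lib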
Program Executes but Can’t Find Helper Tools
$ ./result/bin/myscript
grep: command not found
Solution: Wrap the program to provide tools on PATH:
postInstall = ''
wrapProgram $out/bin/myscript \
--prefix PATH : ${lib.makeBinPath [ gnugrep ]}
'';
Unexpected Large Closure
$ nix path-info -Sh ./result
./result 2.3G # Way too large!
Solution: Find the culprit:
# List closure sorted by size
$ nix path-info -rSh ./result | sort -h
# Check why a large package is included
$ nix why-depends --precise ./result /nix/store/xxx-large-package
Summary
Runtime dependencies in nix are:
- Automatically detected: Nix scans outputs for store path references
- Inspectable: Use nix-store -q --references and --requisites
- Explicitly controllable: Use wrapProgram for environment setup
- Can be forced: Write references to $out/nix-support when needed
- May need patching: Fix shebangs and hardcoded paths
- Should be minimized: Smaller closures are faster to deploy
Understanding runtime dependencies helps you create packages that work reliably in nix’s isolated environment while keeping deployments efficient.
Shell Hooks and Toolchain Support
In the previous chapters, we’ve seen that stdenv.mkDerivation
provides defaults for building software with Make and autotools. However, many modern
projects use different build systems like CMake, Meson, or Bazel. Rather than requiring
package authors to manually configure these tools for every package, nix uses a mechanism
called “setup hooks” to automatically adapt the build environment.
This chapter explains how setup hooks work, how they modify the build process, and how to use common build systems with nix.
What Are Setup Hooks?
Setup hooks are shell scripts that run during the build setup phase, before any of the
standard phases like unpackPhase or configurePhase execute. They allow packages to
inject logic that modifies the build environment for themselves and any package that
depends on them.
Setup hooks are particularly useful for:
- Configuring environment variables for build tools
- Adding new build phases or modifying existing ones
- Registering files for automatic processing
- Setting up language-specific build environments
When a package is added to nativeBuildInputs, its setup hook (if it has one) will
automatically run during the build setup. This makes build tool integration seamless.
How Setup Hooks Work
During the setup phase (before any user-defined or standard phases run), nix sources
the setup hooks of all packages in nativeBuildInputs and propagatedNativeBuildInputs.
These hooks can:
- Set environment variables - Configure paths, flags, or tool-specific settings
- Modify phases - Append to or replace phase implementations
- Register hooks - Add pre/post hooks to existing phases
- Define functions - Provide utilities for use in build scripts
The key insight is that setup hooks run in the same shell environment where your build phases will execute, so any variables or functions they define are available throughout the build.
A Simple Example: pkg-config
Before diving into complex build systems, let’s look at a simple setup hook. The
pkg-config package provides a setup hook that sets PKG_CONFIG_PATH:
{ stdenv, pkg-config, openssl }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
nativeBuildInputs = [ pkg-config ];
buildInputs = [ openssl ];
# pkg-config's setup hook automatically sets PKG_CONFIG_PATH
# to include openssl's .pc files
}
The setup hook (simplified) does something like:
# From pkg-config's setup hook
addPkgConfigPath() {
addToSearchPath PKG_CONFIG_PATH $1/lib/pkgconfig
addToSearchPath PKG_CONFIG_PATH $1/share/pkgconfig
}
addEnvHooks "$hostOffset" addPkgConfigPath
This automatically adds the lib/pkgconfig and share/pkgconfig directories of all
buildInputs to PKG_CONFIG_PATH, making libraries discoverable.
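You can confirm this from a development shell; with pkg-config in nativeBuildInputs and openssl in buildInputs, the library resolves without any manual setup (the attribute name here is illustrative):
$ nix-shell '<nixpkgs>' -A example
[nix-shell]$ echo $PKG_CONFIG_PATH
[nix-shell]$ pkg-config --cflags --libs openssl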
CMake: A Detailed Example
CMake is one of the most popular build systems for C/C++ projects. Let’s see how its
setup hook modifies stdenv.mkDerivation behavior.
Basic CMake Package
{ stdenv, cmake, fetchurl }:
stdenv.mkDerivation {
pname = "example-cmake-project";
version = "1.0";
src = fetchurl {
url = "https://example.com/project-1.0.tar.gz";
sha256 = "...";
};
nativeBuildInputs = [ cmake ];
}
By simply adding cmake to nativeBuildInputs, the setup hook automatically:
- Changes configurePhase to run cmake instead of ./configure
- Sets up CMAKE_PREFIX_PATH (actually NIXPKGS_CMAKE_PREFIX_PATH) to find dependencies
- Configures the install prefix to $out
- Sets up appropriate build types and CMake flags (sketched below)
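Conceptually, the resulting configure phase is roughly equivalent to the following (simplified; the real hook sets many more flags):
mkdir -p build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$out -DCMAKE_BUILD_TYPE=Release $cmakeFlags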
Meson: Modern Build System
Meson is another popular build system that emphasizes speed and user-friendliness.
Like CMake, it has a setup hook that integrates with stdenv.mkDerivation.
Basic Meson Package
{ stdenv, meson, ninja, pkg-config, glib }:
stdenv.mkDerivation {
pname = "example-meson-project";
version = "1.0";
src = ./.;
nativeBuildInputs = [ meson ninja pkg-config ];
buildInputs = [ glib ];
# Meson's setup hook handles the rest
}
Note: Meson requires Ninja as the actual build backend, so both meson and ninja
should be in nativeBuildInputs.
What Meson’s Setup Hook Provides
The Meson setup hook:
- Replaces configurePhase with meson setup
- Sets the build directory to build by default
- Configures the install prefix to $out
- Sets up cross-compilation parameters automatically
The configure phase becomes:
meson setup build --prefix=$out $mesonFlags
And the build phase uses:
ninja -C build
Writing Custom Setup Hooks
Sometimes you need to create your own setup hook for a custom build tool or to provide reusable build logic.
Setup Hook File Location
A package can provide a setup hook by placing a script at:
$out/nix-support/setup-hook
Example: Custom Build Tool
{ stdenv }:
stdenv.mkDerivation {
pname = "my-build-tool";
version = "1.0";
src = ./.;
installPhase = ''
mkdir -p $out/bin $out/nix-support
cp my-build-tool $out/bin/
# Create setup hook
cat > $out/nix-support/setup-hook <<'EOF'
# This hook runs when my-build-tool is in nativeBuildInputs
# Add our tool to PATH
addToSearchPath PATH @out@/bin
# Set environment variables
export MY_BUILD_TOOL_HOME=@out@
# Customize the build phase
myBuildToolBuildPhase() {
echo "Running my-build-tool..."
my-build-tool build $myBuildToolFlags
}
# Use this as the build phase if BUILD_TOOL variable is set
if [ -n "$USE_MY_BUILD_TOOL" ]; then
buildPhase=myBuildToolBuildPhase
fi
EOF
# Substitute @out@ with actual store path
substituteInPlace $out/nix-support/setup-hook \
--replace @out@ $out
'';
}
Now packages can use it:
{ stdenv, my-build-tool }:
stdenv.mkDerivation {
pname = "project";
version = "1.0";
src = ./.;
nativeBuildInputs = [ my-build-tool ];
# The setup hook automatically configures the environment
USE_MY_BUILD_TOOL = true;
myBuildToolFlags = [ "--optimize" "--verbose" ];
}
Common Setup Hook Patterns
Setup hooks often use these patterns:
Adding to Search Paths
# Add directory to PATH for all dependencies
addToSearchPathWithCustom() {
addToSearchPath PATH $1/bin
addToSearchPath LIBRARY_PATH $1/lib
}
addEnvHooks "$hostOffset" addToSearchPathWithCustom
Modifying Phases
# Append to an existing phase
postConfigureHooks+=('echo "Configure completed"')
# Or replace a phase entirely
configurePhase=myCustomConfigurePhase
Registering File Types
# Process all .proto files automatically
processProto() {
for proto in "$1"/**/*.proto; do
protoc "$proto"
done
}
addEnvHooks "$hostOffset" processProto
The shellHook Attribute
The shellHook attribute in mkDerivation is different from setup hooks. It only
runs when entering a nix-shell development environment, not during builds:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
nativeBuildInputs = [ cmake ];
# This only runs in nix-shell, not during nix-build
shellHook = ''
echo "Welcome to the development environment!"
echo "CMake version: $(cmake --version | head -1)"
echo "Build directory: $(pwd)"
'';
}
This is useful for:
- Setting up development tools
- Displaying helpful information
- Configuring IDE integration
- Setting up pre-commit hooks
- Running a workflow (e.g. update script)
Debugging Setup Hooks
When things don’t work as expected, you can debug setup hooks:
View All Hooks
In a nix-shell, inspect what’s been set up:
$ nix-shell '<nixpkgs>' -A your-package
# See environment variables
[nix-shell]$ env | grep -i cmake
# See defined functions
[nix-shell]$ declare -f | grep -A 5 "^cmake"
# See phase definitions
[nix-shell]$ declare -p configurePhase
Trace Hook Execution
Enable verbose output:
$ nix-build --verbose your-package.nix
Or add debugging to your package:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
nativeBuildInputs = [ cmake ];
preConfigure = ''
echo "=== Environment at configure time ==="
echo "CMAKE_PREFIX_PATH: $CMAKE_PREFIX_PATH"
echo "NIXPKGS_CMAKE_PREFIX_PATH: $NIXPKGS_CMAKE_PREFIX_PATH"
echo "cmakeFlags: $cmakeFlags"
'';
}
Further Reading
For more detailed documentation on specific build systems in nixpkgs:
- Setup Hooks: Nixpkgs Manual - Setup Hooks
The nixpkgs repository also contains many examples of setup hooks in:
nixpkgs/pkgs/build-support/setup-hooks/
Summary
Setup hooks are a powerful mechanism that allows build tools to seamlessly integrate
with stdenv.mkDerivation:
- Automatic integration: Adding a tool to nativeBuildInputs automatically activates its hook
- Environment configuration: Hooks set up variables, paths, and tool-specific settings
- Phase modification: Hooks can customize how configure, build, and install work
- Reusability: Write a hook once, use it across many packages
- Transparency: Hooks make build system integration feel natural and idiomatic
By understanding setup hooks, you can package software using any build system while maintaining the nix philosophy of reproducible, isolated builds.
Patching Packages
Software doesn’t always build perfectly in nix. Sometimes you need to modify source code before building: fixing hardcoded paths, backporting bug fixes, or applying patches that haven’t been released yet. Understanding how to patch packages is essential for maintaining a functional nixpkgs setup.
This chapter covers the various ways to modify source code during the build process, from simple string replacements to applying complex patch files.
Why Patch?
Common scenarios where patching is necessary:
- Unreleased fixes: A critical bug has been fixed upstream, but no release includes it yet
- Hardcoded paths: The software assumes files are in
/usror/bin - Nix-specific changes: Adjustments needed for nix’s isolated build environment
- Compatibility: Changes needed for newer or older dependencies
- Backports: Applying fixes from newer versions to older, stable versions
The goal is always the same: modify the source code in a reproducible way that can be tracked and maintained.
Small Changes: substituteInPlace
For simple string replacements, use substituteInPlace. This is part of stdenv
and performs in-place search-and-replace operations.
Basic Example
{ stdenv, curl, python3 }:
stdenv.mkDerivation {
pname = "my-tool";
version = "1.0";
src = ./.;
postPatch = ''
# Fix hardcoded path to curl
substituteInPlace src/download.sh \
--replace-fail '/usr/bin/curl' '${curl}/bin/curl'
# Fix Python shebang
substituteInPlace scripts/helper.py \
--replace-fail '/usr/bin/python' '${python3}/bin/python'
'';
}
Multiple Replacements
You can chain multiple replacements in a single call:
postPatch = ''
substituteInPlace config.mk \
--replace-fail '/usr' "$out" \
--replace-fail '/bin/bash' '${bash}/bin/bash' \
--replace-fail 'CC = gcc' 'CC = ${stdenv.cc}/bin/gcc'
'';
Common Patterns
Fixing Hardcoded Install Paths
postPatch = ''
substituteInPlace Makefile \
--replace 'PREFIX = /usr/local' 'PREFIX = ${placeholder "out"}'
'';
The placeholder function generates the output path reference that gets resolved
during the build.
Updating Configuration Paths
postPatch = ''
substituteInPlace src/config.h \
--replace '/etc/myapp' "$out/etc/myapp" \
--replace '/var/lib/myapp' "/var/lib/myapp" # Keep system paths
'';
replaceVars: Template Substitution
For more complex substitutions involving multiple variables, use replaceVars. This
function processes template files and replaces @variable@ patterns with their
corresponding values, with built-in safety checks to ensure all variables are replaced.
{ stdenv, replaceVars, python3, curl }:
stdenv.mkDerivation {
pname = "my-app";
version = "1.0";
src = ./.;
# Create a patched config file from a template
postInstall = ''
mkdir -p $out/etc
cp ${replaceVars ./config.template {
python = "${python3}/bin/python";
curl = "${curl}/bin/curl";
prefix = placeholder "out";
}} $out/etc/config.conf
'';
}
The template file config.template might look like:
python_path = @python@
curl_path = @curl@
install_prefix = @prefix@
Each @variable@ gets replaced with the corresponding attribute value.
Advantages over substituteInPlace:
- Type safety: Fails if variables in the template lack matching replacements
- Detection: Catches leftover unmatched @variable@ patterns
- Cleaner: Keeps templates separate from nix code
- Explicit: Use null to deliberately skip certain variables
Example: Patching Python Module to Use Nix Store Paths
A common scenario is patching Python modules that execute external programs. Python code often expects tools to be in system paths, but in nix they’re in the store.
Consider a Python module that calls git and ssh:
# mymodule/vcs.py
import subprocess
def clone_repo(url):
# This assumes git is in PATH
subprocess.run(['git', 'clone', url])
def ssh_command(host, cmd):
# This assumes ssh is in PATH
subprocess.run(['ssh', host, cmd])
In nix, we need to patch these to use full store paths:
{ lib
, python3Packages
, git
, openssh
, replaceVars
}:
python3Packages.buildPythonPackage {
pname = "mymodule";
version = "1.0";
src = ./.;
postPatch = ''
# Patch the Python code to use nix store paths
substituteInPlace mymodule/vcs.py \
--replace-fail "['git'" "['${git}/bin/git'" \
--replace-fail "['ssh'" "['${openssh}/bin/ssh'"
'';
meta = with lib; {
description = "VCS utilities with patched binary paths";
license = licenses.mit;
};
}
This pattern is essential for Python packages that shell out to system utilities, ensuring they work correctly since we cannot guarantee what will be on PATH when the script is run.
Large Changes: The patches Attribute
For substantial modifications, use actual patch files. These are typically generated
with git diff or diff -u and applied during the patchPhase.
Basic Patch Application
{ stdenv, fetchurl, fetchpatch }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = fetchurl {
url = "https://example.com/example-1.0.tar.gz";
sha256 = "...";
};
patches = [
# Apply a local patch file
./fix-build-error.patch
# Fetch a patch from upstream
(fetchpatch {
url = "https://github.com/example/example/commit/abc123.patch";
sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
})
];
}
Creating Patch Files
Generate patches from a package source:
nix-shell -A <pkg> # Create a nix shell which can perform the build
$ unpackPhase
$ cd $sourceRoot
$ git init . # Create a git repo, so we can generate a diff
$ git add .
$ git commit -m "init"
$ configurePhase # Get to the point in the build in which it fails
$ buildPhase
# Make changes to the code until build succeeds, then generate patch:
$ git diff > ../fix-build-error.patch # Then move the patch to the package dir
Patch File Best Practices
Place patches in the same directory as your nix expression:
pkgs/example/
├── default.nix
├── fix-build-error.patch
├── backport-security-fix.patch
└── nix-compat.patch
If you have many patches, it may be good to prefix the patches with a number to designate the order in which they should be applied. E.g. 0001-first.patch. Lists in Nix will preserve the order in which they were listed, but the numbered prefix is a good signal to other contributors that the order was intentional.
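For example (file names are hypothetical):
patches = [
  ./0001-fix-build-with-gcc13.patch
  ./0002-backport-crash-fix.patch
  ./0003-nix-compat.patch
];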
Document Your Patches
Always add comments explaining why patches exist:
patches = [
# Fix build error with GCC 13
# Upstream PR: https://github.com/example/example/pull/123
# Remove when version > 1.0
./fix-build-error.patch
# Backport security fix for CVE-2024-12345
# Upstream commit: https://github.com/example/example/commit/abc123
# Remove when version >= 1.1
(fetchpatch {
url = "https://github.com/example/example/commit/abc123.patch";
sha256 = "sha256-...";
})
# Nix-specific: Fix hardcoded /usr paths
# This will always be needed
./nix-compat.patch
];
Good comments include:
- Why the patch is needed
- Where it came from (upstream PR/commit)
- When it can be removed (version number or condition)
Using fetchpatch
fetchpatch is the preferred way to fetch patches from the internet. It automatically
sanitizes and normalizes patches for better reproducibility.
Basic Usage
{ fetchpatch }:
# In your derivation:
patches = [
(fetchpatch {
url = "https://github.com/user/repo/commit/abc123.patch";
sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
})
];
Getting the Hash
You can attempt to use nix-prefetch-url to get the hash:
$ nix-prefetch-url https://github.com/user/repo/commit/abc123.patch
sha256-RealHashWillBeShownHere
However, fetchpatch performs its own sanitization, so the hash from
nix-prefetch-url often won’t match. Instead, use a fake hash first:
(fetchpatch {
url = "https://github.com/user/repo/commit/abc123.patch";
sha256 = lib.fakeHash; # or "sha256-AAAA..."
})
Build and the error will show the correct hash:
error: hash mismatch in fixed-output derivation
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-RealHashForFetchpatchOutput=
Copy the “got” hash to your expression.
Why fetchpatch Hashes Differ
fetchpatch performs several normalizations:
- Strips timestamps from patch headers
- Normalizes line endings
- Removes trailing whitespace
- Sorts git index lines
fetchpatch also allows for explicit inclusion or exclusion of files.
This makes patches more stable across different git versions and patch formats, but means the hash differs from the raw file hash.
Excluding Files from Patches
Sometimes patches include unwanted changes (like test data or generated files):
(fetchpatch {
url = "https://github.com/user/repo/commit/abc123.patch";
sha256 = "sha256-...";
excludes = [
"tests/fixtures/*"
"*.png"
];
})
Including Only Specific Files
Conversely, you can extract only certain changes:
(fetchpatch {
url = "https://github.com/user/repo/commit/abc123.patch";
sha256 = "sha256-...";
includes = [
"src/*.c"
"include/*.h"
];
})
Relative Paths in Patches
If a patch’s paths don’t match your source tree:
(fetchpatch {
url = "https://github.com/user/repo/commit/abc123.patch";
sha256 = "sha256-...";
stripLen = 1; # Remove one directory level from paths
})
This is useful when a patch is generated from a subdirectory or monorepo.
Complex Example: Multiple Patches
Here’s a realistic example combining different patching techniques:
{ lib
, stdenv
, fetchFromGitHub
, fetchpatch
, cmake
, openssl
, zlib
}:
stdenv.mkDerivation rec {
pname = "example-app";
version = "2.1.0";
src = fetchFromGitHub {
owner = "example";
repo = "app";
rev = "v${version}";
sha256 = "sha256-...";
};
nativeBuildInputs = [ cmake ];
buildInputs = [ openssl zlib ];
patches = [
# Fix build with OpenSSL 3.0
# Upstream PR: https://github.com/example/app/pull/456
# Remove when version > 2.1.0
(fetchpatch {
name = "openssl-3-compat.patch";
url = "https://github.com/example/app/pull/456.patch";
sha256 = "sha256-...";
})
# Backport crash fix from main branch
# Upstream commit: https://github.com/example/app/commit/def789
# Remove when version >= 2.2.0
(fetchpatch {
name = "fix-crash-on-invalid-input.patch";
url = "https://github.com/example/app/commit/def789.patch";
sha256 = "sha256-...";
# Only include the actual fix, not test changes
includes = [ "src/*.cpp" "include/*.h" ];
})
# Nix-specific: Fix hardcoded paths
# This patch will always be needed
./fix-hardcoded-paths.patch
];
postPatch = ''
# Additional small fixes
substituteInPlace CMakeLists.txt \
--replace '/usr/share' "$out/share"
# Update version file
echo "${version}" > VERSION
# Fix test data paths
substituteInPlace tests/test_config.cpp \
--replace '../testdata' "$PWD/testdata"
'';
meta = with lib; {
description = "Example application with multiple patches";
homepage = "https://github.com/example/app";
license = licenses.mit;
};
}
Binary Patching: patchelf and autoPatchelfHook
Sometimes you need to patch compiled binaries, especially when dealing with
proprietary software or pre-built binaries. The patchelf tool modifies
ELF (Linux) binaries.
Manual Binary Patching
{ stdenv, patchelf, lib }:
stdenv.mkDerivation {
pname = "proprietary-app";
version = "1.0";
src = ./.;
nativeBuildInputs = [ patchelf ];
installPhase = ''
mkdir -p $out/bin
cp app $out/bin/
# Set the interpreter (dynamic linker)
patchelf --set-interpreter ${stdenv.cc.bintools.dynamicLinker} \
$out/bin/app
# Set RPATH so the app can find shared libraries
patchelf --set-rpath ${lib.makeLibraryPath [ stdenv.cc.cc.lib ]} \
$out/bin/app
'';
}
Using autoPatchelfHook
For applications with many binaries and libraries, autoPatchelfHook automatically
handles patching:
{ stdenv, autoPatchelfHook, openssl, zlib }:
stdenv.mkDerivation {
pname = "binary-app";
version = "1.0";
src = ./.;
nativeBuildInputs = [ autoPatchelfHook ];
# Specify runtime dependencies
buildInputs = [ openssl zlib ];
installPhase = ''
mkdir -p $out/{bin,lib}
cp -r bin/* $out/bin/
cp -r lib/* $out/lib/
# autoPatchelfHook runs automatically after install
'';
}
The hook:
- Finds all ELF binaries
- Sets the correct interpreter
- Fixes RPATH to find dependencies
- Reports missing dependencies
Debugging autoPatchelfHook
If autoPatchelfHook reports missing libraries:
buildInputs = [ openssl zlib ];
# Add missing dependencies
runtimeDependencies = [
# Additional libraries needed at runtime
# These are added to RPATH, generally runtimeDependencies is needed for dlopen calls
someOtherLib
];
# Or for system libraries
autoPatchelfIgnoreMissingDeps = [
# Libraries that are okay to be missing, provided impurely
"libGL.so.1"
];
When to Use Binary Patching
Use patchelf and autoPatchelfHook when:
- Packaging pre-compiled binaries
- Dealing with proprietary software
- Building from binary releases (AppImages, .deb packages, etc.)
- Dynamic libraries have wrong RPATH
For source-based builds, prefer source patches over binary patching.
Patch Application Order
Understanding the order of operations helps debug patching issues:
- unpackPhase: Extract the source
- patchPhase: Apply patches from the patches attribute
- postPatch: Hook run after patches are applied (ideal for substituteInPlace)
- configurePhase: Run build system configuration
- …rest of build
If patches fail, check:
- Patch paths match the source tree structure
- Patches apply cleanly (no conflicts)
- The source version matches what the patch expects
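A quick way to check whether a patch still applies cleanly is to try it against the unpacked source in a development shell (the attribute name is illustrative):
$ nix-shell '<nixpkgs>' -A example-app
[nix-shell]$ unpackPhase && cd $sourceRoot
[nix-shell]$ patch -p1 --dry-run < /path/to/fix-build-error.patch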
Best Practices
- Prefer source patches over binary patches - More maintainable and transparent
- Document why patches exist - Future maintainers need context
- Include removal conditions - Specify when patches can be removed
- Use fetchpatch for upstream patches - Better reproducibility
- Keep patches minimal - Only change what’s necessary
- Test patch removal regularly - Don’t keep stale patches
- Upstream your patches - Contribute fixes back to projects
Common Pitfalls
Forgetting to Update After Upstream Changes
When updating package versions, check if patches still apply:
# BAD: Patches for old version might not apply
version = "2.0"; # Updated, but patches still reference old code
patches = [ ./fix-for-1.0.patch ]; # This might fail!
Always test builds after version updates.
Patching Generated Files
Don’t patch files that get regenerated:
# BAD: configure is regenerated by autoreconf
patches = [ ./fix-configure.patch ];
# GOOD: Patch the source and regenerate
patches = [ ./fix-configure.ac.patch ];
nativeBuildInputs = [ autoreconfHook ];
Summary
Patching in nix provides powerful tools for modifying source code:
- substituteInPlace: Quick string replacements in source files
- patches attribute: Apply patch files during patchPhase
- fetchpatch: Fetch and normalize patches from the internet
- patchelf/autoPatchelfHook: Modify compiled binaries
- Documentation: Always explain why patches exist and when to remove them
The key is choosing the right tool:
- Small changes → substituteInPlace
- Large changes → patch files
- Binary modifications → autoPatchelfHook
Well-maintained patches make packages reliable and easy to update, while good documentation ensures the changes remain understandable over time.
Multiple Outputs
By default, a nix package produces a single output in the nix store. However, many packages can be split into multiple outputs, each containing a different subset of files. This splitting serves two critical purposes: reducing closure sizes and enabling cross-compilation.
This chapter explains why multiple outputs exist, how to create and use them, and common pitfalls to avoid.
Why Multiple Outputs?
Closure Size Reduction
The primary motivation for multiple outputs is reducing the runtime closure size of packages. Consider a typical library package that contains:
- Compiled shared libraries (.so or .dylib files)
- Header files (.h files)
- pkg-config files (.pc files)
- Static libraries (.a files)
- Documentation and man pages
- Binaries and utilities
When you build an application that depends on this library, you typically only need the shared libraries at runtime. The headers, static libraries, and documentation are only needed during the build.
Splitting these into separate outputs can reduce runtime closures by 30-50% for complex applications, which means:
- Faster downloads from binary caches
- Less disk usage on deployed systems
- Smaller Docker images
- Faster deployments and updates
Cross-Compilation Support
Multiple outputs also enable cleaner cross-compilation. When cross-compiling, you need to distinguish between:
- Build platform tools: Programs that run on the build machine
- Host platform libraries: Libraries for the target architecture
With multiple outputs, the bin output can be marked for the build platform while
lib and out are for the host platform. This ensures build tools don’t get
mixed into the target system’s closure.
Common Output Types
Nixpkgs uses standard output names with conventional purposes:
| Output | Contents | Purpose |
|---|---|---|
| out | Default output | Catch-all for files not in other outputs |
| dev | Development files | Headers, pkg-config, CMake configs, static libs |
| lib | Shared libraries | .so, .dylib, .dll files |
| bin | Executables | Binary programs |
| man | Man pages | Documentation in man format |
| doc | Documentation | General documentation, HTML, PDFs |
| info | Texinfo docs | GNU Info documentation |
| devdoc | Developer docs | API documentation |
| include | C/C++ headers | Decouples headers from dev (still being adopted circa 2026) |
Note: If only one or two outputs exist, out usually contains everything. The
split only happens when it provides meaningful closure size reduction.
Using Multiple Outputs
Referencing Outputs as a Consumer
When a package has multiple outputs, you can reference specific ones:
{ stdenv, openssl, zlib, pkg-config }:
stdenv.mkDerivation {
pname = "myapp";
version = "1.0";
src = ./.;
nativeBuildInputs = [
pkg-config
];
buildInputs = [
# Reference the dev output for headers and pkg-config files
# Generally this is not needed to be done explicitly by users and instead
# should pass the package instead of an output.
openssl.dev
zlib.dev
# The library outputs are automatically pulled in at runtime
# You don't usually need to specify them explicitly
];
}
The nix build system automatically handles runtime dependencies:
- During build: the dev output provides headers
- At runtime: only lib or out are included in the closure
Default Output Selection
When you reference a package without specifying an output, nix uses defaults:
buildInputs = [ openssl ];
# In buildInputs, defaults to openssl.dev (if it exists)
propagatedBuildInputs = [ openssl ];
# Also defaults to .dev for propagation
These defaults are context-sensitive and designed to “do the right thing” for typical use cases.
Explicit Output Selection
For clarity or special cases, be explicit:
buildInputs = [
openssl.dev # Headers for compilation
];
# Force a specific output
preInstall = ''
cp ${openssl.out}/lib/libssl.so $out/lib/
'';
Helper Functions
Use nixpkgs helper functions for clearer intent. These will return the out
output if the preferred output does not exist.
{ lib, stdenv, openssl, curl, postgresql }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
buildInputs = [
(lib.getDev openssl) # Development files
(lib.getLib curl) # Shared libraries
(lib.getBin postgresql) # Executables
];
}
Available helpers:
- lib.getDev pkg - Get the development output
- lib.getLib pkg - Get the library output
- lib.getBin pkg - Get the binary output
- lib.getMan pkg - Get the man pages output
- lib.getDevMan pkg - Get the dev and man outputs
Creating Multiple Outputs
Basic Output Declaration
To split a package into multiple outputs, declare them in the outputs attribute:
stdenv.mkDerivation {
pname = "mylib";
version = "1.0";
src = ./.;
# Declare the outputs
outputs = [ "out" "dev" "doc" ];
# The first output in the list becomes the default
}
Important: The first output in the list is the default. Typically, out should
be first unless you have a specific reason otherwise. For example, curl has
bin as the default since it’s more commonly used as a command than a library.
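You can confirm the ordering, and therefore the default, with the same command used earlier; the output list shown here is illustrative and may differ between nixpkgs revisions:
$ nix-instantiate --eval -E 'with import <nixpkgs> {}; curl.outputs'
[ "bin" "dev" "out" "man" "devdoc" ]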
Automatic Output Splitting
Stdenv provides automatic output splitting for common cases. Simply declaring
outputs = [ "out" "dev" ]; will automatically:
- Move include/ to $dev
- Move .pc files to $dev
- Move CMake config files to $dev
- Move static libraries to $dev
- Keep shared libraries in $out or $lib
Example with automatic splitting:
{ stdenv, cmake }:
stdenv.mkDerivation {
pname = "mylib";
version = "1.0";
src = ./.;
outputs = [ "out" "dev" ];
nativeBuildInputs = [ cmake ];
# After installation, stdenv automatically splits:
# $out/lib/*.so -> stays in $out
# $out/include/* -> moved to $dev/include
# $out/lib/pkgconfig/*.pc -> moved to $dev/lib/pkgconfig
# $out/lib/*.a -> moved to $dev/lib
}
Note: This can sometimes cause issues when the contents of a .cmake or .pc file
are not updated to reflect where the files were moved.
Manual Output Assignment
For custom file placement, use moveToOutput:
stdenv.mkDerivation {
pname = "mylib";
version = "1.0";
src = ./.;
outputs = [ "out" "dev" "doc" "examples" ];
postInstall = ''
# Move headers to dev output
moveToOutput "include" "$dev"
# Move documentation to doc output
moveToOutput "share/doc" "$doc"
# Move examples to their own output
mkdir -p $examples/share
mv $out/share/examples $examples/share/
'';
}
moveToOutput ensures that:
- The path is actually moved (not copied)
- Any references in moved files are updated
- Parent directories are cleaned up if empty
Output-Specific Paths
Each output is bound to a variable during the build, similar to $out:
outputs = [ "out" "dev" "bin" "doc" ];
installPhase = ''
# Install shared libraries to main output
mkdir -p $out/lib
cp *.so $out/lib/
# Install headers to dev output
mkdir -p $dev/include
cp *.h $dev/include/
# Install binaries to bin output
mkdir -p $bin/bin
cp mytool $bin/bin/
# Install docs to doc output
mkdir -p $doc/share/doc
cp -r docs/* $doc/share/doc/
'';
The placeholder Function
When you need to reference an output path during evaluation but before the actual
store path is known, use placeholder:
{ stdenv }:
stdenv.mkDerivation {
pname = "example";
version = "1.0";
src = ./.;
outputs = [ "out" "dev" ];
# WRONG: $dev is empty during some phases
configureFlags = [
"--includedir=$dev/include" # This won't work!
];
# CORRECT: Use placeholder
configureFlags = [
"--includedir=${placeholder "dev"}/include"
];
# Not possible to communicate during evaluation without placeholder
passthru.EXAMPLE_HOME = "${placeholder "out"}/share";
}
Why placeholder Is Needed
During evaluation and before the build, output paths aren't yet determined. The hash of an output depends
on the build inputs and process, so you would have a circular problem: the build influences the output path, and the output path would influence the build.
Using $dev directly only works if resolving dev is deferred to build time.
placeholder generates a temporary string that stdenv replaces with the actual
output path at the right time.
Common placeholder Usage
outputs = [ "out" "dev" "lib" ];
# In configure flags, these are escaped before passed to `./configure`
configureFlags = [
"--prefix=${placeholder "out"}"
"--includedir=${placeholder "dev"}/include"
"--libdir=${placeholder "lib"}/lib"
];
# In CMake flags
cmakeFlags = [
"-DCMAKE_INSTALL_PREFIX=${placeholder "out"}"
"-DCMAKE_INSTALL_INCLUDEDIR=${placeholder "dev"}/include"
];
# In passthru, these are not aware of the build and purely exist in evaluation
passthru.EXAMPLE_HOME="${placeholder "out"}/share";
When NOT to Use placeholder
You don’t need placeholder inside build phases, because nix assigns the output variables when it creates the build environment:
outputs = [ "out" "dev" ];
# GOOD: postInstall also has outputs available, may need to `mkdir $dev` if installation logic doesn't already do this
postInstall = ''
moveToOutput "include" "$dev"
'';
Advanced Multiple Output Patterns
One pattern is to hide rarely used commands in a separate bin output. This is only encouraged if they greatly increase the closure size, as it is generally a poor user experience.
# User-facing binaries stay in out
# (out is the default, so users get these)
# Developer tools go to bin
postInstall = ''
mkdir -p $bin/bin
mv $out/bin/toolkit-config $bin/bin/
mv $out/bin/toolkit-debug $bin/bin/
'';
Libraries with Optional Features
Split optional features into separate outputs:
stdenv.mkDerivation {
pname = "multimedia-lib";
version = "1.0";
src = ./.;
outputs = [ "out" "dev" "plugins" ];
postInstall = ''
# Core library in out
# Headers in dev (automatic)
# Optional plugins in separate output
mkdir -p $plugins/lib/plugins
mv $out/lib/plugins/* $plugins/lib/plugins/
'';
}
Users who don’t need plugins won’t download them.
Debugging Multiple Outputs
Check What Outputs Exist
$ nix-instantiate --eval -E 'with import <nixpkgs> {}; openssl.outputs'
[ "bin" "dev" "out" "man" "doc" ]
See What’s in Each Output
$ nix-build '<nixpkgs>' -A openssl.dev
$ tree result-dev/
result-dev/
├── include
│ └── openssl
│ ├── aes.h
│ ├── ...
└── lib
└── pkgconfig
├── libcrypto.pc
└── libssl.pc
$ nix-build '<nixpkgs>' -A openssl.out
$ tree result/
result/
└── lib
├── libcrypto.so.3
└── libssl.so.3
Building all outputs
In nix 1.0 and 2.0, there’s a pseudo all attribute which builds all outputs:
$ nix-build -A openssl.all
...
/nix/store/uuu-openssl-3.6.1-bin
/nix/store/vvv-openssl-3.6.1-dev
/nix/store/www-openssl-3.6.1
/nix/store/xxx-openssl-3.6.1-man
/nix/store/yyy-openssl-3.6.1-doc
/nix/store/zzz-openssl-3.6.1-debug
In nix 3.0 (flakes), the ^* syntax is used to build all outputs:
$ nix build .#openssl^* --print-out-paths
...
/nix/store/uuu-openssl-3.6.1-bin
/nix/store/vvv-openssl-3.6.1-dev
/nix/store/www-openssl-3.6.1
/nix/store/xxx-openssl-3.6.1-man
/nix/store/yyy-openssl-3.6.1-doc
/nix/store/zzz-openssl-3.6.1-debug
Common Pitfalls
Forgetting placeholder in Configure Flags
# BAD: $dev will get escaped
configureFlags = [ "--includedir=$dev/include" ];
# GOOD: Use placeholder
configureFlags = [ "--includedir=${placeholder "dev"}/include" ];
Wrong Default Output
# BAD: dev is default, users get headers by default
outputs = [ "dev" "out" ];
# GOOD: out is default
outputs = [ "out" "dev" ];
The first output is what users get when they install the package.
Circular Dependencies Between Outputs
# BAD: dev references out, out references dev
outputs = [ "out" "dev" ];
postInstall = ''
echo "${placeholder "out"}" > $dev/dep
echo "${placeholder "dev"}" > $out/dep
'';
Outputs should form a directed acyclic graph (DAG). Typically:
- dev can reference out or lib
- out should not reference dev; referencing bin is frowned upon
- bin can reference lib or out
Not using dev with propagated inputs
If a dev output is missing, then propagated inputs will have their closure
added to out. In some cases, this can bloat the outputs by several GBs.
# BAD: Defaulting to out for propagating inputs
outputs = [ "out" ];
# openssl.dev is added to $out/nix-support/propagated-build-inputs
propagatedBuildInputs = [ openssl ];
# GOOD: Use dev output
outputs = [ "out" "dev" ];
# openssl.dev is added to $dev/nix-support/propagated-build-inputs
propagatedBuildInputs = [ openssl ];
Not Using moveToOutput
# BAD: Files stay in out, defeating the purpose
outputs = [ "out" "dev" ];
postInstall = ''
mkdir -p $dev/include
cp $out/include/* $dev/include/ # Copied, not moved!
# Files still in $out/include
'';
# GOOD: Use moveToOutput
postInstall = ''
moveToOutput "include" "$dev" # Properly moved
'';
Hardcoding Output Paths
# BAD: Assumes a specific output path; this will always be incorrect, since the drv output paths change with every rebuild
postPatch = ''
substituteInPlace config.sh \
--replace '/usr/include' '/nix/store/xxx-mylib-dev/include'
'';
# GOOD: Use placeholder
postPatch = ''
substituteInPlace config.sh \
--replace '/usr/include' '${placeholder "dev"}/include'
'';
When to Use Multiple Outputs
Use multiple outputs when:
- Large headers or docs: Headers/docs are >5MB and rarely needed at runtime
- Optional components: Some users need plugins/tools, others don’t
- Cross-compilation: Separating build tools from target libraries
- Clear boundaries: Natural separation exists (libs vs bins vs docs)
Don’t use multiple outputs when:
- Everything is needed together: No clear separation of concerns
- Single-purpose packages: The package only produces one type of file
Complete Example
Here’s a realistic library package with multiple outputs:
{ lib
, stdenv
, fetchFromGitHub
, cmake
, pkg-config
, doxygen
}:
stdenv.mkDerivation rec {
pname = "graphics-lib";
version = "2.5.0";
src = fetchFromGitHub {
owner = "example";
repo = "graphics-lib";
rev = "v${version}";
sha256 = "sha256-...";
};
# Declare outputs: out is default for end users
outputs = [ "out" "dev" "doc" "examples" ];
nativeBuildInputs = [
cmake
pkg-config
doxygen
];
cmakeFlags = [
"-DBUILD_EXAMPLES=ON"
# The CMake setup hook will do these two definitions for you, but
# are included for demonstration purposes
"-DCMAKE_INSTALL_PREFIX=${placeholder "out"}"
"-DCMAKE_INSTALL_INCLUDEDIR=${placeholder "dev"}/include"
];
# Build API documentation
postBuild = ''
doxygen Doxyfile
'';
postInstall = ''
# Documentation goes to doc output
mkdir -p $doc/share/doc/graphics-lib
cp -r html $doc/share/doc/graphics-lib/
# Examples go to their own output
mkdir -p $examples/share/graphics-lib
mv $out/share/examples $examples/share/graphics-lib/
# Automatic splitting moves:
# - Headers to $dev/include
# - .pc files to $dev/lib/pkgconfig
# - Static libs to $dev/lib
# - Shared libs stay in $out/lib
'';
meta = with lib; {
description = "Graphics library with multiple outputs";
homepage = "https://github.com/example/graphics-lib";
license = licenses.mit;
platforms = platforms.unix;
};
}
Users of this library:
# Building an application: gets dev + out in closure during build
buildInputs = [ graphics-lib ];
# Only needs out at runtime (libraries)
# Runtime closure: graphics-lib.out (~2MB)
# Development closure: graphics-lib.dev (~10MB) + graphics-lib.out
# Documentation isn't pulled unless explicitly requested
# Documentation closure: graphics-lib.doc (~50MB)
Summary
Multiple outputs are a powerful feature for:
- Reducing closure sizes: Runtime closures can be 30-50% smaller
- Cleaner cross-compilation: Separate build tools from target artifacts
- Flexible deployments: Choose what to include in different contexts
Key concepts:
- outputs attribute: Declares what outputs exist
- placeholder function: Reference outputs before paths are known
- moveToOutput: Correctly split files between outputs
- Default output: First in the outputs list is what users get by default
- Helper functions: lib.getDev, lib.getLib, etc. for clarity
Used wisely, multiple outputs significantly reduce resource usage in nix systems without compromising functionality.
Build Helpers
“Build helpers” generally build upon stdenv.mkDerivation but handle more specific
workflows such as python or go package installation. By narrowing the focus, these
builders help provide a nicer abstraction for packaging these types of projects.
Most common helpers
Today, C/C++, python, rust, and go constitute the vast majority of maintained software. We will explore these four ecosystems, which should enable you to be very productive in leveraging Nix as a build tool.
Autotools
Although autotools has largely been deprecated in favor of newer build systems, it’s still commonly used in GNU and other foundational projects. The GNU build system (autotools) consists of autoconf, automake, and libtool which together generate portable Makefiles from template files.
Basic autotools package
Many autotools-based packages work out of the box with stdenv.mkDerivation since
stdenv includes built-in support for the standard ./configure && make && make install
workflow. Here’s a simple example:
{ lib, stdenv, fetchurl }:
stdenv.mkDerivation rec {
pname = "hello";
version = "2.12";
src = fetchurl {
url = "mirror://gnu/hello/${pname}-${version}.tar.gz";
sha256 = "1ayhp9v4m4rdhjmnl2bq3cibrbqqkgjbl3s7yk2nhlh8vj3ay16g";
};
meta = with lib; {
description = "A program that produces a familiar, friendly greeting";
homepage = "https://www.gnu.org/software/hello/";
license = licenses.gpl3Plus;
};
}
For packages that ship with pre-generated configure scripts (typically in release tarballs),
no additional configuration is needed. stdenv will automatically run ./configure,
make, and make install during the appropriate build phases.
Using autoreconfHook
Some projects require regenerating the build system files before building. This is common when:
- Packaging from a Git repository instead of a release tarball
- The distributed tarball is missing generated autotools files
- You need to apply patches that modify configure.ac or Makefile.am
In these cases, use autoreconfHook to regenerate the configure script and related files:
{ lib, stdenv, fetchFromGitHub, autoreconfHook }:
stdenv.mkDerivation rec {
pname = "example";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = "example";
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ autoreconfHook ];
meta = with lib; {
description = "An example autotools project";
license = licenses.mit;
};
}
The autoreconfHook will automatically run autoreconf -vfi before the configure phase,
which regenerates all the necessary build system files.
VCS sources vs release tarballs
Software distributions often provide two types of source archives:
- Release tarballs: Pre-built archives containing generated configure scripts and Makefile templates. These are ready to build with just ./configure && make.
- VCS snapshots: Direct exports from version control (Git, SVN, etc.) that contain only the source files and autotools templates (configure.ac, Makefile.am). These require running autotools to generate the build system.
When a project’s release tarball is missing autotools-generated files (or the release
tarball is not available), you’ll need to fetch from VCS and use autoreconfHook:
{ lib, stdenv, fetchFromGitHub, autoreconfHook, pkg-config }:
stdenv.mkDerivation rec {
pname = "some-tool";
version = "2.1.0";
# Fetching from VCS instead of release tarball
src = fetchFromGitHub {
owner = "upstream";
repo = pname;
rev = "v${version}";
sha256 = "...";
};
# Required to regenerate build system from VCS checkout
nativeBuildInputs = [ autoreconfHook pkg-config ];
meta = with lib; {
description = "Example tool built from VCS";
license = licenses.gpl2Plus;
};
}
Common fixes and solutions
Missing build dependencies
Autotools-based packages often use pkg-config to find dependencies. Make sure
to include pkg-config in nativeBuildInputs:
nativeBuildInputs = [ autoreconfHook pkg-config ];
buildInputs = [ libfoo libbar ];
Configure flags
Many autotools packages accept flags to customize the build. Common patterns include:
configureFlags = [
"--enable-feature"
"--disable-unwanted"
"--with-library=${lib.getDev somelibrary}"
"--without-optional-dep"
];
Parallel building issues
Some older autotools projects have race conditions in parallel builds. If you encounter random build failures, try disabling parallel building:
enableParallelBuilding = false;
Missing install directories
Occasionally the install phase fails because the Makefile doesn’t create necessary
directories. Use preInstall to create them:
preInstall = ''
mkdir -p $out/bin $out/share/man/man1
'';
Documentation build failures
Some packages try to build documentation that requires tools not in the build environment. You can often skip documentation:
configureFlags = [ "--disable-doc" "--disable-gtk-doc" ];
# or use make flags
makeFlags = [ "DOC=no" ];
Out-of-tree builds
Some autotools projects have issues with in-tree builds. Use preConfigure to set up
an out-of-tree build:
preConfigure = ''
mkdir build
cd build
configureScript=../configure
'';
Summary
Autotools support in nixpkgs is mature and works well for most projects:
- Use autoreconfHook when building from VCS or when regeneration is needed
- Prefer release tarballs when available, as they include pre-generated files
- Common issues can usually be solved with configure flags or simple phase hooks
- Check nixpkgs for similar packages when encountering build issues
CMake
CMake is a widely-used cross-platform build system generator that creates native build files (like Makefiles or Ninja build files) from a platform-independent configuration. It’s particularly popular in C and C++ projects, and has become the de facto standard for many modern projects.
Basic CMake package
CMake support in nixpkgs is provided through the cmake setup hook. Simply adding
cmake to nativeBuildInputs will configure stdenv to use CMake for the build:
{ lib, stdenv, fetchFromGitHub, cmake }:
stdenv.mkDerivation rec {
pname = "example";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = "example";
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ cmake ];
meta = with lib; {
description = "An example CMake project";
license = licenses.mit;
};
}
The CMake hook automatically handles the configure and build phases:
- Configure phase: Runs cmake with appropriate flags
- Build phase: Runs make (or ninja if available)
- Install phase: Runs make install
CMake flags and configuration
CMake uses -D flags to set configuration variables. These can be passed using
cmakeFlags:
{ lib, stdenv, fetchFromGitHub, cmake }:
stdenv.mkDerivation rec {
pname = "myapp";
version = "2.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = version;
sha256 = "...";
};
nativeBuildInputs = [ cmake ];
cmakeFlags = [
"-DENABLE_FEATURE=ON"
"-DBUILD_TESTING=OFF"
"-DUSE_SYSTEM_LIBS=ON"
];
meta = with lib; {
description = "Example with CMake flags";
license = licenses.asl20;
};
}
Common CMake toggles
Here are frequently-used CMake variables that control the build:
Build type:
# Expected usage
cmakeFlags = [
"-DCMAKE_BUILD_TYPE=Release" # Release, Debug, RelWithDebInfo, MinSizeRel
];
# Build Type is handled as a special case so that it can always be appended last
# Multiple definitions of the build type will only have the last occurrence honored.
cmakeBuildType = "Release";
Disable tests:
cmakeFlags = [
"-DBUILD_TESTING=OFF" # Standard CMake variable, `doCheck = false;` also sets this
"-DENABLE_TESTS=OFF" # Some projects use this instead
];
Library selection:
cmakeFlags = [
"-DUSE_SYSTEM_LIBS=ON" # Use system libraries instead of bundled
"-DWITH_OPENSSL=${openssl}" # Specify library location
];
Installation paths:
cmakeFlags = [
"-DCMAKE_INSTALL_BINDIR=bin"
"-DCMAKE_INSTALL_LIBDIR=lib"
"-DCMAKE_INSTALL_INCLUDEDIR=include"
];
These are usually set correctly by the cmake setup hook, but some packages may need project-specific values.
Using Ninja instead of Make
CMake can generate Ninja build files instead of Makefiles for faster builds.
Add ninja to nativeBuildInputs alongside cmake:
{ lib, stdenv, fetchFromGitHub, cmake, ninja }:
stdenv.mkDerivation rec {
pname = "fast-build";
version = "1.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ cmake ninja ];
meta = with lib; {
description = "CMake project using Ninja for faster builds";
license = licenses.mit;
};
}
The CMake hook will automatically detect Ninja and use it instead of Make.
Finding dependencies
CMake projects use find_package() and related functions to locate dependencies. For this to work in Nix,
dependencies must be in buildInputs or propagatedBuildInputs:
{ lib, stdenv, fetchFromGitHub, cmake
, boost, openssl, zlib
}:
stdenv.mkDerivation rec {
pname = "myproject";
version = "3.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ cmake ];
buildInputs = [ boost openssl zlib ];
meta = with lib; {
description = "Project with multiple dependencies";
license = licenses.gpl3;
};
}
For header-only libraries or libraries needed by downstream packages, use
propagatedBuildInputs instead:
propagatedBuildInputs = [ eigen ]; # Header-only library
Common issues and fixes
CMake can’t find dependencies
If CMake fails to find a package even though it’s in buildInputs, you may need
to help it locate the dependency:
cmakeFlags = [
"-DBoost_ROOT=${boost}"
"-DOPENSSL_ROOT_DIR=${openssl.dev}"
];
Unwanted bundled dependencies
Many CMake projects bundle third-party libraries. It's usually preferable to remove the bundled copies and build against system libraries instead:
postPatch = ''
# Remove bundled libraries
rm -rf third_party/zlib third_party/curl
'';
cmakeFlags = [
"-DUSE_SYSTEM_ZLIB=ON"
"-DUSE_SYSTEM_CURL=ON"
];
Tests fail or aren’t needed
Disable building and running tests if they’re problematic or unnecessary:
cmakeFlags = [ "-DBUILD_TESTING=OFF" ];
# Or keep and build tests but don't run them
doCheck = false;
If you want to run tests, ensure test dependencies are available:
nativeBuildInputs = [ cmake ];
checkInputs = [ gtest ];
doCheck = true;
Install paths are wrong
Some projects hardcode installation paths. Override them with CMake variables or fix the install phase:
cmakeFlags = [
"-DCMAKE_INSTALL_PREFIX=${placeholder "out"}"
"-DCMAKE_INSTALL_BINDIR=bin"
"-DCMAKE_INSTALL_LIBDIR=lib"
];
Or fix it in the install phase:
postInstall = ''
mkdir -p $out/bin
mv $out/usr/bin/* $out/bin/
rm -rf $out/usr
'';
Parallel build failures
While rare with CMake, parallel builds can sometimes cause issues:
enableParallelBuilding = false;
Cross-compilation issues
When cross-compiling, ensure CMake knows about the target platform. The setup hook usually handles this, but you may need:
cmakeFlags = [
"-DCMAKE_SYSTEM_NAME=${stdenv.hostPlatform.uname.system}"
"-DCMAKE_SYSTEM_PROCESSOR=${stdenv.hostPlatform.uname.processor}"
];
Build type warnings
Some packages warn about build type not being set. This is usually harmless, but you can silence it:
cmakeFlags = [
"-DCMAKE_BUILD_TYPE=${if stdenv.isDarwin then "Release" else "RelWithDebInfo"}"
];
GUI tools and Qt/GTK
CMake projects using GUI frameworks may need additional setup:
{ lib, stdenv, fetchFromGitHub, cmake
, qt5 # or gtk3
}:
stdenv.mkDerivation rec {
pname = "gui-app";
version = "1.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ cmake qt5.wrapQtAppsHook ];
buildInputs = [ qt5.qtbase qt5.qttools ];
meta = with lib; {
description = "Qt-based GUI application";
license = licenses.lgpl3;
};
}
Summary
CMake support in nixpkgs is robust and straightforward:
- Add cmake to nativeBuildInputs to enable CMake support
- Use cmakeFlags to pass configuration options via -D flags
- Add ninja for faster builds
- Most dependency issues can be solved with appropriate CMake flags
- Check existing nixpkgs packages for similar CMake-based projects when troubleshooting
Meson
Meson is a modern build system designed to be fast, user-friendly, and correct. It was created to address shortcomings in older build systems like autotools and CMake, with a focus on speed and ease of use. Meson uses the Ninja build backend by default, resulting in very fast incremental builds.
Key advantages of Meson include:
- Fast builds through Ninja backend
- Simple, readable build definitions in Python-like syntax
- Strong emphasis on correctness and reproducibility
- Excellent cross-compilation support
- Native support for modern language features
Meson is increasingly popular for systems programming projects, particularly in the GNOME ecosystem and other modern C/C++ projects.
Basic Meson package
Meson support in nixpkgs requires both meson and ninja in nativeBuildInputs.
The meson setup hook automatically configures the build system:
{ lib, stdenv, fetchFromGitHub, meson, ninja }:
stdenv.mkDerivation rec {
pname = "example";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = "example";
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ meson ninja ];
meta = with lib; {
description = "An example Meson project";
license = licenses.mit;
};
}
The Meson hook handles the build phases automatically:
- Configure phase: Runs meson setup to generate build files
- Build phase: Runs ninja to compile the project
- Install phase: Runs ninja install to install files
Meson configuration options
Meson uses -D flags to set build options. These are passed using mesonFlags:
{ lib, stdenv, fetchFromGitHub, meson, ninja, pkg-config }:
stdenv.mkDerivation rec {
pname = "myapp";
version = "2.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ meson ninja pkg-config ];
mesonFlags = [
"-Dfeature=enabled"
"-Dtests=disabled"
"-Ddocs=false"
];
meta = with lib; {
description = "Example with Meson flags";
license = licenses.gpl3;
};
}
Common Meson options
Meson projects typically use standardized option types: feature (enabled/disabled/auto),
boolean (true/false), combo (enumerated choices), string, and integer.
Build type:
mesonFlags = [
"-Dbuildtype=release" # plain, debug, debugoptimized, release, minsize
];
Note: nixpkgs sets this automatically based on build settings.
Disable tests and documentation:
mesonFlags = [
"-Dtests=disabled"
"-Ddocs=disabled"
];
Enable or disable features:
mesonFlags = [
"-Dfeature_name=enabled" # Force feature on
"-Doptional_feature=auto" # Enable if dependencies available
"-Dunwanted=disabled" # Force feature off
];
Library preferences:
mesonFlags = [
"-Duse_system_libs=true"
"-Dprefer_static=false"
];
Installation directories:
mesonFlags = [
"-Dbindir=bin"
"-Dlibdir=lib"
"-Dincludedir=include"
];
These are usually set correctly by default through Meson’s standard directory options.
Meson build types
Meson supports several predefined build types that control optimization and debug info:
- plain: No added flags
- debug: Minimal optimization, full debug info
- debugoptimized: Optimization with debug info (default)
- release: Full optimization, no debug info
- minsize: Optimize for size
The meson hook in nixpkgs typically sets appropriate defaults, but you can override:
mesonBuildType = "release";
Finding dependencies
Meson projects use dependency() to find libraries. Dependencies should be in
buildInputs (libraries) or nativeBuildInputs (tools):
{ lib, stdenv, fetchFromGitHub, meson, ninja, pkg-config
, glib, gtk3, libxml2
}:
stdenv.mkDerivation rec {
pname = "myproject";
version = "3.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ meson ninja pkg-config ];
buildInputs = [ glib gtk3 libxml2 ];
meta = with lib; {
description = "GTK application with multiple dependencies";
license = licenses.lgpl2Plus;
};
}
Note that pkg-config is frequently needed with Meson projects, as it’s the primary
mechanism for finding dependencies.
Common issues and fixes
Missing pkg-config
Meson relies heavily on pkg-config for dependency discovery. If you see errors about
missing dependencies that are actually in buildInputs, add pkg-config:
nativeBuildInputs = [ meson ninja pkg-config ];
Dependency not found
If Meson can’t find a dependency through pkg-config, you may need to help it:
mesonFlags = [
"-Dlibfoo=enabled"
];
# Or set PKG_CONFIG_PATH
preConfigure = ''
export PKG_CONFIG_PATH="${lib.getDev somelibrary}/lib/pkgconfig:$PKG_CONFIG_PATH"
'';
Tests fail or require network access
Disable tests if they’re problematic:
mesonFlags = [ "-Dtests=disabled" ];
# Or don't run them even if built
doCheck = false;
For projects that hardcode test execution, you might need to patch meson.build:
postPatch = ''
# Disable test subdirectory
substituteInPlace meson.build \
--replace "subdir('tests')" ""
'';
Introspection and cross-compilation
When cross-compiling, disable introspection if it causes issues:
mesonFlags = [
"-Dintrospection=disabled"
];
Unwanted dependencies
Some Meson projects auto-detect optional dependencies. Explicitly disable them:
mesonFlags = [
"-Doptional_feature=disabled"
];
Documentation build failures
Documentation builds often require tools like doxygen or sphinx. Either add them or disable docs:
# Disable
mesonFlags = [ "-Ddocs=disabled" ];
# Or add dependencies
nativeBuildInputs = [ meson ninja pkg-config sphinx ];
mesonFlags = [ "-Ddocs=enabled" ];
Install paths issues
Some projects may install files to unexpected locations. Fix in post-install:
postInstall = ''
mkdir -p $out/bin
mv $out/usr/local/bin/* $out/bin/ || true
'';
Subprojects and wraps
Meson supports “wraps” for bundled dependencies. To use system libraries instead:
postPatch = ''
# Remove wrap files to force system dependencies
rm -rf subprojects/*.wrap
'';
mesonFlags = [
"-Ddefault_library=shared"
"--wrap-mode=nodownload" # Prevent downloading dependencies
];
Detailed example
Here’s a comprehensive example of a Meson-based GTK application with multiple dependencies and configuration options:
{ lib
, stdenv
, fetchFromGitHub
, meson
, ninja
, pkg-config
, wrapGAppsHook
, desktop-file-utils
, appstream-glib
, glib
, gtk4
, libadwaita
, json-glib
, sqlite
, curl
}:
stdenv.mkDerivation rec {
pname = "example-gtk-app";
version = "2.5.0";
src = fetchFromGitHub {
owner = "example";
repo = "gtk-app";
rev = "v${version}";
sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
};
nativeBuildInputs = [
meson
ninja
pkg-config
wrapGAppsHook # Wrapper for GTK apps
desktop-file-utils # Desktop file validation
appstream-glib # AppStream metadata validation
];
buildInputs = [
glib
gtk4
libadwaita
json-glib
sqlite
curl
];
mesonFlags = [
"-Dbuildtype=release"
"-Dtests=disabled" # Disable tests for this build
"-Dprofile=default" # Use default profile (vs. development)
"-Dnetwork_backend=curl" # Choose curl over other backends
];
# Meson projects often validate desktop files and appstream data
# during build, which may fail in sandbox
postPatch = ''
# Don't fail build on validation warnings
substituteInPlace meson.build \
--replace "error_on_warning: true" "error_on_warning: false"
'';
# Run tests (if we enabled them)
# doCheck = true;
postInstall = ''
# Ensure all expected files are in place
test -f $out/bin/${pname}
test -f $out/share/applications/${pname}.desktop
test -f $out/share/icons/hicolor/scalable/apps/${pname}.svg
'';
meta = with lib; {
description = "Example GTK4/Libadwaita application";
longDescription = ''
A feature-rich GTK4 application demonstrating modern GNOME
application development with Meson build system.
'';
homepage = "https://github.com/example/gtk-app";
license = licenses.gpl3Plus;
platforms = platforms.linux;
};
}
Summary
Meson provides a modern, fast build system with excellent nixpkgs integration:
- Always include both meson and ninja in nativeBuildInputs
- Add pkg-config for dependency discovery in most projects
- Use mesonFlags to configure build options with -D flags
- Meson's standardized option types (feature/boolean/combo) provide consistency
- Meson is widely regarded as a modern replacement for autotools in C/C++ projects
- Cross-compilation and reproducibility are first-class features in Meson
Python
Python packaging in Nix requires special consideration due to how Python finds and loads
modules. Unlike compiled languages where dependencies are linked at build time, Python
searches for modules at runtime in specific directories. buildPythonPackage and
buildPythonApplication generally handle this gracefully, but it can still cause
issues in some situations.
Python and Nix: special concerns
Module discovery and site-packages
Traditional Python installations place packages in a shared site-packages directory
(e.g., /usr/lib/python3.11/site-packages). When you import a module, Python searches
these standard locations.
Nix takes a different approach:
- Each package has its own isolated store path
- Python packages are installed to $out/lib/python3.X/site-packages/
- Dependencies aren't automatically visible to Python programs
To solve this, Nix uses wrapper scripts that set PYTHONPATH to include all dependencies,
or builds Python environments that aggregate packages into a single site-packages directory.
Import-time vs runtime dependencies
Python has a unique characteristic where imports can happen at any point during execution, not just at program startup. This means:
- All Python dependencies must be available in PYTHONPATH
- Missing dependencies only manifest when code tries to import them (see the pythonImportsCheck sketch below)
- Optional dependencies may only be discovered during specific code paths
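Because of this, nixpkgs provides pythonImportsCheck, which imports the listed modules at the end of the build and fails early if anything is missing. A small sketch with a hypothetical module name:
# Fail the build if the package (or one of its dependencies) cannot be imported
pythonImportsCheck = [ "mypackage" ];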
Basic Python package
Python packages in nixpkgs use buildPythonPackage from the Python package set. Here’s
a minimal example:
{ lib, buildPythonPackage, fetchPypi }:
buildPythonPackage rec {
pname = "requests";
version = "2.31.0";
src = fetchPypi {
inherit pname version;
sha256 = "sha256-lC8FSGGo...";
};
meta = with lib; {
description = "Python HTTP for Humans";
homepage = "https://requests.readthedocs.io";
license = licenses.asl20;
};
}
The buildPythonPackage function automatically handles:
- Running python setup.py, pip install, poetry install, or other installers
- Installing to the correct site-packages directory
- Generating wrapper scripts with proper PYTHONPATH
- Managing Python version compatibility
buildPythonPackage vs buildPythonApplication
Nixpkgs provides two main functions for Python projects:
buildPythonPackage
Use for Python libraries that other Python packages depend on.
Packages built with buildPythonPackage:
- Are included in Python environments via python.withPackages
- Can be used as dependencies by other Python packages
- Are installed into site-packages (a minimal example follows below)
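A minimal sketch of such a library (hypothetical name and dependencies) shows the key point: modules imported at runtime go in propagatedBuildInputs so that downstream packages and python.withPackages can see them:
{ lib, buildPythonPackage, fetchPypi, setuptools, requests }:
buildPythonPackage rec {
  pname = "somelib";
  version = "1.2.3";
  pyproject = true;
  src = fetchPypi {
    inherit pname version;
    hash = "sha256-...";
  };
  nativeBuildInputs = [ setuptools ];
  # Modules that somelib imports at runtime
  propagatedBuildInputs = [ requests ];
  # Basic sanity check that the module is importable
  pythonImportsCheck = [ "somelib" ];
  meta = with lib; {
    description = "A hypothetical Python library";
    license = licenses.mit;
  };
}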
buildPythonApplication
Use for end-user Python applications that provide executable scripts and are not meant to be imported as modules:
{ lib, buildPythonApplication, fetchPypi, requests, click }:
buildPythonApplication rec {
pname = "myapp";
version = "2.0.0";
src = fetchPypi {
inherit pname version;
sha256 = "...";
};
propagatedBuildInputs = [ requests click ];
meta = with lib; {
description = "A Python application";
license = licenses.gpl3;
mainProgram = "myapp";
};
}
Packages built with buildPythonApplication:
- Are meant to be installed directly into user profiles
- Cannot be used as dependencies by other Python packages
- Have stricter dependency isolation to prevent conflicts
Rule of thumb: If your package provides a library that others import, use
buildPythonPackage. If it’s primarily a command-line tool or standalone application,
use buildPythonApplication.
Build formats and pyproject
Python has evolved through several packaging standards:
- setup.py (legacy): Imperative Python script
- setup.cfg (transitional): Declarative configuration
- pyproject.toml (modern): Standard metadata format (PEP 517/518)
pyproject = true
Modern Python packages use pyproject.toml with PEP 517 build backends. Enable this
with pyproject = true:
{ lib, buildPythonPackage, fetchPypi, setuptools }:
buildPythonPackage rec {
pname = "modern-package";
version = "3.0.0";
pyproject = true;
src = fetchPypi {
inherit pname version;
sha256 = "...";
};
nativeBuildInputs = [ setuptools ];
meta = with lib; {
description = "A modern Python package using pyproject.toml";
license = licenses.mit;
};
}
When pyproject = true:
- The build backend (setuptools, poetry-core, hatchling, etc.) must be in nativeBuildInputs
- Nix uses a PEP 517 compliant build process
- No setup.py is required
Common build backends
setuptools (most common):
nativeBuildInputs = [ setuptools ];
poetry-core (for Poetry projects):
nativeBuildInputs = [ poetry-core ];
hatchling (for Hatch projects):
nativeBuildInputs = [ hatchling ];
flit-core (for Flit projects):
nativeBuildInputs = [ flit-core ];
Common configuration options
Dependencies
propagatedBuildInputs: Runtime dependencies (libraries your package imports)
propagatedBuildInputs = [ requests numpy pandas ];
nativeBuildInputs: Build-time tools (build backends, etc.)
nativeBuildInputs = [ setuptools wheel ];
checkInputs: Test-only dependencies
checkInputs = [ pytest pytest-cov ];
Format specification
For older packages not using pyproject.toml:
# For setup.py based packages (default)
format = "setuptools";
# For packages with just a .whl wheel
format = "wheel";
# For other formats
format = "other";
Disabling tests
# Don't run tests at all
doCheck = false;
# Disable only some tests
nativeCheckInputs = [ pytestCheckHook ];
disabledTests = [ "network" ];
Python version constraints
The disabled attribute can be used to throw an evaluation error, so an unsupported
Python version fails fast for the user.
{ lib, buildPythonPackage, fetchPypi, pythonOlder }:
buildPythonPackage rec {
pname = "modern-package";
version = "1.0.0";
# Disable for Python < 3.8
disabled = pythonOlder "3.8";
# ...
}
Common issues and fixes
Missing dependencies
If imports fail at runtime, add the dependency to propagatedBuildInputs:
propagatedBuildInputs = [ missing-module ];
Note: buildInputs doesn’t work for Python packages - always use propagatedBuildInputs
for runtime dependencies.
Test failures
Disable problematic tests:
checkPhase = ''
# Skip specific tests
pytest -k "not test_problematic"
# Skip entire test files
pytest --ignore=tests/test_network.py
'';
Or disable all tests:
doCheck = false;
Build backend not found
Modern packages need their build backend specified:
pyproject = true;
nativeBuildInputs = [ setuptools ]; # or poetry-core, hatchling, etc.
Import errors for C extensions
Some packages build C extensions and need build dependencies:
{ lib, buildPythonPackage, fetchPypi, libxml2, libxslt }:
buildPythonPackage rec {
pname = "lxml";
version = "4.9.3";
src = fetchPypi {
inherit pname version;
sha256 = "...";
};
buildInputs = [ libxml2 libxslt ];
meta = with lib; {
description = "Python bindings for libxml2 and libxslt";
license = licenses.bsd3;
};
}
Optional dependencies
Some packages have optional feature sets. Use passthru.optional-dependencies:
propagatedBuildInputs = [ core-dep ];
passthru.optional-dependencies = {
dev = [ pytest mypy ];
docs = [ sphinx ];
};
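Consumers can then pull in an extra by referencing it from the dependency. A hedged sketch, assuming a dependency somepkg that exposes a docs extra:
# Include somepkg together with its "docs" extra (hypothetical names)
propagatedBuildInputs = [ somepkg ] ++ somepkg.optional-dependencies.docs;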
Packages not in PyPI
For packages not on PyPI, use fetchFromGitHub:
{ lib, buildPythonPackage, fetchFromGitHub, setuptools }:
buildPythonPackage rec {
pname = "custom-package";
version = "1.0.0";
pyproject = true;
src = fetchFromGitHub {
owner = "user";
repo = "custom-package";
rev = "v${version}";
sha256 = "...";
};
nativeBuildInputs = [ setuptools ];
meta = with lib; {
description = "Custom package from GitHub";
license = licenses.mit;
};
}
Setup requires
Some packages declare build dependencies in setup_requires which Nix doesn’t handle
automatically. Add them to nativeBuildInputs:
nativeBuildInputs = [ setuptools setuptools-scm wheel ];
Detailed example
Here’s a comprehensive example of a Python application with multiple dependencies, tests, and modern packaging:
{ lib
, buildPythonApplication
, fetchFromGitHub
, pythonOlder
# Build dependencies
, setuptools
, setuptools-scm
# Runtime dependencies
, click
, requests
, pyyaml
, rich
, sqlalchemy
# Test dependencies
, pytest
, pytest-cov
, pytest-mock
, pytestCheckHook
}:
buildPythonApplication rec {
pname = "example-cli-tool";
version = "2.5.0";
pyproject = true;
# Require Python 3.9 or newer
disabled = pythonOlder "3.9";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
};
# Needed because project uses setuptools-scm for version detection
env.SETUPTOOLS_SCM_PRETEND_VERSION = version;
nativeBuildInputs = [
setuptools
setuptools-scm
];
propagatedBuildInputs = [
click
requests
pyyaml
rich
sqlalchemy
];
nativeCheckInputs = [
pytest
pytest-cov
pytest-mock
pytestCheckHook
];
# Set pytest flags
pytestFlagsArray = [
"-v"
"--cov=${pname}"
"--cov-report=term-missing"
];
# Disable specific tests that require network or are flaky
disabledTests = [
"test_api_connection"
"test_downloads_file"
];
# Skip entire test modules
disabledTestPaths = [
"tests/integration/"
];
pythonImportsCheck = [
"example_cli_tool"
"example_cli_tool.commands"
];
meta = with lib; {
description = "Example CLI tool for managing workflows";
longDescription = ''
A comprehensive command-line tool demonstrating best practices
for Python applications in nixpkgs, including proper dependency
management, testing, and packaging.
'';
homepage = "https://github.com/example/example-cli-tool";
changelog = "https://github.com/example/example-cli-tool/releases/tag/v${version}";
license = licenses.asl20;
maintainers = with maintainers; [ ];
mainProgram = "example-cli";
};
}
This example demonstrates:
- Using buildPythonApplication for a CLI tool
- Modern pyproject = true packaging
- Python version constraints
- Build backend configuration (setuptools-scm)
- Clear separation of build, runtime, and test dependencies
- Comprehensive test configuration with pytestCheckHook
- Selective test disabling
- Import checks for basic validation
- Complete metadata including changelog
Creating Python environments
To use multiple Python packages together, create an environment. This is similar to using virtualenv:
# In a shell.nix or similar
{ pkgs ? import <nixpkgs> {} }:
let
pythonEnv = pkgs.python3.withPackages (ps: with ps; [
requests
numpy
pandas
matplotlib
]);
in
pkgs.mkShell {
buildInputs = [ pythonEnv ];
}
This creates a Python environment with all specified packages available for import.
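As a usage sketch, with the shell.nix above in the current directory you can enter the environment and use the aggregated interpreter (output will vary with your nixpkgs revision):
# Run a quick check inside the environment defined by shell.nix
nix-shell --run 'python -c "import numpy, pandas; print(numpy.__version__)"'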
Summary
Python packaging in Nix requires understanding module resolution and isolation:
- Use buildPythonPackage for libraries, buildPythonApplication for applications
- Set pyproject = true for modern packages using pyproject.toml
- Always use propagatedBuildInputs for runtime Python dependencies
- Specify the build backend in nativeBuildInputs (setuptools, poetry-core, etc.)
- Use pytestCheckHook for comprehensive test integration
- Check existing nixpkgs Python packages for patterns and examples
Rust
Rust has become increasingly popular for systems programming, offering memory safety without garbage collection. Rust is a favorite among many developers for its safety, speed, and correctness.
Rust and Nix: special concerns
Cargo and dependency management
Rust uses Cargo for dependency management, which downloads dependencies from crates.io at build time. This conflicts with Nix’s requirement for reproducible, offline builds. Nix solves this by:
- Pre-fetching dependencies: Nix downloads all Cargo dependencies before the build
- Vendoring: Dependencies are placed in a local directory that Cargo uses instead of downloading
- Hash verification: A hash of all dependencies ensures reproducibility
This means Rust packages in Nix require an additional hash (cargoHash or cargoSha256)
that covers all dependencies listed in Cargo.lock.
Cargo.lock is essential
Unlike development where Cargo.lock may be gitignored for libraries, Nix requires
Cargo.lock to determine exact dependency versions. If a project doesn’t ship
Cargo.lock, you’ll need to generate it.
Build artifacts and incremental compilation
Cargo caches build artifacts for incremental compilation. In Nix:
- Each build starts fresh (no incremental compilation)
- Build artifacts from dependencies are cached separately
- This ensures reproducibility but may increase build times
Basic Rust package
Rust packages in nixpkgs use rustPlatform.buildRustPackage. Here’s a minimal example:
{ lib, rustPlatform, fetchFromGitHub }:
rustPlatform.buildRustPackage rec {
pname = "ripgrep";
version = "14.0.3";
src = fetchFromGitHub {
owner = "BurntSushi";
repo = "ripgrep";
rev = version;
sha256 = "sha256-...";
};
cargoHash = "sha256-...";
meta = with lib; {
description = "A line-oriented search tool";
homepage = "https://github.com/BurntSushi/ripgrep";
license = licenses.unlicense;
};
}
The buildRustPackage function handles:
- Setting up the Rust toolchain
- Fetching and vendoring Cargo dependencies
- Running cargo build --release
- Installing binaries to $out/bin
Common Rust build options
cargoHash vs cargoSha256
Two attributes are available for specifying the dependency hash:
cargoHash (preferred, modern):
cargoHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
cargoSha256 (legacy, still supported):
cargoSha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
To get the hash, set it to an empty string or fake hash and let Nix tell you the correct one:
cargoHash = ""; # or lib.fakeHash
Cargo build flags
Control how Cargo builds the package:
# Build only specific binaries
cargoBuildFlags = [ "--bin" "main-binary" ];
# Pass features to Cargo
buildFeatures = [ "feature1" "feature2" ];
buildNoDefaultFeatures = true;
# Or use cargoFlags for full control
cargoFlags = [ "--all-features" ];
Testing
Rust tests run by default during the check phase:
# Disable tests entirely
doCheck = false;
# Skip specific tests
checkFlags = [
"--skip test_requires_network"
"--skip integration_test"
];
# Test only specific packages in a workspace
cargoTestFlags = [ "--package" "subcrate" ];
Cargo workspaces
For projects with multiple crates in a workspace:
# Build all workspace members
cargoBuildFlags = [ "--workspace" ];
# Or build specific members
cargoBuildFlags = [ "--package" "crate1" "--package" "crate2" ];
Native dependencies
Rust projects often link to C libraries:
{ lib, rustPlatform, fetchFromGitHub, pkg-config, openssl }:
rustPlatform.buildRustPackage rec {
pname = "cargo-audit";
version = "0.18.3";
src = fetchFromGitHub {
owner = "rustsec";
repo = "rustsec";
rev = "cargo-audit/v${version}";
sha256 = "sha256-...";
};
cargoHash = "sha256-...";
nativeBuildInputs = [ pkg-config ];
buildInputs = [ openssl ];
# Set environment variables for build scripts
env.OPENSSL_NO_VENDOR = 1;
meta = with lib; {
description = "Audit Cargo.lock for security vulnerabilities";
license = licenses.asl20;
};
}
Common issues and fixes
Missing Cargo.lock
If the source doesn’t include Cargo.lock:
postPatch = ''
# Generate Cargo.lock if missing
cargo generate-lockfile
'';
# Or use cargoLock attribute to specify it separately
cargoLock = {
lockFile = ./Cargo.lock;
};
Build script failures
Cargo build scripts (build.rs) sometimes have issues in Nix:
# Disable specific build scripts
postPatch = ''
# Remove problematic build script
rm subcrate/build.rs
'';
# Set environment variables build scripts need
env.SOME_VAR = "value";
# Provide tools build scripts expect
nativeBuildInputs = [ cmake protobuf ];
Linking errors
When Rust code fails to link against system libraries:
# Ensure pkg-config can find libraries
nativeBuildInputs = [ pkg-config ];
buildInputs = [ openssl zlib ];
# Or set environment variables manually
env.PKG_CONFIG_PATH = "${lib.getDev openssl}/lib/pkgconfig";
# For libraries that don't use pkg-config
preBuild = ''
export LIBRARY_PATH="${lib.getLib openssl}/lib:$LIBRARY_PATH"
'';
Vendoring issues with git dependencies
Cargo dependencies from git repositories need special handling:
cargoLock = {
lockFile = ./Cargo.lock;
outputHashes = {
"some-crate-0.1.0" = "sha256-...";
};
};
Tests require network or filesystem access
Disable problematic tests:
# Skip network-dependent tests
checkFlags = [
"--skip test_downloads"
"--skip test_api"
];
# Or disable the check phase entirely
doCheck = false;
Cross-compilation
Rust has excellent cross-compilation support. Specify the target:
# Set in the derivation
cargoExtraArgs = [ "--target" "x86_64-unknown-linux-musl" ];
# Or configure via environment
env.CARGO_BUILD_TARGET = "x86_64-unknown-linux-musl";
Binary size optimization
Reduce binary size by stripping debug info:
# Strip debug symbols (usually done by default)
dontStrip = false;
# Use release profile optimizations
cargoFlags = [ "--profile" "release" ];
# Or configure in Cargo.toml instead
Detailed example
Here’s a comprehensive example of a Rust CLI application with native dependencies, tests, and multiple binaries:
{ lib
, rustPlatform
, fetchFromGitHub
, pkg-config
, installShellFiles
, stdenv
, openssl
, sqlite
, zlib
}:
rustPlatform.buildRustPackage rec {
pname = "example-rust-cli";
version = "2.5.0";
src = fetchFromGitHub {
owner = "example";
repo = "rust-cli";
rev = "v${version}";
hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
};
cargoHash = "sha256-BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=";
nativeBuildInputs = [
pkg-config
installShellFiles
];
buildInputs = [
openssl
sqlite
zlib
];
# Build only the main binary, not helper tools
cargoBuildFlags = [ "--bin" "example-cli" ];
# Enable all features except "unstable"
buildNoDefaultFeatures = true;
buildFeatures = [ "tls" "sqlite" "compression" ];
# Skip tests that require network access
checkFlags = [
"--skip=integration::test_remote_api"
"--skip=sync::test_download"
];
# Don't run doc tests as they may be flaky
cargoTestFlags = [ "--lib" "--bins" ];
# Set environment variables for the build
env = {
OPENSSL_NO_VENDOR = 1;
ZSTD_SYS_USE_PKG_CONFIG = 1;
};
# Generate Cargo.lock if it's missing (usually it should be committed)
# postPatch = ''
# cargo generate-lockfile
# '';
postInstall = ''
# Install shell completions
installShellCompletion --cmd example-cli \
--bash <($out/bin/example-cli completions bash) \
--fish <($out/bin/example-cli completions fish) \
--zsh <($out/bin/example-cli completions zsh)
# Install man page
installManPage docs/example-cli.1
'';
meta = with lib; {
description = "Example Rust CLI tool for file management";
longDescription = ''
A comprehensive command-line tool written in Rust, demonstrating
best practices for packaging Rust applications in nixpkgs including
native dependencies, feature flags, and shell completion installation.
'';
homepage = "https://github.com/example/rust-cli";
changelog = "https://github.com/example/rust-cli/releases/tag/v${version}";
license = licenses.mit;
# Only build on platforms where all dependencies are available
platforms = platforms.unix;
# Mark as broken on specific platforms if needed
# broken = stdenv.isDarwin;
};
}
This example demonstrates:
- Using buildRustPackage for a CLI application
- Native library dependencies (OpenSSL, SQLite, zlib)
- Selective binary building with cargoBuildFlags
- Feature flag management
- Test filtering for problematic tests
- Environment variable configuration for build scripts
- Post-install tasks (shell completions, man pages)
- Comprehensive metadata with platform constraints
Working with Cargo workspaces
For projects with multiple crates:
rustPlatform.buildRustPackage rec {
pname = "workspace-project";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
hash = "sha256-...";
};
cargoHash = "sha256-...";
# Build specific workspace members
cargoBuildFlags = [
"--package" "main-app"
"--package" "helper-tool"
];
# Run tests for all workspace members
cargoTestFlags = [ "--workspace" ];
# Install binaries from multiple crates
postInstall = ''
install -Dm755 target/release/main-app $out/bin/main-app
install -Dm755 target/release/helper-tool $out/bin/helper-tool
'';
meta = with lib; {
description = "Multi-crate Rust workspace";
license = licenses.asl20;
};
}
Using buildRustCrate for advanced cases
For more control over the build process, use buildRustCrate (lower-level):
{ lib, buildRustCrate }:
buildRustCrate {
pname = "mycrate";
version = "1.0.0";
src = ./.;
# Specify dependencies manually
dependencies = [
# ... other crates
];
}
This is rarely needed; buildRustPackage handles most use cases.
Summary
Rust packaging in Nix leverages Cargo while maintaining reproducibility:
- Use rustPlatform.buildRustPackage for most Rust projects
- Specify cargoHash to lock dependency versions
- Ensure Cargo.lock is present in the source
- Use pkg-config and list native dependencies in buildInputs
- Control features with buildFeatures and buildNoDefaultFeatures
- Platform-specific dependencies (like macOS frameworks) should use conditionals
- Check existing nixpkgs Rust packages for patterns and solutions
Go
Go has gained widespread adoption for cloud infrastructure, network services, and command-line tools. The Go toolchain includes built-in dependency management through Go modules, which downloads dependencies from version control systems at build time. Nixpkgs provides robust support for Go projects through helper functions that integrate the Go build system with Nix’s reproducibility requirements.
Go and Nix: special concerns
Go modules and dependency management
Go uses modules for dependency management, specified in go.mod with exact versions
locked in go.sum. During a normal build, the Go toolchain downloads dependencies
from their source repositories (GitHub, GitLab, etc.).
This conflicts with Nix’s offline, reproducible build model. Nix solves this by:
- Pre-fetching dependencies: Nix downloads all Go module dependencies before the build
- Local module cache: Dependencies are placed in a local cache that the Go toolchain uses
- Hash verification: A hash of all dependencies (vendorHash) ensures reproducibility
This means Go packages in Nix require an additional hash (vendorHash or vendorSha256)
that covers all dependencies listed in go.sum.
The vendor directory alternative
Some Go projects include a vendor/ directory with all dependencies committed to the
repository. For these projects, you can skip vendoring by setting vendorHash = null.
Go’s opinionated build system
Go has strong conventions about project structure and import paths:
- Import paths typically match repository URLs (e.g., github.com/user/project)
- The main package defines executable programs
- The module path must match the repository structure
These conventions generally work well with Nix, but occasionally require workarounds for projects with non-standard layouts.
Basic Go package
Go packages in nixpkgs use buildGoModule. Here’s a minimal example:
{ lib, buildGoModule, fetchFromGitHub }:
buildGoModule rec {
pname = "hugo";
version = "0.121.0";
src = fetchFromGitHub {
owner = "gohugoio";
repo = "hugo";
rev = "v${version}";
sha256 = "sha256-...";
};
vendorHash = "sha256-...";
meta = with lib; {
description = "A fast and modern static website engine";
homepage = "https://gohugo.io/";
license = licenses.asl20;
mainProgram = "hugo";
};
}
The buildGoModule function handles:
- Setting up the Go toolchain
- Fetching and caching module dependencies
- Running go build
- Installing binaries to $out/bin
Common Go build options
vendorHash (previously vendorSha256)
vendorHash (preferred, modern):
vendorHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
To get the hash, set it to an empty string or fake hash and let Nix tell you the correct one:
vendorHash = ""; # or lib.fakeHash
For projects with vendored dependencies:
vendorHash = null; # Use committed vendor/ directory
Controlling the build
Build specific packages:
# Build a specific subpackage
subPackages = [ "cmd/mytool" ];
# Build multiple subpackages
subPackages = [ "cmd/tool1" "cmd/tool2" ];
Use the Go module proxy cache instead of vendoring (occasionally needed when a vendor directory cannot be built):
# Fetch dependencies into a GOPROXY-style cache instead of a vendor directory
proxyVendor = true;
Set build flags:
# Pass flags to go build
ldflags = [
"-s" # Strip debug symbols
"-w" # Strip DWARF symbols
"-X main.version=${version}" # Set variables
];
tags = [ "sqlite" "json1" ]; # Build tags
Testing
Go tests run by default during the check phase:
# Disable tests entirely
doCheck = false;
# Skip specific tests
checkFlags = [
"-skip=TestRequiresNetwork"
];
# Exclude test packages
excludedPackages = [
"integration"
"e2e"
];
Module path and package name
For most projects, the module path is detected automatically from go.mod. Occasionally
you need to override it:
# Override detected module path
modRoot = "subdir";
# For multi-module repositories
modRoot = "./sdk/go";
Common issues and fixes
Missing go.sum
If the source doesn’t include go.sum:
postPatch = ''
# Generate go.sum if missing
go mod tidy
'';
However, most projects should include go.sum in their repository.
Build-time dependencies
Some Go packages use cgo and need C libraries:
{ lib, buildGoModule, fetchFromGitHub, pkg-config, sqlite }:
buildGoModule rec {
pname = "tool-with-cgo";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
sha256 = "sha256-...";
};
vendorHash = "sha256-...";
nativeBuildInputs = [ pkg-config ];
buildInputs = [ sqlite ];
meta = with lib; {
description = "Tool using cgo with SQLite";
license = licenses.mit;
};
}
Tests require network or external resources
Disable problematic tests:
# Skip tests that need network
checkFlags = [
"-skip=TestDownload|TestAPI"
];
# Or disable all tests
doCheck = false;
# Exclude entire test packages
excludedPackages = [ "integration" "e2e" ];
vendorHash changes frequently
The vendorHash must be updated whenever the dependency set changes:
vendorHash = "sha256-..."; # Update this when go.mod or go.sum changes
Projects that commit a vendor/ directory (and set vendorHash = null) avoid this churn entirely.
Subpackage builds
For projects with multiple commands:
# Build all binaries in cmd/
subPackages = [ "cmd/..." ];
# Build specific binaries
subPackages = [ "cmd/server" "cmd/client" ];
# Install with custom names
postInstall = ''
mv $out/bin/server $out/bin/myserver
'';
Setting version information
Go projects often use linker flags to embed version info:
ldflags = [
"-s" "-w"
"-X main.version=${version}"
"-X main.commit=${src.rev}"
"-X main.date=1970-01-01T00:00:00Z"
];
CGO_ENABLED issues
Some packages need to enable or disable cgo explicitly:
# Disable cgo for static binaries
env.CGO_ENABLED = "0";
# Or enable it when needed
env.CGO_ENABLED = "1";
buildInputs = [ gcc ];
Module replace directives
Projects with replace directives in go.mod may need special handling:
postPatch = ''
# Remove replace directives that point to local paths
substituteInPlace go.mod \
--replace "replace example.com/module => ../module" ""
'';
Cross-compilation
Go has excellent cross-compilation support:
# Cross-compile for different platforms
# Nix handles this automatically based on the target platform
# Just ensure CGO is disabled if cross-compiling
env.CGO_ENABLED = "0";
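Rather than setting target variables by hand, cross builds in nixpkgs are usually driven by evaluating the package with a cross package set. A hedged sketch, assuming the derivation above lives in my-go-tool.nix:
# Build the same derivation for 64-bit ARM Linux via the cross package set
pkgsCross.aarch64-multiplatform.callPackage ./my-go-tool.nix { }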
Detailed example
Here’s a comprehensive example of a Go CLI application with multiple binaries, cgo dependencies, and version embedding:
{ lib
, buildGoModule
, fetchFromGitHub
, pkg-config
, installShellFiles
, stdenv
, darwin
, sqlite
, zlib
}:
buildGoModule rec {
pname = "example-go-tool";
version = "3.2.0";
src = fetchFromGitHub {
owner = "example";
repo = "go-tool";
rev = "v${version}";
hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
};
vendorHash = "sha256-BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=";
nativeBuildInputs = [
pkg-config
installShellFiles
];
buildInputs = [
sqlite
zlib
] ++ lib.optionals stdenv.isDarwin [
darwin.apple_sdk.frameworks.Security
];
# Build only the main CLI, not helper tools
subPackages = [ "cmd/example-tool" ];
# Build tags for optional features
tags = [ "sqlite" "json1" ];
# Set version info and optimization flags
ldflags = [
"-s"
"-w"
"-X main.version=${version}"
"-X main.commit=${src.rev}"
"-X main.buildDate=1970-01-01T00:00:00Z"
];
# Skip tests that require network access
checkFlags = [
"-skip=TestHTTPClient|TestDownload"
];
# Don't test integration packages
excludedPackages = [
"integration"
"testutil"
];
# Some tests need CGO
preCheck = ''
export CGO_ENABLED=1
'';
postInstall = ''
# Rename binary if needed
mv $out/bin/example-tool $out/bin/etool
# Install shell completions
installShellCompletion --cmd etool \
--bash <($out/bin/etool completion bash) \
--fish <($out/bin/etool completion fish) \
--zsh <($out/bin/etool completion zsh)
# Install man page if available
if [ -f docs/etool.1 ]; then
installManPage docs/etool.1
fi
'';
meta = with lib; {
description = "Example Go CLI tool for data processing";
longDescription = ''
A comprehensive command-line tool written in Go, demonstrating
best practices for packaging Go applications in nixpkgs including
cgo dependencies, build tags, version embedding, and shell completions.
'';
homepage = "https://github.com/example/go-tool";
changelog = "https://github.com/example/go-tool/releases/tag/v${version}";
license = licenses.mit;
maintainers = with maintainers; [ ];
mainProgram = "etool";
platforms = platforms.unix;
};
}
This example demonstrates:
- Using buildGoModule for a CLI application
- Cross-platform dependencies (macOS frameworks)
- Native library dependencies via cgo (SQLite, zlib)
- Selective binary building with subPackages
- Build tags for conditional compilation
- Version embedding with ldflags
- Test filtering and exclusions
- CGO configuration for tests
- Post-install tasks (renaming, shell completions, man pages)
- Comprehensive metadata
Building Go libraries
For Go libraries (not executables):
buildGoModule rec {
pname = "go-library";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = pname;
rev = "v${version}";
hash = "sha256-...";
};
vendorHash = "sha256-...";
# Don't install any binaries
subPackages = [ ];
# Libraries should run tests
doCheck = true;
# Make source available for other packages
postInstall = ''
mkdir -p $out/share/go
cp -r . $out/share/go/
'';
meta = with lib; {
description = "A reusable Go library";
license = licenses.asl20;
};
}
Most Go libraries in nixpkgs are included for their binaries or as dependencies, not installed directly.
Using buildGo121Module and other versioned variants
For projects requiring specific Go versions:
# Use Go 1.21 specifically
buildGo121Module rec {
# ... same as buildGoModule
}
# For the latest Go version
buildGoModule rec {
# Uses the default Go version for nixpkgs
}
However, buildGoModule typically uses a recent Go version and works for most projects.
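If you need a toolchain that has no dedicated variant, buildGoModule can also be instantiated with a specific Go package via override. A hedged sketch (go_1_22 is an example attribute; available versions depend on your nixpkgs revision):
(buildGoModule.override { go = go_1_22; }) rec {
  # ... same arguments as buildGoModule
}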
Handling monorepos
For projects with multiple modules:
buildGoModule rec {
pname = "tool-from-monorepo";
version = "1.0.0";
src = fetchFromGitHub {
owner = "example";
repo = "monorepo";
rev = "v${version}";
hash = "sha256-...";
};
# Specify subdirectory containing the module
modRoot = "tools/mytool";
vendorHash = "sha256-...";
# Build from the module root
subPackages = [ "." ];
meta = with lib; {
description = "Tool from a monorepo";
license = licenses.mit;
};
}
Summary
Go packaging in Nix leverages Go modules while maintaining reproducibility:
- Use buildGoModule for most Go projects
- Specify vendorHash to lock dependency versions (or null for vendored)
- Use subPackages to build specific binaries from the project
- Set ldflags for version embedding and optimization
- Handle cgo dependencies through nativeBuildInputs and buildInputs
- Use tags for conditional compilation features
- Cross-platform dependencies should use conditionals for macOS frameworks
- Check existing nixpkgs Go packages for patterns and solutions
Trivial Builders
Not every package requires compiling source code. Nixpkgs provides a set of
“trivial builders” for common cases where you need to wrap a script, write a
small file, or produce a derivation from a simple command. These builders sit
on top of stdenv.mkDerivation but expose a much narrower interface, reducing
boilerplate and making intent explicit.
runCommand
runCommand creates a derivation from a single shell command. It is the
simplest way to produce a store path from an arbitrary build step.
{ runCommand }:
runCommand "my-output" { } ''
mkdir -p $out
echo "hello from nix" > $out/hello.txt
''
The three arguments are:
- name — the derivation name
- env — an attribute set of environment variables and derivation attributes
- buildCommand — the shell script that produces $out
Passing environment variables
Any attribute in the second argument that is not a recognised derivation attribute is passed to the build as an environment variable:
runCommand "greeting" { who = "world"; } ''
mkdir -p $out
echo "hello, $who" > $out/greeting.txt
''
writeShellApplication
writeShellApplication produces a shell script wrapped with bash and
optionally checked with shellcheck at build time. It automatically sets
-euo pipefail and patches the PATH so that referenced packages are
available without absolute paths.
{ writeShellApplication, curl, jq }:
writeShellApplication {
name = "fetch-data";
runtimeInputs = [ curl jq ];
text = ''
curl -s "https://api.example.com/data" | jq '.results[]'
'';
}
The resulting derivation installs the script to $out/bin/<name>.
Options
| Attribute | Description |
|---|---|
| name | Name of the script and the produced binary |
| text | The body of the shell script |
| runtimeInputs | Packages added to PATH at runtime |
| checkPhase | Override the default shellcheck invocation |
| excludeShellChecks | List of shellcheck codes to suppress (e.g. [ "SC2016" ]) |
| meta | Standard nixpkgs meta attributes |
writeShellScript and writeShellScriptBin
When you need a script but do not want shellcheck or the automatic PATH
rewriting that writeShellApplication provides, use writeShellScript or
writeShellScriptBin.
writeShellScript writes the script to $out directly:
{ writeShellScript }:
writeShellScript "my-hook" ''
echo "running hook"
exec "$@"
''
writeShellScriptBin places it at $out/bin/<name>, matching the layout
expected by buildEnv and similar tools:
{ writeShellScriptBin }:
writeShellScriptBin "greet" ''
echo "hello, ''${1:-world}"
''
The difference from writeShellApplication is that these do not rewrite
PATH, do not enforce set -euo pipefail, and do not run shellcheck.
writePython3Bin
writePython3Bin produces a Python 3 script installed to $out/bin/<name>.
The interpreter is patched in at build time so the script is fully
self-contained in the Nix store.
{ writePython3Bin, python3Packages }:
writePython3Bin "check-yaml" {
libraries = [ python3Packages.pyyaml ];
} ''
import sys
import yaml
with open(sys.argv[1]) as f:
    data = yaml.safe_load(f)
print(f"Loaded {len(data)} top-level keys")
''
The three arguments are:
- name — the binary name
- options — an attribute set; libraries lists Python packages to make available
- text — the Python source
writePython3
writePython3 is the same but writes the script directly to $out rather
than $out/bin/<name>:
writePython3 "helper.py" {
libraries = [ python3Packages.requests ];
} ''
import requests
print(requests.get("https://example.com").status_code)
''
writeText and writeTextFile
writeText creates a plain text file in the Nix store. It is one of the
simplest builders and is useful for configuration files, templates, or any
static text content.
{ writeText }:
writeText "config.json" ''
{
"debug": false,
"port": 8080
}
''
This produces a store path like /nix/store/…-config.json containing exactly
the provided text.
writeTextFile is the underlying function and accepts an attribute set with
more options:
{ writeTextFile }:
writeTextFile {
name = "my-config";
text = "key=value";
destination = "/etc/myapp/config"; # path inside $out
executable = false;
checkPhase = ''
grep -q "key" $out/etc/myapp/config
'';
}
| Attribute | Description |
|---|---|
| name | Derivation name |
| text | File contents |
| destination | Path inside $out (default: $out) |
| executable | Whether to set the executable bit |
| checkPhase | Optional validation script |
| meta | Standard nixpkgs meta attributes |
writeTextDir
writeTextDir is a convenience wrapper that places the text file at a
specified path within $out:
writeTextDir "share/myapp/config.toml" ''
[server]
port = 8080
''
The result is a derivation whose $out/share/myapp/config.toml contains the
text. This layout works well with symlinkJoin or buildEnv when assembling
packages from multiple pieces.
symlinkJoin
symlinkJoin merges multiple derivations into a single store path by creating
symlinks. This is useful for combining a program with extra files, or for
assembling a custom environment from several packages.
{ symlinkJoin, hello, writeTextDir }:
symlinkJoin {
name = "hello-with-config";
paths = [
hello
(writeTextDir "share/hello/config" "greeting=hi")
];
}
Adding wrappers
A common pattern is to use symlinkJoin together with makeWrapper to
produce a wrapped binary:
{ symlinkJoin, hello, makeWrapper }:
symlinkJoin {
name = "hello-wrapped";
paths = [ hello ];
buildInputs = [ makeWrapper ];
postBuild = ''
wrapProgram $out/bin/hello \
--set GREETING "hi"
'';
}
buildEnv
buildEnv creates a single store path that merges a set of packages by
symlinking their contents. It is similar to symlinkJoin but offers more
control over conflict resolution and is the mechanism behind nix-env profile
generations.
{ buildEnv, git, vim, ripgrep }:
buildEnv {
name = "my-tools";
paths = [ git vim ripgrep ];
}
Options
| Attribute | Description |
|---|---|
| name | Derivation name |
| paths | List of derivations to merge |
| pathsToLink | Subdirectories to include (default: everything) |
| extraOutputsToInstall | Extra package outputs to link in (e.g. [ "dev" "man" ]) |
| ignoreCollisions | If true, silently ignore conflicting files (default: false) |
| postBuild | Shell commands to run after the environment is assembled |
Linking only specific subdirectories
pathsToLink restricts which directories are symlinked. This keeps the
resulting environment lean when you only need binaries, for example:
buildEnv {
name = "bin-only";
paths = [ git vim ripgrep ];
pathsToLink = [ "/bin" ];
}
Handling collisions
By default, buildEnv will throw an error if two packages provide the same
path. Set ignoreCollisions = true to suppress this, taking the first match:
buildEnv {
name = "permissive-env";
paths = [ packageA packageB ];
ignoreCollisions = true;
}
Including extra outputs
Packages can have multiple outputs (see the multiple outputs chapter). By
default only the default output is linked. Use extraOutputsToInstall to pull
in additional ones:
buildEnv {
name = "dev-env";
paths = [ openssl zlib ];
extraOutputsToInstall = [ "dev" ];
}
This is useful when building a development environment where you need headers alongside libraries.
buildEnv vs symlinkJoin
buildEnv and symlinkJoin are closely related. The main practical
differences are:
- buildEnv supports pathsToLink, extraOutputsToInstall, and ignoreCollisions; symlinkJoin does not
- symlinkJoin supports a postBuild hook and accepts buildInputs for tools like makeWrapper; buildEnv also has postBuild but is more commonly used for pure path merging
For simple merging, either works. Prefer buildEnv when you need fine-grained
control over which directories are linked or when assembling user environments.
Prefer symlinkJoin when you need to run wrapProgram or other post-assembly
steps.
Choosing the right builder
| Use case | Builder |
|---|---|
| Arbitrary build step producing $out | runCommand |
| Shell script with runtime dependencies | writeShellApplication |
| Shell script without PATH rewriting | writeShellScriptBin |
| Python script | writePython3Bin |
| Static text file | writeText / writeTextFile |
| Text file at a specific path | writeTextDir |
| Merge derivations with collision control | buildEnv |
| Merge derivations with post-build wrapping | symlinkJoin |
Trivial builders are often the right tool when you need to glue packages
together, wrap upstream software with configuration, or produce small utilities
without a full build system. Because they share the same $out convention as
any other Nix derivation, the results compose naturally with the rest of
nixpkgs.
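As a closing sketch of that composability, here is a small environment that merges an upstream package with an ad-hoc script built by a trivial builder (the names and search pattern are arbitrary):
{ buildEnv, writeShellScriptBin, ripgrep }:
buildEnv {
  name = "search-tools";
  paths = [
    ripgrep
    # A tiny helper script, merged into the same bin/ as ripgrep
    (writeShellScriptBin "todos" ''
      exec ${ripgrep}/bin/rg "TODO|FIXME" "$@"
    '')
  ];
}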
Using Nixpkgs
Nixpkgs is best thought of as a large database of expert knowledge on how to build software. Each entry is a Nix expression — a recipe that describes where to fetch source, what dependencies are needed, and how to compile and install the result. At the time of writing, nixpkgs contains definitions for over 100,000 packages, making it the largest software repository in existence by package count.
This expert knowledge is just a collection of Nix files in a Git repository.
There is no registry like you would find in a traditional package manager: importing nixpkgs evaluates the nix files
and hands you back an attribute set of derivations. Everything in the previous
chapters — derivations, stdenv.mkDerivation, build helpers — is what nixpkgs
is made of.
Because nixpkgs is a source repository, it has to be fetched before it can be used. This is where several distinct concerns come together:
- Where does nixpkgs come from? A channel, a pinned tarball, a local checkout, or a flake input are all valid answers, each with different tradeoffs around reproducibility and freshness.
- Which revision do you use? Nixpkgs is developed continuously. The revision you use determines which package versions are available and which bugs are present. Controlling this is called pinning.
- How do you customize it? Nixpkgs accepts a config argument for high-level policy (such as allowing unfree packages) and an overlays argument for modifying or extending the package set.
This chapter works through each of these concerns in turn. We start with how to pin and configure nixpkgs itself, cover flakes as the modern answer to pinning and hermetic builds, and finish with the customisation mechanisms: config and overlays. This control also extends to the alternative package sets that nixpkgs produces for cross-compilation and static linking.
Importing Nixpkgs
Nixpkgs is a function. When you import it you get back a function that
accepts a configuration argument and returns a package set:
let
pkgs = import <nixpkgs> { };
in
pkgs.hello
The { } is the configuration argument. Passing an empty set is fine — nixpkgs
supplies sensible defaults for everything. The result, pkgs, is the attribute
set of packages you are already familiar with from the rest of this book.
The configuration argument
The configuration argument accepts three main attributes:
import <nixpkgs> {
system = "x86_64-linux";
config = { ... };
overlays = [ ... ];
}
system
system tells nixpkgs which platform to build packages for. It defaults to
the platform of the machine running Nix using builtins.currentSystem. You usually do not need to set this
explicitly, but it is important when producing packages for a different
architecture or when the same nixpkgs import is shared across machines:
pkgs = import <nixpkgs> { system = "aarch64-linux"; };
config
config is an attribute set of high-level policy flags. The most commonly
needed ones are:
pkgs = import <nixpkgs> {
config = {
# Allow packages with an unfree licence to be built
allowUnfree = true;
# Allow packages marked as broken (likely marked broken for a good reason)
allowBroken = false;
# Allow specific packages with known security vulnerabilities
permittedInsecurePackages = [
"openssl-1.1.1w"
];
};
};
Config is covered in full in the Nixpkgs Config section.
overlays
overlays is a list of functions that modify or extend the package set. They
are the primary mechanism for adding your own packages or overriding existing
ones:
pkgs = import <nixpkgs> {
overlays = [ (self: super: { hello = super.hello.override { ... }; }) ];
};
Overlays are covered in full in the Overlays section.
What import returns
The result of import <nixpkgs> { } is a large attribute set. The attributes
you will reach for most often are:
| Attribute | Description |
|---|---|
| pkgs.<name> | Individual packages (pkgs.git, pkgs.python3, …) |
| pkgs.lib | The nixpkgs standard library of Nix functions |
| pkgs.stdenv | The default build environment |
| pkgs.callPackage | Function for calling package expressions with nixpkgs dependencies |
| pkgs.buildEnv | Merge multiple packages into one store path |
Importing multiple times
Each call to import <nixpkgs> { ... } produces an independent package set.
This is how you can have two different configurations of nixpkgs in the same
expression — for example, one with allowUnfree = true for a specific package
and one without for everything else:
let
pkgs = import <nixpkgs> { };
unfreePkgs = import <nixpkgs> { config.allowUnfree = true; };
in
{
environment.systemPackages = [
pkgs.git
unfreePkgs.steam
];
}
Note: Importing a new package set does come with time and memory costs. Try to avoid importing additional package sets.
In practice, most people simply import a single package set with allowUnfree = true, since free packages are unaffected by enabling this flag.
Channels and Branches
“Channel” is an overloaded term in the Nix ecosystem. It refers to two related
but distinct things: the legacy nix-channel mechanism for subscribing to a
nixpkgs feed, and the release branches that those feeds track. Understanding
both is useful because the branch names appear everywhere — in flake inputs, in
pinning tools, and in documentation — regardless of whether you use the legacy
channel mechanism at all.
Release branches
Nixpkgs is developed on a rolling master branch and periodically stabilised
into versioned release branches. The main branches you will encounter are:
| Branch | Tracks | Character |
|---|---|---|
| nixpkgs-unstable | master | Rolling, latest packages, occasional breakage |
| nixos-unstable | master | Same as above, extra NixOS-specific CI |
| nixos-YY.MM | Stable release (e.g. nixos-24.11) | Stable, security backports only |
| nixpkgs-YY.MM-darwin | Stable release, Darwin builds | Stable, gated on macOS CI passing |
NixOS stable releases follow a YY.MM scheme and are cut twice a year, in May
and November. Once a release branch is cut, it receives only security and
critical bug fixes — package versions are frozen.
nixpkgs-unstable vs nixos-unstable
Both branches track master but are bumped independently by Hydra, the NixOS
build infrastructure. A branch is only advanced once all of its required builds
succeed, so either branch may lag behind master by hours or days.
The practical difference is that nixos-unstable requires additional NixOS
system-level tests to pass before advancing, which means it can lag behind
nixpkgs-unstable but offers slightly higher confidence for NixOS
configurations. For non-NixOS workflows — development shells, macOS, WSL —
nixpkgs-unstable is preferred.
The legacy nix-channel mechanism
Before flakes and pinning tools, nix-channel was how users subscribed to a
nixpkgs branch. It works by pointing at a URL that Hydra keeps updated to the
latest successful build of a branch:
https://nixos.org/channels/nixos-unstable
https://nixos.org/channels/nixos-24.11
Managing channels looks like this:
# Subscribe to a channel
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
# Download the latest version of all subscribed channels
nix-channel --update
# List current channels
nix-channel --list
After an update, <nixpkgs> in Nix expressions resolves to the downloaded
channel. The major drawback is that nix-channel --update silently changes
the version of nixpkgs used by every expression on the machine, making builds
non-reproducible between updates. For this reason, channels have largely been
superseded by explicit pinning and flakes, which are covered in the following
sections.
Choosing a branch
Unstable (nixpkgs-unstable or nixos-unstable) is the right choice when
you want the latest package versions and are comfortable with occasional
breakage. It is the most popular choice among individual developers and is
generally more stable in practice than its name suggests, since Hydra gates
advancement on CI passing.
Stable (nixos-24.11, etc.) is the right choice when predictability
matters more than freshness — production servers, shared developer environments,
or anywhere that an unexpected package update would be disruptive.
Mixing both is also possible and common: pin most of your configuration to a stable branch, then selectively pull individual packages from unstable when you need a newer version.
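A minimal sketch of this pattern, assuming a second channel or pin named nixpkgs-unstable is available alongside the stable one (the package choices are arbitrary):
let
  pkgs = import <nixpkgs> { };               # stable pin or channel
  unstable = import <nixpkgs-unstable> { };  # unstable pin or channel
in
{
  environment.systemPackages = [
    pkgs.git            # from the stable set
    unstable.neovim     # newer version pulled from unstable
  ];
}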
Pinning Nixpkgs
When you write import <nixpkgs> {}, the version of nixpkgs you get depends
on whatever channel happens to be installed on the machine at the time. Two
developers running the same expression on different days, or on different
machines, may get different packages. Builds stop being reproducible.
Pinning solves this by recording an exact nixpkgs revision — a specific Git commit — alongside your code. Anyone who checks out your repository gets the same nixpkgs, regardless of what channels they have configured.
Avoid pkgs fetchers for pinning
It may be tempting to pin nixpkgs using a fetcher from the package set itself,
such as pkgs.fetchFromGitHub. This is strongly discouraged. Package set
fetchers are derivations — they are built during the build phase, not during
evaluation. Using one to fetch nixpkgs means nixpkgs cannot be imported until
that derivation has been built, which forces Nix to perform a build in the
middle of evaluation. This is called Import From Derivation (IFD) and it
significantly increases evaluation time, prevents some evaluation optimisations,
and can break tooling that assumes evaluation is pure. See the Nix manual
section on IFD
for a full explanation.
Conceptually, you would also need to import nixpkgs to be able to use its build-time fetchers, so you would end up importing multiple copies of nixpkgs unnecessarily.
Always use builtins fetchers or a pinning tool for nixpkgs itself. The
builtins fetchers run at evaluation time and do not introduce IFD.
Pinning with fetchTarball
The simplest approach that requires no external tooling is builtins.fetchTarball.
GitHub exposes a tarball of any commit, and builtins.fetchTarball verifies it
against a hash:
let
nixpkgs = builtins.fetchTarball {
url = "https://github.com/NixOS/nixpkgs/archive/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293.tar.gz";
sha256 = "0zb9sic985plq8mfs7sfibgbjapzqqxnqzzcsk41fnfxn0bh2qmv";
};
pkgs = import nixpkgs { };
in
pkgs.hello
The sha256 hash ensures the tarball has not changed. If it does not match,
evaluation fails immediately. To get the hash for a new commit, set it to an
empty string and let Nix report the correct value:
sha256 = ""; # Nix will print the correct hash and fail
To update the pin, replace the commit in url with the new one and update the
hash.
Pinning tools
Managing fetchTarball pins by hand becomes tedious across multiple inputs.
Several tools exist to automate this:
npins
npins is a minimal, file-based pinning tool. It stores pins in an
npins/ directory and generates an npins/default.nix that you can import:
# Initialise npins in a new project
npins init
# Add nixpkgs at a specific branch
npins add github NixOS nixpkgs --branch nixpkgs-unstable
# Update all pins to their latest commits
npins update
# Update a single pin
npins update nixpkgs
# default.nix
let
sources = import ./npins;
pkgs = import sources.nixpkgs { };
in
pkgs.hello
Flakes
Flakes handle pinning natively through flake.lock, which records the exact
revision of every input. This is covered in full in the next section, but if
you are starting a new project and are comfortable enabling experimental
features, flakes are the most integrated pinning solution available.
Choosing an approach
| Approach | Requires | Best for |
|---|---|---|
| fetchTarball | Nothing | Single input, minimal dependencies |
| npins / niv (deprecated) | The tool installed | Multiple inputs, no flakes |
| Flakes | nix with experimental features | New projects, full reproducibility |
The key property all three share is that the pin is a file committed to your
repository. Updating nixpkgs becomes an explicit, reviewable change rather than
a silent side effect of nix-channel --update.
Flakes
Flakes are a Nix feature that standardizes how Nix projects declare their
inputs and outputs. They solve two problems at once: pinning (every input is
locked to an exact revision) and discoverability (every flake exposes a
uniform schema of outputs). A flake is just a repository or directory
containing a flake.nix file.
Flakes have been available behind an experimental feature flag since 2021 and remain technically experimental. Despite this status, they have been widely adopted and are now the dominant way to structure Nix projects. The “experimental” label reflects ongoing design work rather than instability in practice. There has been ongoing community debate about the pace of stabilization, but for new projects flakes are the recommended approach.
Enabling flakes
Flakes require the nix-command and flakes experimental features. Add the
following to /etc/nix/nix.conf or ~/.config/nix/nix.conf:
experimental-features = nix-command flakes
On NixOS this is done declaratively:
nix.settings.experimental-features = [ "nix-command" "flakes" ];
The flake.nix structure
A flake.nix is a Nix file that returns an attribute set with two required
keys:
{
description = "A short description of the flake";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
};
outputs = { self, nixpkgs }: {
# outputs go here
};
}
inputs
inputs declares the flake’s dependencies. Each input is fetched and locked
automatically. The most common input is nixpkgs:
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
};
Input URLs follow the format github:<owner>/<repo>/<branch-or-commit>. Other
supported URL schemes include gitlab:, sourcehut:, git+https://, and
plain path: references to local directories.
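A few illustrative input declarations (the mylib and scratch names and their URLs are placeholders; the nixpkgs commit is the one used for pinning earlier in this chapter):
inputs = {
  # Pin directly to a commit instead of a branch
  nixpkgs.url = "github:NixOS/nixpkgs/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293";
  # A plain Git remote over HTTPS
  mylib.url = "git+https://example.com/mylib.git";
  # A local directory, useful while developing a dependency
  scratch.url = "path:./subflake";
};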
outputs
outputs is a function that receives the evaluated inputs and returns an
attribute set of whatever the flake produces. The argument names must match the
input names:
outputs = { self, nixpkgs }: {
packages.x86_64-linux.hello = nixpkgs.legacyPackages.x86_64-linux.hello;
devShells.x86_64-linux.default = ...;
nixosConfigurations.myhost = ...;
};
The output schema is not strictly enforced, but the Nix tooling recognises certain well-known attributes:
| Output attribute | Description |
|---|---|
| packages.<system>.<name> | Buildable packages (nix build) |
| devShells.<system>.<name> | Development shells (nix develop) |
| apps.<system>.<name> | Runnable programs (nix run) |
| checks.<system>.<name> | Test derivations (nix flake check) |
| nixosConfigurations.<name> | NixOS system configurations |
| nixosModules.<name> | Reusable NixOS modules |
| overlays.<name> | Nixpkgs overlays |
| lib | Library functions |
Accessing nixpkgs from a flake
Within outputs, nixpkgs is accessed through nixpkgs.legacyPackages:
outputs = { self, nixpkgs }:
let
pkgs = nixpkgs.legacyPackages.x86_64-linux;
in
{
packages.x86_64-linux.mytool = pkgs.callPackage ./mytool.nix { };
};
legacyPackages exists because the full nixpkgs package set does not fit
neatly into the packages output schema (which expects one derivation per
attribute, whereas nixpkgs contains nested sets). It is the standard way to
access nixpkgs packages from a flake output.
To pass config or overlays, import nixpkgs explicitly:
pkgs = import nixpkgs {
system = "x86_64-linux";
config = { allowUnfree = true; };
overlays = [ self.overlays.default ];
};
flake.lock
When you first run any nix command against a flake, Nix resolves all inputs
to their current revisions and writes flake.lock:
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1745000000,
"narHash": "sha256-...",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a3a3dda3bacf61e8a39258a0ed9c924eeca8e293",
"type": "github"
}
}
}
}
The lockfile pins every input to an exact commit and hash. Commit flake.lock
alongside flake.nix so that everyone using your repository gets the same
nixpkgs revision.
Updating pins
# Update all inputs to their latest revisions
nix flake update
# Update a single input
nix flake update nixpkgs
# Check what changed
git diff flake.lock
Updating is an explicit, reviewable operation — a git diff on flake.lock
shows exactly which commits changed.
follows
When a flake has multiple inputs that each depend on nixpkgs, you can end up
with several different nixpkgs versions in your closure. The follows keyword
redirects an input’s dependency to one you control:
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
home-manager = {
url = "github:nix-community/home-manager";
inputs.nixpkgs.follows = "nixpkgs"; # use our nixpkgs, not home-manager's
};
};
This ensures all inputs share a single nixpkgs version, reducing the number of packages that need to be built or downloaded.
Common issues
Pure evaluation
Flakes are evaluated in pure mode by default: access to <nixpkgs> angle
brackets, builtins.currentSystem, and impure environment variables is
blocked. Code that relies on these will fail under flakes and needs to be
updated to receive values explicitly through function arguments.
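A sketch of what that change looks like (the package choice is arbitrary):
# Impure — fails inside a flake because <nixpkgs> relies on NIX_PATH:
# pkgs = import <nixpkgs> { };

# Pure — nixpkgs arrives as an explicit flake input:
outputs = { self, nixpkgs }: {
  packages.x86_64-linux.default = nixpkgs.legacyPackages.x86_64-linux.hello;
};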
System argument
Because flakes evaluate purely, builtins.currentSystem is unavailable.
Outputs must be defined per-system explicitly, or a helper such as
flake-utils.lib.eachDefaultSystem can generate the boilerplate:
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let pkgs = nixpkgs.legacyPackages.${system};
in {
packages.default = pkgs.hello;
}
);
Experimental status
Because flakes remain experimental, their interface could change before stabilization. In practice the core schema has been stable for several years and breaking changes are unlikely.
Nixpkgs Config
The config argument to nixpkgs is an attribute set of high-level policy
values that affect the entire package set. It controls things like which
licence categories are permitted, which packages are allowed despite known
issues, and how certain packages are built by default.
Internally, nixpkgs evaluates config through evalModules, the same module
system covered in the NixOS chapter. This means every config value has a
declared type and default, and the merged result is always available at
pkgs.config — you can inspect it at any time to see what policy is in effect.
Passing config
Config is passed at import time:
pkgs = import nixpkgs {
system = "x86_64-linux";
config = {
allowUnfree = true;
};
};
On NixOS, it is set declaratively and applies to the system’s nixpkgs instance:
nixpkgs.config.allowUnfree = true;
Common config options
allowUnfree
Nixpkgs marks packages with non-free licences as unfree. By default, attempting
to build them raises an error. Setting allowUnfree = true permits all unfree
packages:
config.allowUnfree = true;
For finer control, allowUnfreePredicate accepts a function that receives the
package and returns a bool. This lets you allowlist specific packages rather
than all unfree software:
config.allowUnfreePredicate = pkg: builtins.elem (pkg.pname or pkg.name) [
"steam"
"nvidia-x11"
];
allowBroken
Packages marked broken = true in nixpkgs are expected to fail to build.
Attempting to use them raises an error by default. This flag overrides that:
config.allowBroken = true;
This should be used with caution — packages are marked broken for a reason. Prefer finding an alternative or fixing the package rather than enabling this globally.
permittedInsecurePackages
Packages with known security vulnerabilities are marked insecure and blocked by default. Individual packages can be permitted by name and version:
config.permittedInsecurePackages = [
"openssl-1.1.1w"
"python-2.7.18.8"
];
allowAliases
Nixpkgs maintains aliases for renamed or removed packages so that old attribute
names still evaluate: renamed packages resolve to their new attribute, and
removed packages raise an error with a helpful message. Setting
allowAliases = false removes these aliases from the package set, which can
speed up evaluation in large configurations:
config.allowAliases = false;
packageOverrides
packageOverrides is an older mechanism for modifying packages that predates
overlays. It accepts a function from the final package set to an attribute set
of overrides:
config.packageOverrides = pkgs: {
hello = pkgs.hello.overrideAttrs (old: {
doCheck = false;
});
};
For anything beyond trivial one-off overrides, prefer overlays — they compose correctly, are applied in a defined order, and are the modern standard. See the Overlays section.
Inspecting the active config
Because config is evaluated as a module, the full resolved configuration is
always available on pkgs.config:
nix-repl> pkgs.config.allowUnfree
false
nix-repl> pkgs.config.permittedInsecurePackages
[ ]
This is useful when debugging unexpected build errors — checking pkgs.config
confirms whether a policy flag has been applied as intended.
Common issues
Unfree error in a flake
Flakes evaluate purely, so nixpkgs config cannot be read from ~/.config/nixpkgs/config.nix.
You must pass it explicitly when importing nixpkgs:
pkgs = import nixpkgs {
system = "x86_64-linux";
config.allowUnfree = true;
};
Forgetting this is the most common reason an unfree package builds in a non-flake context but fails inside a flake.
Overlays
Most package managers treat their package set as a fixed database — you can install what is in it, but modifying or replacing a package requires forking the manager or maintaining a separate repository. Nixpkgs takes a different approach: the entire package set is a Nix value, and overlays let you transform it with ordinary Nix functions. You can add new packages, replace existing ones, change build options, or patch sources, and the result is a first-class package set indistinguishable from the original.
What an overlay is
An overlay is a function that takes two arguments and returns an attribute set of packages:
final: prev: {
# packages to add or replace
}
- prev is the package set before this overlay was applied. Use it to access the original version of a package you are modifying. When changing a package, you always want to use prev to avoid infinite recursion.
- final is the package set after all overlays have been applied. Use it when you need a dependency that may itself have been modified by another overlay. In almost all cases you want to consume packages from final, generally in the form of final.callPackage.
The distinction matters: reaching for prev.somePackage gives you the
unmodified package; reaching for final.somePackage gives you the
post-overlay version. Using final for your own package’s dependencies ensures
the full overlay chain is respected.
How overlays are applied
Nixpkgs applies overlays through a fixed-point fold. Each overlay’s result is
merged with the package set and the accumulation is passed as prev to the
next overlay. Once all overlays have been applied, the fully merged set is fed back as
final. This is the same fixed-point mechanism that makes the NixOS module
system work — apparent circular references resolve correctly because Nix is
lazy.
The practical consequence is that overlays compose: you can have multiple overlays that each build on each other’s results, and the order they are applied in is well-defined.
The (simplified) implementation of overlays:
{ lib, overlays }:
let
  # Fold over all overlays so that each is applied on top of the previous result.
  # The fold produces a single recursive function `self: { ... }`.
  toFix = lib.foldl' (lib.flip lib.extends) (self: { }) overlays;
  # Apply the recursive function to itself, converging to a "fixed point"; works because nix is lazy
  fix = f: let x = f x; in x;
  # "Fixing" results in a single package set, which we commonly refer to as "pkgs"
  pkgs = fix toFix;
in
pkgs
in
pkgs
Adding a package
The simplest overlay adds a package that is not in nixpkgs:
final: prev: {
myapp = final.callPackage ./myapp.nix { };
}
Using final.callPackage (not prev.callPackage) ensures that myapp’s
dependencies are resolved from the fully overlaid package set.
Modifying an existing package
To change build options on an existing package, use overrideAttrs:
final: prev: {
hello = prev.hello.overrideAttrs (old: {
doCheck = false;
});
}
prev.hello is used here deliberately — you want the original package as the
base, not a potentially already-modified version. Doing hello = final.hello.overrideAttrs ( ... ); would result
in infinite recursion.
Overriding dependencies
override changes the inputs (generally other packages) that callPackage passed to a package:
final: prev: {
# Build ffmpeg against our custom version of x264
ffmpeg = prev.ffmpeg.override {
x264 = final.x264-custom;
};
}
Composing multiple overlays
Overlays are just values in a list, so composing them is straightforward:
pkgs = import nixpkgs {
system = "x86_64-linux";
overlays = [
(final: prev: { myapp = final.callPackage ./myapp.nix { }; })
(final: prev: { mytool = final.callPackage ./mytool.nix { }; })
];
};
They are applied in list order, left to right, so a later overlay can build on
packages introduced by an earlier one. lib.composeManyExtensions can be used to
fold many overlays into what appears to be a single overlay.
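A sketch of folding the two overlays above into one, assuming nixpkgs here is a flake input (so its lib attribute is available before the package set is imported):
let
  combined = nixpkgs.lib.composeManyExtensions [
    (final: prev: { myapp = final.callPackage ./myapp.nix { }; })
    (final: prev: { mytool = final.callPackage ./mytool.nix { }; })
  ];
in
import nixpkgs {
  system = "x86_64-linux";
  # A single overlay that behaves like the two applied in order
  overlays = [ combined ];
}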
Overlays in a flake
The conventional way to expose an overlay from a flake is through the
overlays output:
outputs = { self, nixpkgs }: {
overlays.default = final: prev: {
myapp = final.callPackage ./myapp.nix { };
};
};
Consumers apply it when importing nixpkgs:
pkgs = import nixpkgs {
system = "x86_64-linux";
overlays = [ anotherFlake.overlays.default ];
};
Within the same flake, self.overlays.default refers to the overlay:
outputs = { self, nixpkgs }:
let
pkgs = import nixpkgs {
system = "x86_64-linux";
overlays = [ self.overlays.default ];
};
in {
overlays.default = final: prev: {
myapp = final.callPackage ./myapp.nix { };
};
packages.x86_64-linux.myapp = pkgs.myapp;
};
A note on naming: self/super
Older nixpkgs code and documentation uses self: super: for the overlay
arguments instead of final: prev:. They mean exactly the same thing — self
is final and super is prev. The final/prev naming was adopted later
as it is more descriptive, and is now the convention in new code.
Overrides
Overlays let you modify packages across the entire package set. Overrides are the lower-level mechanism that overlays use: functions attached to individual derivations that produce a modified copy. You can use overrides directly without an overlay when you only need to change a package in one place, or combine them with overlays when the change should propagate globally.
override
Every package produced by callPackage carries an override function.
It re-calls the package’s function with some of the original arguments replaced.
This is how you substitute a dependency:
# A copy of ffmpeg built against a custom openssl
pkgs.ffmpeg.override {
openssl = myCustomOpenssl;
}
override operates on the inputs to the package function — the arguments
that callPackage resolved from the package set. It does not touch the
derivation attributes (src, buildPhase, etc.) directly.
overrideAttrs
overrideAttrs modifies the attribute set passed to stdenv.mkDerivation
(or the equivalent builder). It receives a function from the old attributes to
the new ones:
pkgs.hello.overrideAttrs (old: {
# Disable the test suite
doCheck = false;
})
The old argument contains all attributes as they were before the override,
so you can extend rather than replace:
pkgs.curl.overrideAttrs (old: {
# Append to the existing configure flags
configureFlags = old.configureFlags ++ [ "--without-brotli" ];
})
overrideAttrs is the right tool when you need to change sources, build flags,
patches, phases, or any other derivation-level attribute.
Combining both
From a package expression perspective, overrides affect the following areas:
{
# These are affected by `.override`
stdenv,
openssl,
...
}:
stdenv.mkDerivation {
# These are affected by `.overrideAttrs`
buildInputs = [ openssl ];
}
A common pattern is to use override for dependency substitution and
overrideAttrs for build-level changes together:
pkgs.somePackage
.override { openssl = pkgs.openssl_3; }
.overrideAttrs (old: { doCheck = false; });
Inside an overlay this looks like:
final: prev: {
somePackage = prev.somePackage
.override { openssl = final.openssl_3; }
.overrideAttrs (old: { doCheck = false; });
}
Language-specific overrides
Some build helpers add their own override functions that understand the
language-specific structure of the package. overrideAttrs only sees the attributes
that ultimately reach stdenv.mkDerivation, so for packages built with a specialized
builder layered on top of mkDerivation, some of the original arguments are transformed
or consumed before that point and are not reachable through plain overrideAttrs.
In these cases there is often a specialized variant of overrideAttrs that operates
on the attributes as they were passed to the original package expression.
overridePythonAttrs
Python packages built with buildPythonPackage carry overridePythonAttrs.
It works like overrideAttrs but preserves the Python package metadata
(dependencies, Python version, wheel format) that the Python infrastructure
depends on:
pkgs.python3Packages.requests.overridePythonAttrs (old: {
doCheck = false;
})
Using plain overrideAttrs on a Python package will work for simple changes
but can break propagated dependencies and the withPackages mechanism.
overrideModAttrs
Go packages built with buildGoModule carry overrideModAttrs for modifying
the vendoring derivation specifically — the intermediate step that fetches and
caches Go module dependencies. This is useful when the vendor hash needs to
change independently of the main package attributes:
pkgs.someGoTool.overrideModAttrs (old: {
postPatch = old.postPatch or "" + ''
substituteInPlace go.mod --replace "..." "..."
'';
})
As a general rule, prefer the language-specific override function when one
exists. If you are unsure whether one exists, check whether the package
attribute set contains an override* function beyond override and
overrideAttrs.
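One way to check, sketched in nix repl with nixpkgs loaded, is to list the attribute names starting with override; a language-specific variant such as overridePythonAttrs will appear alongside the generic ones:
nix-repl> builtins.filter (lib.hasPrefix "override") (builtins.attrNames pkgs.python3Packages.requests)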
finalAttrs
Derivations declared with mkDerivation can use finalAttrs to refer to
their own final attribute set, avoiding duplication and stale values when attributes depend on
each other:
stdenv.mkDerivation (finalAttrs: {
pname = "mytool";
version = "1.0.0";
src = fetchurl {
url = "https://example.com/${finalAttrs.pname}-${finalAttrs.version}.tar.gz";
sha256 = "...";
};
passthru.tests.version = runCommand "test-version" { } ''
${finalAttrs.finalPackage}/bin/mytool --version | grep ${finalAttrs.version}
'';
})
finalAttrs.finalPackage refers to the fully overridden derivation — not the
pre-override version. This matters for overrideAttrs: when a caller overrides
version, the src URL and the test both see the new value automatically,
because they reference finalAttrs rather than a closed-over local variable.
Without finalAttrs, overriding version alone would leave src pointing at
the old URL, a common source of subtle build failures.
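To make that concrete, assuming the example derivation above is bound to an attribute named mytool, overriding only the version is enough for the fetch URL and the version test to follow (the fixed-output hash baked into src would still need updating separately, for example by overriding src as well):
mytool.overrideAttrs (old: {
  version = "1.1.0"; # src now points at .../mytool-1.1.0.tar.gz via finalAttrs
})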
Common mistakes
Shadowing with final in an overlay
Inside an overlay, using final to reference the package you are replacing
causes infinite recursion:
# Wrong — infinite recursion
final: prev: {
hello = final.hello.overrideAttrs (old: { doCheck = false; });
}
# Correct — use prev to get the original
final: prev: {
hello = prev.hello.overrideAttrs (old: { doCheck = false; });
}
Forgetting to thread old attributes
When extending a list or string attribute, always include the original value:
# Wrong — replaces all patches, discards the originals
overrideAttrs (old: { patches = [ ./my.patch ]; })
# Correct — appends to the existing list
overrideAttrs (old: { patches = old.patches ++ [ ./my.patch ]; })
Using overrideAttrs on language packages
Applying overrideAttrs to a Python, Go, or other language package when a
language-specific override exists will work for simple cases but risks dropping
metadata that the language infrastructure relies on. When in doubt, use the
language-specific function.
Alternative Package Sets
Importing nixpkgs with a different configuration produces an entirely different
package set. Nixpkgs takes advantage of this to expose several pre-configured
package sets as attributes directly on pkgs. These cover the most common
reasons you might want a non-default build: static linking, a different libc,
or a different target architecture.
Because these sets are just nixpkgs imported with different parameters, they compose naturally with the rest of the package set. You can mix packages from multiple sets in the same derivation, using the alternative set only where it is needed.
pkgsStatic
pkgs.pkgsStatic contains packages built with static linking against musl
libc. Statically linked binaries have no runtime dependencies on shared
libraries, making them easy to distribute and deploy to systems that may not
have the same libraries installed.
# A statically linked hello binary
pkgs.pkgsStatic.hello
# Use it as a build input
stdenv.mkDerivation {
name = "my-tool";
buildInputs = [ pkgs.pkgsStatic.zlib ];
}
Static builds work well for self-contained command-line tools but are unsupported or broken for many packages — particularly those with mandatory shared library dependencies, plugins, or C extensions. Expect to encounter build failures and treat them as exceptions to fix rather than the norm.
pkgsMusl
pkgs.pkgsMusl is similar to pkgsStatic but produces dynamically linked
binaries against musl libc instead of glibc. This is useful when you want the
size and compatibility characteristics of musl without full static linking:
pkgs.pkgsMusl.curl
pkgsCross
pkgs.pkgsCross contains package sets for cross-compilation targets. Each
attribute is a full nixpkgs instance configured to produce binaries for that
target while running on the host machine.
# Produce an aarch64 binary on an x86_64 machine
pkgs.pkgsCross.aarch64-multiplatform.hello
# Produce a RISC-V binary
pkgs.pkgsCross.riscv64.hello
# Produce a Windows binary (using mingw)
pkgs.pkgsCross.mingwW64.hello
Some commonly used targets:
| Attribute | Target |
|---|---|
| aarch64-multiplatform | 64-bit ARM Linux |
| aarch64-multiplatform-musl | 64-bit ARM Linux with musl |
| riscv64 | 64-bit RISC-V Linux |
| mingwW64 | 64-bit Windows (MinGW) |
| raspberryPi | ARMv6 Linux (Raspberry Pi) |
| s390x | IBM Z (s390x) Linux |
Cross-compilation support varies significantly by package. Well-maintained packages in nixpkgs generally cross-compile cleanly. Packages with complex build systems, bundled dependencies, or build-time execution of compiled code are more likely to fail. As with static builds, treat breakages as fixable exceptions rather than reasons to avoid cross-compilation entirely.
Combining alternative sets
Alternative sets can be used as build inputs in ordinary derivations, which is the most common pattern. You do not need to switch your entire build to a cross or static set — just reach into the alternative set for the specific package you need:
stdenv.mkDerivation {
name = "firmware-bundle";
# Compile the firmware for ARM, the rest of the build runs natively
buildInputs = [
pkgs.pkgsCross.aarch64-multiplatform.openssl
];
}
A note on polish
The native x86_64-linux and aarch64-linux package sets are the most
thoroughly tested and receive the most attention from the nixpkgs community.
Alternative sets — particularly cross-compilation targets and static builds —
are less consistently maintained. Some packages have never been tested in these
configurations and will fail. If you hit a build failure in an alternative set,
check the nixpkgs issue tracker and, if no fix exists, consider contributing
one — these configurations improve primarily through users reporting and fixing
breakages.
NixOS Modules
So far we have focused on Nix as a build tool: writing derivations, packaging software, and composing packages together. NixOS takes this further by using Nix to describe entire system configurations — the services that run, the users that exist, the kernel parameters, network interfaces, and everything else that defines an operating system.
The mechanism that makes this possible is the module system. Rather than one monolithic configuration file, a NixOS system is assembled from many small, composable modules. Each module declares the options it introduces and the configuration it contributes. The module system evaluates all of them together, merges their contributions, and produces a consistent system configuration.
This design has several practical benefits:
- Separation of concerns: each module is responsible for one aspect of the system. A module for a web server does not need to know anything about the firewall module, yet the two can interact through well-defined options.
- Reuse: modules can be shared across machines. A module that configures your preferred editor or your SSH hardening policy can be imported wherever it is needed.
- Discoverability: options are typed and documented. nixos-option and the online NixOS manual are generated directly from option declarations.
- Safety: the type system catches many configuration mistakes at evaluation time, before any changes are applied to the system.
The module system is not exclusive to NixOS. The same machinery is used by Home Manager (for user-level configuration), NixOS container definitions, and a growing number of third-party tools. Understanding it once gives you a foundation that transfers across the whole ecosystem.
This chapter works through the module system from first principles. We start with how the system is structured, move through options and their types, cover how modules are composed and how conflicts are resolved, and finish by writing and testing a complete module from scratch.
The Module System
A NixOS configuration is not a single file — it is a collection of modules evaluated together by the module system. Understanding what a module is and how the system processes them is the foundation for everything else in this chapter.
What is a module?
At its core, a module is a Nix file that returns an attribute set with up to three keys:
{
imports = [ ... ]; # other modules to include
options = { ... }; # option declarations
config = { ... }; # option assignments
}
All three keys are optional. The simplest valid module is an empty attribute set:
{ }
More commonly, a module is a function that receives the evaluated system configuration and returns that attribute set:
{ config, lib, pkgs, ... }:
{
options = { ... };
config = { ... };
}
The function arguments are provided by the module system:
| Argument | Description |
|---|---|
| config | The fully evaluated configuration (all modules merged) |
| lib | The nixpkgs standard library |
| pkgs | The nixpkgs package set |
| options | The full set of declared options and their metadata |
The ... at the end of the argument list is important — it allows modules to
be called even when the module system passes arguments the module does not
declare.
How the module system evaluates modules
When NixOS builds a system configuration, the module system performs these steps:
- Collect all modules referenced directly or through imports
- Merge their options declarations into a single option schema
- Merge their config assignments according to each option's merge semantics
- Evaluate config lazily, resolving references between options
The result is a single config attribute set that represents the complete,
consistent system configuration.
Because Nix is lazy, options that are never referenced are never evaluated. This means a module can declare options that only take effect when another module assigns them.
The fixed-point
The module system builds config as a fixed-point: each module can reference
config in its own config block, creating apparent circular references.
{ config, lib, ... }:
{
config = {
# This references config.networking.hostName, which may be set
# by another module or by the user's configuration.
environment.etc."hostname".text = config.networking.hostName;
};
}
Nix’s lazy evaluation resolves these references: as long as there is no true cycle (A depends on B which depends on A with no base case), evaluation terminates correctly.
A minimal working example
The following is a complete, self-contained module that a NixOS configuration could import:
# motd.nix
{ config, lib, pkgs, ... }:
{
options.my.motd = lib.mkOption {
type = lib.types.str;
default = "Welcome.";
description = "Message of the day shown at login.";
};
config.environment.etc."motd".text = config.my.motd;
}
A user’s configuration.nix can then import this module and set the option:
{ config, ... }:
{
imports = [ ./motd.nix ];
my.motd = "Hello, ${config.networking.hostName}!";
}
The module system merges both files, resolves all references, and produces the
final /etc/motd file as part of the system closure.
Module locations in nixpkgs
NixOS ships with hundreds of modules covering nearly every aspect of a Linux
system. They live under nixos/modules/ in the nixpkgs repository and are
automatically imported through nixos/modules/module-list.nix. You never need
to import them manually — their options are always available.
Custom modules (your own, or from third-party flakes) must be explicitly listed
in imports.
Options
Options are the public interface of a module. They declare what can be
configured, what type of value is expected, what the default is, and how the
option is described in the documentation. A module that only sets config
without declaring options is valid, but modules that expose options allow
other modules and users to interact with them in a controlled way.
Declaring an option
Options are declared under the options key using lib.mkOption:
{ lib, ... }:
{
options.services.myapp.enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to enable myapp.";
};
}
The path under options becomes the path users set in their configuration:
services.myapp.enable = true;
mkOption arguments
| Argument | Required | Description |
|---|---|---|
| type | yes | The option type (see the Option Types chapter) |
| default | no | Value used when the option is not set |
| defaultText | no | Human-readable description of the default, for documentation |
| example | no | Example value shown in the manual |
| description | no | Markdown description rendered in the manual |
| internal | no | If true, hide from generated documentation |
| visible | no | If false, hide from generated documentation |
| readOnly | no | If true, disallow assignments from outside the declaring module |
| apply | no | Function applied to the final value before it is exposed in config |
mkEnableOption
A boolean enable option is so common that nixpkgs provides a shorthand:
options.services.myapp.enable = lib.mkEnableOption "myapp";
This is equivalent to:
options.services.myapp.enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to enable myapp.";
};
mkPackageOption
Similarly, mkPackageOption declares a package option with a sensible default
drawn from pkgs:
options.services.myapp.package = lib.mkPackageOption pkgs "myapp" { };
This produces an option of type lib.types.package defaulting to
pkgs.myapp. An optional default override can be provided:
options.services.myapp.package = lib.mkPackageOption pkgs "myapp" {
default = [ "myapp" "full" ]; # pkgs.myapp.full
};
Grouping related options
Related options are conventionally nested under a common prefix. The full path
to an option is just the attribute path from the top of options:
options.services.myapp = {
enable = lib.mkEnableOption "myapp";
package = lib.mkPackageOption pkgs "myapp" { };
port = lib.mkOption {
type = lib.types.port;
default = 8080;
description = "Port myapp listens on.";
};
dataDir = lib.mkOption {
type = lib.types.path;
default = "/var/lib/myapp";
description = "Directory for myapp state.";
};
};
Users then configure the service as a coherent group:
services.myapp = {
enable = true;
port = 9000;
dataDir = "/srv/myapp";
};
The apply argument
apply transforms the final merged value before it is exposed in config.
This is useful for normalisation or for converting a user-friendly type into
an internal representation:
options.services.myapp.logLevel = lib.mkOption {
type = lib.types.enum [ "debug" "info" "warn" "error" ];
default = "info";
apply = lib.toUpper; # config.services.myapp.logLevel will be "INFO"
};
The transformation is invisible to callers — they set the option as usual and
receive the transformed value when reading config.
readOnly options
Marking an option readOnly prevents assignments from anywhere other than the
module that declared it. This is useful for computed values that should be
observable but not overridden:
options.services.myapp.configFile = lib.mkOption {
type = lib.types.path;
readOnly = true;
description = "Path to the generated configuration file (read-only).";
};
config.services.myapp.configFile = pkgs.writeText "myapp.conf" "...";
Any attempt by another module to assign services.myapp.configFile will
produce an evaluation error.
Option Types
Every option has a type that controls what values are accepted, how multiple
assignments are merged, and what appears in the generated documentation. Types
are values found under lib.types.
Primitive types
bool
type = lib.types.bool;
Accepts true or false. Multiple assignments must agree; conflicting bool
values are an error. Use mkEnableOption for the common enable pattern.
int and variants
type = lib.types.int; # any integer
type = lib.types.ints.positive; # integer > 0
type = lib.types.ints.unsigned; # integer >= 0
type = lib.types.ints.between 1 65535; # inclusive range
type = lib.types.port; # alias for ints.between 0 65535
float
type = lib.types.float;
str
type = lib.types.str;
Accepts any string. Multiple assignments to a str option are an error; use
lines or commas if you need accumulation.
lines
type = lib.types.lines;
Like str, but multiple assignments are joined with newlines. Useful for
configuration file sections contributed by multiple modules.
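A sketch using a hypothetical option declared with type = lib.types.lines and assigned from two modules; the merged value is both strings joined by a newline:
# module-a.nix
{ ... }: {
  services.myapp.extraConfig = "log_level = info";
}
# module-b.nix
{ ... }: {
  services.myapp.extraConfig = "max_workers = 4";
}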
commas
type = lib.types.commas;
Like lines but joined with commas.
path
type = lib.types.path;
Accepts a filesystem path (a string starting with / or a Nix path value).
enum
type = lib.types.enum [ "debug" "info" "warn" "error" ];
Accepts exactly one of the listed values. The list of valid values appears in the generated documentation.
anything
type = lib.types.anything;
Accepts any value. Merging follows the merge rules of the actual runtime type where possible, falling back to an error on conflict. Useful for pass-through options whose structure is not known at declaration time.
raw
type = lib.types.raw;
Like anything but explicitly opts out of merging — only one assignment is
permitted. Use this when a value must not be merged under any circumstances.
Compound types
nullOr
type = lib.types.nullOr lib.types.str;
Accepts either null or a value of the wrapped type. Useful for optional
values with no meaningful default:
options.services.myapp.apiKey = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "API key, or null to disable authentication.";
};
listOf
type = lib.types.listOf lib.types.str;
Accepts a list of values of the given type. Multiple assignments are concatenated, so several modules can each contribute items to the same list.
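environment.systemPackages is a familiar example of this accumulation: two modules each contribute entries, and the evaluated configuration contains all of them (the package choices are arbitrary):
# desktop.nix
{ pkgs, ... }: { environment.systemPackages = [ pkgs.firefox ]; }
# cli.nix
{ pkgs, ... }: { environment.systemPackages = [ pkgs.git pkgs.ripgrep ]; }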
attrsOf
type = lib.types.attrsOf lib.types.int;
Accepts an attribute set where every value has the given type. Assignments from multiple modules are merged by attribute name; conflicting attributes for the same key are an error unless the value type allows merging.
lazyAttrsOf
type = lib.types.lazyAttrsOf lib.types.str;
Like attrsOf but evaluates values lazily. Prefer this for large attribute
sets where most entries may never be accessed.
package
type = lib.types.package;
Accepts a derivation. Conflicting assignments are an error. Use
mkPackageOption to declare package options with a sensible default.
Submodules
The submodule type allows an option to contain its own nested set of options.
This is how NixOS models structured configuration like
services.nginx.virtualHosts:
options.services.myapp.backends = lib.mkOption {
type = lib.types.attrsOf (lib.types.submodule {
options = {
host = lib.mkOption {
type = lib.types.str;
description = "Backend hostname.";
};
port = lib.mkOption {
type = lib.types.port;
default = 80;
description = "Backend port.";
};
};
});
default = { };
description = "Named backend servers.";
};
A user configures this as:
services.myapp.backends = {
primary = { host = "10.0.0.1"; port = 8080; };
secondary = { host = "10.0.0.2"; };
};
Each attribute is independently validated against the submodule’s options.
Submodule as a function
When option values within a submodule need to reference each other (or module arguments such as pkgs), a function can be passed instead of an attribute set:
type = lib.types.submodule ({ config, pkgs, ... }: {
options = {
package = lib.mkPackageOption pkgs "nginx" { };
host = lib.mkOption {
type = lib.types.str;
description = "Backend hostname.";
};
configFile = lib.mkOption {
type = lib.types.path;
readOnly = true;
default = pkgs.writeText "nginx.conf" "... ${config.host} ...";
};
};
});
freeformType
Declaring an explicit option for every possible configuration key is sometimes
impractical — particularly when wrapping an upstream tool that has dozens of
settings, most of which users will never touch. freeformType solves this by
letting a submodule accept arbitrary undeclared attributes, merging them
according to a specified type, while still providing typed, documented options
for the settings that matter most.
It is set inside a submodule using a pkgs.formats value as the type.
pkgs.formats provides ready-made types for common configuration file formats,
and each format’s .type attribute is suitable for use as a freeformType:
{ config, lib, pkgs, ... }:
let
settingsFormat = pkgs.formats.json { };
in
{
options.services.myapp.settings = lib.mkOption {
type = lib.types.submodule {
freeformType = settingsFormat.type;
# Explicitly declared options are still fully typed and documented
options.port = lib.mkOption {
type = lib.types.port;
default = 8080;
description = "Port the server listens on.";
};
};
default = { };
description = "Settings passed directly to myapp's JSON configuration file.";
};
config = lib.mkIf config.services.myapp.enable {
# Serialise the entire settings attrset to JSON — declared and freeform alike
environment.etc."myapp/config.json".source =
settingsFormat.generate "myapp-config.json" config.services.myapp.settings;
};
}
A user can then set both declared and arbitrary keys:
services.myapp.settings = {
port = 9000; # declared option — type-checked
max_workers = 4; # undeclared — accepted via freeformType
log_format = "json"; # undeclared — accepted via freeformType
};
Declared options take precedence and provide type safety and documentation.
Undeclared attributes are merged using freeformType and passed through
transparently.
Using pkgs.formats is the idiomatic nixpkgs approach because the same format
value that defines the type also provides the generate function that
serialises settings to the correct file format. The module author does not
need to write a custom serialiser — the format handles both validation and
output. Other available formats include pkgs.formats.toml, pkgs.formats.yaml,
pkgs.formats.ini, and pkgs.formats.keyValue.
Limitations
Freeform attributes cannot reference other options or produce computed values — they are accepted as-is and merged by the freeform type. If a setting requires validation or interaction with the rest of the module, declare it as an explicit option instead.
Type composition patterns
Types compose freely:
# Optional list of strings
type = lib.types.nullOr (lib.types.listOf lib.types.str);
# Attribute set of optional ports
type = lib.types.attrsOf (lib.types.nullOr lib.types.port);
# List of structured records
type = lib.types.listOf (lib.types.submodule {
options = {
name = lib.mkOption { type = lib.types.str; };
enabled = lib.mkOption { type = lib.types.bool; default = true; };
};
});
Choosing a type
| Situation | Type |
|---|---|
| Feature flag | bool via mkEnableOption |
| Package selection | package via mkPackageOption |
| Single string, no merging | str |
| Multi-contributor string | lines |
| Filesystem path | path |
| Fixed set of values | enum [ ... ] |
| Optional value | nullOr T |
| List accumulated from modules | listOf T |
| Named records | attrsOf (submodule ...) |
| Structured nested config | submodule { ... } |
| Settings for config file | submodule { freeformType = ...; ... } |
| Arbitrary pass-through | anything |
Config
The config key in a module is where options are assigned values. While
options defines the schema, config is where the system actually gets
configured. Most of what you write in a NixOS module — and almost everything
in a user’s configuration.nix — lives under config.
Basic assignment
Assignments follow the option’s path directly:
{ pkgs, ... }:
{
config = {
networking.hostName = "myhost";
time.timeZone = "Europe/London";
environment.systemPackages = [ pkgs.git pkgs.vim ];
};
}
When a module contains only config and no options or imports, the
config wrapper can be omitted — the module system treats a bare attribute set
as if it were config:
{ pkgs, ... }:
{
networking.hostName = "myhost";
environment.systemPackages = [ pkgs.git pkgs.vim ];
}
This shorthand is common in user configurations but can cause confusion when a
module also declares options, so modules in shared libraries conventionally
always use the explicit config = { ... } form.
Conditional config with mkIf
It is common for a module’s configuration to apply only when an enable option
is set. lib.mkIf expresses this:
{ config, lib, pkgs, ... }:
{
options.services.myapp.enable = lib.mkEnableOption "myapp";
config = lib.mkIf config.services.myapp.enable {
systemd.services.myapp = {
description = "My Application";
wantedBy = [ "multi-user.target" ];
serviceConfig.ExecStart = "${pkgs.myapp}/bin/myapp";
};
};
}
mkIf is lazy: the body is not evaluated at all when the condition is false.
This avoids evaluation errors in the disabled branch and keeps the module
system efficient.
mkIf vs if-then-else
Prefer mkIf over a bare if in config:
# Correct — lazy, integrates with module system priority
config = lib.mkIf condition { ... };
# Also works, but eager — the body is always evaluated
config = if condition then { ... } else { };
The difference matters when the disabled branch references options that may not
be defined, or when you need mkMerge to combine conditional blocks.
mkMerge
lib.mkMerge combines multiple config attribute sets that would otherwise
conflict:
config = lib.mkMerge [
{
environment.systemPackages = [ pkgs.curl ];
}
(lib.mkIf config.services.myapp.enable {
environment.systemPackages = [ pkgs.myapp ];
networking.firewall.allowedTCPPorts = [ 8080 ];
})
];
Each element of the list is merged independently. This is the idiomatic way to express a module with several independent conditional blocks.
Priority: mkDefault, mkForce, and mkOverride
The module system allows multiple modules to assign the same option. For most mergeable types (lists, attribute sets) this just accumulates values. For scalar types (bool, str, int) there can only be one value — conflicts are resolved by priority.
Every assignment has a numeric priority. Lower numbers win. The module system defines named priorities:
| Function | Priority | Purpose |
|---|---|---|
| lib.mkDefault | 1000 | Provide a fallback that users can easily override |
| (no wrapper) | 100 | Normal assignment |
| lib.mkForce | 50 | Override user configuration |
| lib.mkOverride n | n | Explicit numeric priority |
mkDefault
Use mkDefault for values that should be overridable:
config = lib.mkIf config.services.myapp.enable {
networking.firewall.allowedTCPPorts = lib.mkDefault [ 8080 ];
};
A user can override this without needing mkForce:
networking.firewall.allowedTCPPorts = [ 9000 ]; # overrides the mkDefault above
mkForce
Use mkForce when a value must not be changed by users or other modules:
# Security module: always disable root login
config.services.openssh.settings.PermitRootLogin = lib.mkForce "no";
Overriding a mkForce value requires an assignment at a numerically lower
priority, which makes the conflict explicit and intentional.
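For example (the value shown is arbitrary), lib.mkOverride with a priority below 50 wins over the forced setting above while keeping the escalation visible in the code:
services.openssh.settings.PermitRootLogin = lib.mkOverride 40 "prohibit-password";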
Conflict errors
If two modules assign the same scalar option at the same priority, the module system raises an error:
error: The option 'networking.hostName' has conflicting definition values:
- In '/etc/nixos/configuration.nix': "host-a"
- In '/etc/nixos/extra.nix': "host-b"
Resolve this by deciding which value should win and wrapping it in mkForce,
or by moving the common configuration to a shared location.
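For example, keeping both files but making the intended outcome explicit:
# extra.nix: defer to configuration.nix
networking.hostName = lib.mkDefault "host-b";
# or, in configuration.nix: declare that this value must win
# networking.hostName = lib.mkForce "host-a";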
Assertions and warnings
Modules can validate configuration and surface problems early.
assertions
assertions is a list of { assertion = bool; message = str; } records. Any
false assertion aborts evaluation with the given message:
config = lib.mkIf config.services.myapp.enable {
assertions = [
{
assertion = config.services.myapp.port != 80 || config.services.nginx.enable;
message = "myapp on port 80 requires nginx to be enabled.";
}
];
};
warnings
warnings is a list of strings. Each string is printed as a warning during
nixos-rebuild but does not abort:
config.warnings = lib.optional
(config.services.myapp.dataDir == "/tmp")
"services.myapp.dataDir is set to /tmp; data will not persist across reboots.";
Imports
The imports list tells the module system which other modules to include. All
imported modules are merged together as if they had been written in a single
file. Imports are the primary mechanism for composing a NixOS configuration
from reusable pieces.
Basic imports
{ ... }:
{
imports = [
./hardware-configuration.nix
./networking.nix
./users.nix
];
}
Paths in imports can be:
- Relative paths: resolved relative to the file containing the imports list
- Absolute paths: used as-is
- Module values: a Nix expression that evaluates to a module (a function or attribute set); used when importing from flake inputs or nixpkgs
Importing from nixpkgs
NixOS modules that ship with nixpkgs are imported automatically and do not need to be listed. Third-party modules distributed outside nixpkgs, for example inside a flake input, are referenced by path:
imports = [
"${inputs.some-flake}/modules/mymodule.nix"
];
Importing inline modules
An element of imports can be a module value directly, not just a path:
imports = [
# Inline module — useful for small conditional inclusions
({ lib, ... }: {
networking.firewall.enable = lib.mkDefault true;
})
];
Conditional imports
The module system evaluates imports before options, so you cannot use
config values to decide what to import — the import list must be statically
known. For conditional behaviour, use mkIf inside the imported module rather
than importing conditionally.
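As a sketch of that pattern, the imported module carries its own guard, so it can always be listed in imports (profiles.desktop.enable is a hypothetical option assumed to be declared elsewhere):
# desktop.nix: safe to import unconditionally; only applies when enabled
{ config, lib, ... }:
{
  config = lib.mkIf config.profiles.desktop.enable {
    services.xserver.enable = true;
  };
}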
If you genuinely need to select between two modules based on some value, encode
the condition in a plain Nix if expression using only values that exist before
the module fixed point is computed. In a NixOS configuration that rules out
config and also the pkgs module argument, since pkgs is itself derived from
config and using it in imports causes infinite recursion. Values passed in via
specialArgs, or an ordinary function argument, do work:
# `isLinux` is assumed to be supplied via specialArgs rather than derived from config
{ isLinux, ... }:
{
imports = [
(if isLinux then ./linux.nix else ./darwin.nix)
];
}
Import order and duplicates
The module system deduplicates imports: if the same file or module value is imported more than once (from different places in the tree), it is only evaluated once. This means you can safely import a shared module from multiple places without worrying about double-application.
Import order does not affect the final result for most configuration: merging is
effectively commutative for scalar and attribute-set types. For list options the
concatenation order follows definition order, which is rarely significant; when
it is, use mkOrder, which is covered in the merging chapter.
Structuring a configuration with imports
A common pattern is a top-level configuration.nix that delegates to
per-concern files:
/etc/nixos/
├── configuration.nix # imports everything else
├── hardware-configuration.nix
├── networking.nix
├── users.nix
├── services/
│ ├── nginx.nix
│ └── postgresql.nix
└── profiles/
└── workstation.nix
# configuration.nix
{ ... }:
{
imports = [
./hardware-configuration.nix
./networking.nix
./users.nix
./services/nginx.nix
./services/postgresql.nix
./profiles/workstation.nix
];
}
Each file focuses on one concern and can be added, removed, or shared between machines independently.
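As an illustration, one of the per-concern files might look like this (the hostname and the choice of NetworkManager are placeholders):
# networking.nix
{ lib, ... }:
{
  networking.hostName = lib.mkDefault "workstation";
  networking.networkmanager.enable = true;
  networking.firewall.enable = true;
}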
Importing modules from flakes
When using NixOS flakes, modules from external inputs are imported by
passing them through nixosSystem:
# flake.nix
{
inputs.home-manager.url = "github:nix-community/home-manager";
outputs = { nixpkgs, home-manager, ... }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./configuration.nix
home-manager.nixosModules.home-manager
];
};
};
}
The modules argument to nixosSystem is equivalent to a top-level imports
list. All modules passed here are merged together with the NixOS module
collection.
Merging and Precedence
When multiple modules assign values to the same option, the module system must combine them into a single result. The rules for how this happens depend on the option’s type and on the priority of each assignment. Understanding merging and precedence is essential for writing modules that interact correctly with the rest of a configuration.
How merging works by type
Each option type defines its own merge behaviour:
| Type | Merge behaviour |
|---|---|
| bool | All assignments must be equal, or one must win by priority |
| str | All assignments must be equal, or one must win by priority |
| lines | Assignments are joined with newlines |
| listOf T | Assignments are concatenated |
| attrsOf T | Assignments are merged by key; conflicts per-key follow T’s rules |
| submodule | Assignments are merged recursively as nested modules |
| package | All assignments must be equal, or one must win by priority |
| anything | Best-effort merge; conflicts raise errors |
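For instance, the lines row above means two modules can both append to an option such as the services.myapp.extraConfig option used in the next chapter; the fragments are simply joined with newlines:
# module-a.nix
services.myapp.extraConfig = "max_connections = 100";
# module-b.nix
services.myapp.extraConfig = "timeout = 30";
# result: both lines end up in the final value, separated by a newline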
Lists accumulate
Because listOf concatenates all assignments, multiple modules can each
contribute to the same list without conflict:
# module-a.nix
environment.systemPackages = [ pkgs.git ];
# module-b.nix
environment.systemPackages = [ pkgs.vim ];
# result
environment.systemPackages = [ pkgs.git pkgs.vim ];
The order of items within the final list reflects the order in which modules were imported, though this is usually not significant.
Attribute sets merge by key
Each key in an attrsOf value is merged independently. Two modules can both
assign to users.users as long as they use different keys:
# module-a.nix
users.users.alice = { isNormalUser = true; };
# module-b.nix
users.users.bob = { isNormalUser = true; };
If two modules assign the same key, merging falls back to the value type’s rules. For submodule values this means recursive merging, which usually works. For scalar values it is a conflict unless resolved by priority.
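A short sketch of the recursive case, where two modules touch different fields of the same users.users.alice entry:
# module-a.nix
users.users.alice.isNormalUser = true;
# module-b.nix
users.users.alice.extraGroups = [ "wheel" ];
# result: alice receives both settings, since the submodule fields do not overlap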
Priority
Every assignment carries a numeric priority. When a scalar option receives multiple assignments, the one with the lowest priority number wins. If two assignments share the same priority, evaluation fails with a conflict error.
The named priority helpers are:
lib.mkDefault value # priority 1000 — easy to override
value # priority 100 — normal assignment
lib.mkForce value # priority 50 — hard to override
lib.mkOverride n v # priority n — explicit
Typical use of mkDefault
Module authors use mkDefault for sensible defaults that user configuration
should be able to override without ceremony:
# In a module
config = lib.mkIf config.services.myapp.enable {
networking.firewall.allowedTCPPorts = lib.mkDefault [ 8080 ];
};
# In the user's configuration.nix — overrides the mkDefault silently
networking.firewall.allowedTCPPorts = [ 9000 ];
Typical use of mkForce
mkForce is used when a value must not be changed by user configuration — for
example, a security policy module that must enforce a setting regardless of
what else is imported:
config.services.openssh.settings.PermitRootLogin = lib.mkForce "no";
The user can still override a mkForce value, but only by using an even lower
priority number via mkOverride; a second mkForce with a different value is
reported as a conflict, which keeps the disagreement visible:
# Explicit disagreement: the lower priority number (40 < 50) wins over mkForce
services.openssh.settings.PermitRootLogin = lib.mkOverride 40 "yes";
mkOrder
Within a listOf option, the order of items usually does not matter. When it
does, lib.mkOrder controls where a module’s contribution is placed in the
final list:
# Prepend to the list, regardless of import order
boot.kernelModules = lib.mkOrder 500 [ "vfio" "vfio_iommu_type1" ];
# Default order is 1000
boot.kernelModules = [ "kvm-intel" ];
Lower order numbers appear earlier in the final list. mkBefore and mkAfter
are convenience wrappers:
boot.kernelModules = lib.mkBefore [ "vfio" ]; # order 500
boot.kernelModules = lib.mkAfter [ "kvm-amd" ]; # order 1500
mkOverride for fine-grained control
When the named helpers are not specific enough, mkOverride accepts an
explicit priority number:
# Priority 500 sits between mkDefault (1000) and a normal assignment (100)
networking.hostName = lib.mkOverride 500 "fallback-host";
This is rarely needed outside of framework code.
Debugging merge conflicts
When the module system reports a conflict, the error message shows which files contributed conflicting values:
error: The option 'services.openssh.settings.PermitRootLogin' has
conflicting definition values:
- In '/etc/nixos/configuration.nix': "no"
- In '/etc/nixos/hardening.nix': "prohibit-password"
The fix is to decide which value should take precedence and wrap it with
mkForce, or to remove the duplicate assignment. The nixos-option command shows
an option's current value along with the files that declare and define it, which
helps locate the competing assignments:
$ nixos-option services.openssh.settings.PermitRootLogin
Writing a Module
This chapter brings together everything covered so far by working through the
construction of a complete NixOS module. We will write a module for a
hypothetical service called myapp — a simple HTTP server — covering option
declaration, conditional configuration, systemd integration, and user/group
management.
The anatomy of a service module
Most NixOS service modules follow a predictable structure:
- Declare options under services.<name>
- Gate all configuration behind mkIf config.services.<name>.enable
- Create a dedicated user and group
- Write a configuration file from option values
- Define a systemd service unit
Step 1: declare options
# myapp.nix
{ config, lib, pkgs, ... }:
let
cfg = config.services.myapp;
in
{
options.services.myapp = {
enable = lib.mkEnableOption "myapp HTTP server";
package = lib.mkPackageOption pkgs "myapp" { };
port = lib.mkOption {
type = lib.types.port;
default = 8080;
description = "Port the server listens on.";
};
dataDir = lib.mkOption {
type = lib.types.path;
default = "/var/lib/myapp";
description = "Directory for persistent data.";
};
logLevel = lib.mkOption {
type = lib.types.enum [ "debug" "info" "warn" "error" ];
default = "info";
description = "Log verbosity.";
};
extraConfig = lib.mkOption {
type = lib.types.lines;
default = "";
description = "Additional lines appended to the configuration file.";
};
};
Binding cfg = config.services.myapp at the top of the let block avoids
repeating the full path throughout the module and is standard practice in
nixpkgs.
Step 2: gate config behind mkIf
config = lib.mkIf cfg.enable {
# everything below only applies when myapp is enabled
};
}
All of the following steps go inside this mkIf block.
Step 3: user and group
Dedicated system users improve security by limiting the privileges of the service process:
users.users.myapp = {
isSystemUser = true;
group = "myapp";
home = cfg.dataDir;
description = "myapp service user";
};
users.groups.myapp = { };
Step 4: generate a configuration file
Generate the configuration file from option values. Here environment.etc places
the rendered file at /etc/myapp/myapp.conf, which the service unit references;
pkgs.writeText is an alternative that produces the file in the Nix store instead:
environment.etc."myapp/myapp.conf".text = ''
port = ${toString cfg.port}
data_dir = ${cfg.dataDir}
log_level = ${cfg.logLevel}
${cfg.extraConfig}
'';
For larger or more structured configuration files, pkgs.writeTextFile or a
format-specific generator (such as pkgs.formats.toml) can be more
appropriate.
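As a sketch, assuming myapp could read TOML (an assumption about this hypothetical service), pkgs.formats.toml supplies a generator whose output can stand in for the hand-written file above:
# In the module's let block
settingsFormat = pkgs.formats.toml { };

# In the config section: render the option values to a TOML file in the Nix store
environment.etc."myapp/myapp.toml".source = settingsFormat.generate "myapp.toml" {
  port = cfg.port;
  data_dir = cfg.dataDir;
  log_level = cfg.logLevel;
};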
Step 5: systemd service
systemd.services.myapp = {
description = "myapp HTTP server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/myapp --config /etc/myapp/myapp.conf";
User = "myapp";
Group = "myapp";
Restart = "on-failure";
RestartSec = "5s";
# Hardening
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
ReadWritePaths = [ cfg.dataDir ];
PrivateTmp = true;
};
};
Step 6: assertions and firewall
Add assertions to catch configuration mistakes early, and optionally open the firewall:
assertions = [
{
assertion = cfg.port >= 1024;
message = "myapp: the service runs as an unprivileged user and cannot bind ports below 1024.";
}
];
networking.firewall.allowedTCPPorts = lib.mkIf
config.networking.firewall.enable
(lib.mkDefault [ cfg.port ]);
The complete module
{ config, lib, pkgs, ... }:
let
cfg = config.services.myapp;
in
{
options.services.myapp = {
enable = lib.mkEnableOption "myapp HTTP server";
package = lib.mkPackageOption pkgs "myapp" { };
port = lib.mkOption {
type = lib.types.port;
default = 8080;
description = "Port the server listens on.";
};
dataDir = lib.mkOption {
type = lib.types.path;
default = "/var/lib/myapp";
description = "Directory for persistent data.";
};
logLevel = lib.mkOption {
type = lib.types.enum [ "debug" "info" "warn" "error" ];
default = "info";
description = "Log verbosity.";
};
extraConfig = lib.mkOption {
type = lib.types.lines;
default = "";
description = "Additional lines appended to the configuration file.";
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
assertion = cfg.port >= 1024;
message = "myapp: the service runs as an unprivileged user and cannot bind ports below 1024.";
}
];
users.users.myapp = {
isSystemUser = true;
group = "myapp";
home = cfg.dataDir;
description = "myapp service user";
};
users.groups.myapp = { };
environment.etc."myapp/myapp.conf".text = ''
port = ${toString cfg.port}
data_dir = ${cfg.dataDir}
log_level = ${cfg.logLevel}
${cfg.extraConfig}
'';
systemd.services.myapp = {
description = "myapp HTTP server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/myapp --config /etc/myapp/myapp.conf";
User = "myapp";
Group = "myapp";
Restart = "on-failure";
RestartSec = "5s";
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
ReadWritePaths = [ cfg.dataDir ];
PrivateTmp = true;
};
};
networking.firewall.allowedTCPPorts = lib.mkDefault [ cfg.port ];
};
}
Using the module
Import the module and enable the service:
# configuration.nix
{ ... }:
{
imports = [ ./myapp.nix ];
services.myapp = {
enable = true;
port = 9000;
logLevel = "debug";
extraConfig = ''
max_connections = 100
'';
};
}
Common patterns
Passing secrets
Avoid putting secrets in the Nix store. Instead, reference a file path that will be present at runtime:
options.services.myapp.secretKeyFile = lib.mkOption {
type = lib.types.path;
example = "/run/secrets/myapp-key";
description = "Path to a file containing the secret key.";
};
Then load it in the service unit:
serviceConfig.EnvironmentFile = cfg.secretKeyFile;
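In the user's configuration the option then points at a runtime path (the path below is only an example; how the secret gets there is up to a secret-management tool). Because EnvironmentFile expects KEY=value lines, the secret reaches the process as an environment variable:
# configuration.nix
services.myapp.secretKeyFile = "/run/secrets/myapp-key";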
Multiple instances
Use attrsOf (submodule ...) to allow multiple named instances of a service,
following the pattern used by options such as services.nginx.virtualHosts.
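A minimal sketch of that shape, assuming each instance only needs a port (the option name instances and the --port flag are illustrative, not part of the module built earlier):
options.services.myapp.instances = lib.mkOption {
  type = lib.types.attrsOf (lib.types.submodule {
    options.port = lib.mkOption { type = lib.types.port; };
  });
  default = { };
  description = "Named myapp instances.";
};

config = lib.mkIf (cfg.instances != { }) {
  # One systemd unit per instance, e.g. myapp-blog.service and myapp-wiki.service
  systemd.services = lib.mapAttrs' (name: instance:
    lib.nameValuePair "myapp-${name}" {
      wantedBy = [ "multi-user.target" ];
      serviceConfig.ExecStart =
        "${cfg.package}/bin/myapp --port ${toString instance.port}";
    }) cfg.instances;
};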
Exposing the generated config path
If other modules need to reference the generated configuration file:
options.services.myapp.configFile = lib.mkOption {
type = lib.types.path;
readOnly = true;
description = "Path to the generated configuration file.";
};
config.services.myapp.configFile = "/etc/myapp/myapp.conf";
Testing Modules
NixOS includes a purpose-built testing framework that runs complete virtual machines, applies a NixOS configuration, and runs assertions against the live system. These are called NixOS tests (or VM tests). They are the standard way to verify that a module’s configuration actually produces the intended system behaviour.
How NixOS tests work
A NixOS test is a Nix expression that describes one or more virtual machines and a Python test script. The framework:
- Builds a NixOS system closure for each machine
- Starts the machines in QEMU
- Runs the Python script, which can interact with each machine via a serial console and check the system state
- Passes or fails based on assertions in the script
Tests are run with nix build or nix-build and produce a log. Because the
machines are fully isolated from the host, tests are reproducible and can be
run in CI without special privileges.
A minimal test
# test.nix
{ pkgs ? import <nixpkgs> { } }:
pkgs.nixosTest {
name = "myapp-test";
nodes.machine = { pkgs, ... }: {
imports = [ ./myapp.nix ];
services.myapp.enable = true;
# curl is used by the test script but is not in the default system closure
environment.systemPackages = [ pkgs.curl ];
};
testScript = ''
machine.wait_for_unit("myapp.service")
machine.wait_for_open_port(8080)
response = machine.succeed("curl -s http://localhost:8080/health")
assert "ok" in response, f"unexpected response: {response}"
'';
}
Run with:
nix-build test.nix
nodes
The nodes attribute is an attribute set of machine definitions. Each value
is a NixOS module. Single-machine tests typically use nodes.machine; the
name is arbitrary.
nodes = {
server = { ... }: {
services.myapp.enable = true;
};
client = { pkgs, ... }: {
environment.systemPackages = [ pkgs.curl ];
};
};
Each machine is independently addressable from the test script by name.
The test script
The test script is a Python program. The framework provides one global variable per machine, named after the node. Common methods:
| Method | Description |
|---|---|
| machine.start() | Start the VM (done automatically) |
| machine.wait_for_unit(unit) | Block until the systemd unit is active |
| machine.wait_for_open_port(port) | Block until a TCP port accepts connections |
| machine.succeed(cmd) | Run a shell command, assert exit 0, return stdout |
| machine.fail(cmd) | Run a shell command, assert non-zero exit |
| machine.execute(cmd) | Run a shell command, return (exit_code, stdout) |
| machine.copy_from_host(src, dst) | Copy a file from the host into the VM |
| machine.wait_until_succeeds(cmd) | Retry cmd until it exits 0 |
| machine.wait_until_fails(cmd) | Retry cmd until it exits non-zero |
| machine.shutdown() | Shut down the VM |
Multi-machine scripts
In a multi-machine test, each node is a separate Python variable:
server.wait_for_unit("myapp.service")
client.wait_until_succeeds("curl -s http://server:8080/health | grep ok")
Machine hostnames default to the node name, so server and client resolve
correctly inside the virtual network.
Interactive testing
During development it is useful to drop into the test environment interactively:
nix build .#checks.x86_64-linux.myapp-test.driverInteractive
./result/bin/nixos-test-driver
Or with the legacy interface:
$(nix-build test.nix -A driverInteractive)/bin/nixos-test-driver
This starts the Python REPL with the machine variables available. You can call
machine.start() and then interact with the VM manually, which is helpful for
debugging failing assertions.
Testing option defaults and types
For modules with complex option interactions, it can be worth testing that defaults are correct and that invalid values are rejected. The latter requires checking that evaluation fails:
testScript = ''
# Verify the default port is in use on the default node
machine.wait_for_open_port(8080)
# Verify a custom port works on the node defined below
custom.wait_for_open_port(9000)
'';
nodes.custom = { ... }: {
imports = [ ./myapp.nix ];
services.myapp = {
enable = true;
port = 9000;
};
};
Evaluation failures (wrong types, failed assertions) stop the test from being
built at all, so they are checked outside the VM framework: evaluate a
deliberately invalid configuration with nix eval or nix-instantiate and assert
that the command fails.
Putting tests in a flake
When using flakes, expose tests as checks so they run with nix flake check:
# flake.nix
{
outputs = { self, nixpkgs }: {
checks.x86_64-linux.myapp = nixpkgs.legacyPackages.x86_64-linux.nixosTest {
name = "myapp";
nodes.machine = { ... }: {
imports = [ self.nixosModules.myapp ];
services.myapp.enable = true;
};
testScript = ''
machine.wait_for_unit("myapp.service")
machine.wait_for_open_port(8080)
'';
};
nixosModules.myapp = import ./myapp.nix;
};
}
Run all checks with:
nix flake check
Tips for writing reliable tests
- Use wait_for_unit rather than sleep: it is both faster and more robust
- Use wait_until_succeeds for external readiness checks (HTTP, database connections) rather than waiting for the unit, since a unit can be active before it is ready to serve requests
- Keep test VMs lean: only enable the services under test to keep build times short
- Test failure cases too: assert that stopping or disabling a service really stops it and closes its port (see the sketch below)
- Store test helpers in a shared Python snippet if multiple tests repeat the same setup steps
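As a sketch of such a failure-case check, reusing the myapp node from the minimal test (same assumed /health endpoint and port):
testScript = ''
  machine.wait_for_unit("myapp.service")
  machine.wait_for_open_port(8080)

  # Stop the service and verify it really releases the port
  machine.succeed("systemctl stop myapp.service")
  machine.wait_until_fails("curl -sf http://localhost:8080/health")
'';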