
Writing Nix Flakes

mvmctl uses Nix flakes to produce reproducible microVM images. Each build runs nix build inside the Linux environment (Lima VM on macOS, native on Linux), producing a kernel and rootfs. The same rootfs works on all backends (Firecracker, Apple Container).

```nix
{
  inputs = {
    mvm.url = "github:auser/mvm?dir=nix";
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
  };
  outputs = { mvm, nixpkgs, ... }:
    let
      system = "aarch64-linux";
      pkgs = import nixpkgs { inherit system; };
    in {
      packages.${system}.default = mvm.lib.${system}.mkGuest {
        name = "my-app";
        packages = [ pkgs.curl ];
        services.my-app = {
          command = "${pkgs.python3}/bin/python3 -m http.server 8080";
        };
        healthChecks.my-app = {
          healthCmd = "${pkgs.curl}/bin/curl -sf http://localhost:8080/";
          healthIntervalSecs = 5;
        };
      };
    };
}
```
| Parameter | Description |
| --- | --- |
| `name` | VM name (used in image filename) |
| `packages` | Nix packages to include in the rootfs |
| `hostname` | Guest hostname (default: same as `name`) |
| `serviceGroup` | Default service user/group name (default: `"mvm"`). Services run as this user; secrets are readable by this group. |
| `users.<name>.uid` | User ID (optional, auto-assigned from 1000) |
| `users.<name>.group` | Group name (optional, defaults to user name) |
| `users.<name>.home` | Home directory (optional, defaults to `/home/<name>`) |
| `services.<name>.command` | Long-running service command (supervised with respawn) |
| `services.<name>.preStart` | Optional setup script (runs as root before the service) |
| `services.<name>.env` | Optional environment variables (`{ KEY = "value"; }`) |
| `services.<name>.user` | User to run as (default: `serviceGroup`) |
| `services.<name>.logFile` | Optional log file path (default: `/dev/console`) |
| `healthChecks.<name>.healthCmd` | Health check command (exit 0 = healthy) |
| `healthChecks.<name>.healthIntervalSecs` | How often to run the check, in seconds (default: 30) |
| `healthChecks.<name>.healthTimeoutSecs` | Timeout for each check, in seconds (default: 10) |
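Most of these parameters are optional and fall back to the defaults listed above. As a minimal sketch (package and hostname choices here are illustrative, not from the original example):

```nix
mvm.lib.${system}.mkGuest {
  name = "tools";
  hostname = "tools-vm";              # default: same as name
  packages = [ pkgs.curl pkgs.jq ];
  services.idle = {
    # A trivial long-running command so the supervisor has something to run
    command = "${pkgs.coreutils}/bin/sleep infinity";
  };
}
```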

mkGuest handles everything automatically:

  • Firecracker kernel (vmlinux) — tuned for microVM workloads
  • Busybox init — sub-5s boot, no systemd overhead
  • Guest agent — vsock-based health checks, status reporting, snapshot coordination
  • Networking — eth0 configured via kernel boot args, NAT to host network
  • Drive mounting — /mnt/config (ro), /mnt/secrets (ro), /mnt/data (rw)
  • Service supervision — automatic restart on failure with backoff
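A service can rely on those drive mounts being present at boot. A sketch of a service that reads read-only config and writes state to the writable data drive (file and service names are illustrative):

```nix
services.worker = {
  # /mnt/data is the only writable mount; /mnt/config and /mnt/secrets are read-only
  preStart = "mkdir -p /mnt/data/worker";
  command = "${pkgs.python3}/bin/python3 /mnt/config/worker.py --state /mnt/data/worker";
};
```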

Services defined in services.<name> are supervised by the init system:

```nix
services.my-app = {
  # Setup (runs once as root before the service starts)
  preStart = "mkdir -p /tmp/data";
  # Long-running process (supervised, auto-restart on failure)
  command = "${pkgs.nodejs}/bin/node /app/server.js";
  # Environment variables
  env = {
    PORT = "8080";
    NODE_ENV = "production";
  };
  # Run as a specific user (default: serviceGroup, which defaults to "mvm")
  user = "app";
  # Log to a file instead of console
  logFile = "/var/log/my-app.log";
};
```

Health checks defined in healthChecks are automatically written to /etc/mvm/integrations.d/ at build time. The guest agent picks them up on boot:

```nix
healthChecks.my-app = {
  healthCmd = "${pkgs.curl}/bin/curl -sf http://localhost:8080/health";
  healthIntervalSecs = 10;
  healthTimeoutSecs = 5;
};
```

Query health status from the host:

```sh
mvmctl logs <name>      # view guest console (includes health check results)
mvmctl logs <name> -f   # follow in real time
```

All services run as a built-in non-root user (default: mvm, uid 900) — never as root. Secrets at /mnt/secrets are owned by root:<serviceGroup> with mode 0440, so only members of the service group can read them. Custom users are automatically added to this group.
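Because the default service user is a member of the service group, it can read those 0440 secrets directly. A sketch of exporting a secret into the service's environment (the secret name and app path are illustrative):

```nix
services.my-app = {
  # Runs as the "mvm" user, which is in the service group and can
  # read group-readable files under /mnt/secrets
  command = "${pkgs.bash}/bin/bash -c 'API_KEY=$(cat /mnt/secrets/api-key); export API_KEY; exec ${pkgs.python3}/bin/python3 /app/server.py'";
};
```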

To change the default service user/group name, set serviceGroup:

```nix
mvm.lib.${system}.mkGuest {
  name = "my-app";
  serviceGroup = "app";   # default: "mvm"
  # ...
};
```

To run a service as a custom user, define it in users and reference it in the service. The custom user is automatically added to the service group for secrets access:

```nix
users.app = {
  uid = 1000;
  group = "app";
  home = "/home/app";
};
services.my-app = {
  command = "${pkgs.nodejs}/bin/node /app/server.js";
  user = "app";   # overrides the default serviceGroup user
};
```

The preStart script always runs as root regardless of the user setting, so it can perform privileged setup like mounting filesystems or creating directories.
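A sketch of using that: preStart does the privileged setup, then the service itself runs as the unprivileged user (directory path is illustrative):

```nix
services.my-app = {
  # preStart always runs as root, so it can create and chown
  # directories that the unprivileged service user will write to
  preStart = ''
    mkdir -p /var/lib/my-app
    chown app:mvm /var/lib/my-app
  '';
  command = "${pkgs.nodejs}/bin/node /app/server.js";
  user = "app";   # the long-running process itself runs unprivileged
};
```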

By default, mkGuest produces an ext4 rootfs. The build system also supports squashfs for smaller, read-only images (~76% smaller with LZ4 compression). When using squashfs, the init system mounts tmpfs overlays on /etc and /var automatically.
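The parameter for selecting the image format isn't shown in this section; as a sketch only, assuming a hypothetical `rootfsFormat` option (check the mvm flake for the actual name):

```nix
mvm.lib.${system}.mkGuest {
  name = "my-app";
  # Hypothetical option name -- illustrates the squashfs mode described above
  rootfsFormat = "squashfs";   # read-only root; init overlays tmpfs on /etc and /var
  # ...
};
```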

The guest library provides high-level helpers that return a { package, service, healthCheck } set. Compose them with mkGuest:

Build a Python HTTP service using python3.withPackages (nixpkgs packages only):

```nix
let
  pythonApp = mvm.lib.${system}.mkPythonService {
    name = "my-api";
    src = ./.;
    pythonPackages = ps: [ ps.flask ];
    entrypoint = "app/main.py";
    port = 8080;
    env = { WORKERS = "2"; };
  };
in
mvm.lib.${system}.mkGuest {
  name = "my-api";
  packages = [ pythonApp.package ];
  services.app = pythonApp.service;
  healthChecks.app = pythonApp.healthCheck;
}
```

Serve static files with busybox httpd (zero extra packages):

```nix
let
  site = mvm.lib.${system}.mkStaticSite {
    name = "docs";
    src = ./public;
    port = 8080;
  };
in
mvm.lib.${system}.mkGuest {
  name = "docs";
  packages = [ site.package ];
  services.www = site.service;
  healthChecks.www = site.healthCheck;
}
```

Build a Node.js service with npm install + tsc:

```nix
let
  app = mvm.lib.${system}.mkNodeService {
    name = "my-app";
    src = builtins.fetchGit { url = "..."; rev = "..."; };
    npmHash = "sha256-...";
    entrypoint = "dist/index.js";
    port = 3000;
  };
in
mvm.lib.${system}.mkGuest {
  name = "my-app";
  packages = [ app.package ];
  services.app = app.service;
  healthChecks.app = app.healthCheck;
}
```

All three helpers return the same shape: { package, service, healthCheck }. This makes it easy to swap between runtimes or compose multiple services in a single guest.
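For example, because every helper returns the same `{ package, service, healthCheck }` shape, two runtimes can be composed into one guest. A sketch (names and ports are illustrative):

```nix
let
  api = mvm.lib.${system}.mkPythonService {
    name = "api";
    src = ./api;
    pythonPackages = ps: [ ps.flask ];
    entrypoint = "main.py";
    port = 8080;
  };
  site = mvm.lib.${system}.mkStaticSite {
    name = "site";
    src = ./public;
    port = 8081;
  };
in
mvm.lib.${system}.mkGuest {
  name = "web";
  packages = [ api.package site.package ];
  # Each helper contributes one supervised service and one health check
  services.api = api.service;
  services.www = site.service;
  healthChecks.api = api.healthCheck;
  healthChecks.www = site.healthCheck;
}
```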

When you run mvmctl build --flake .:

  1. The flake is copied into the Linux environment (Lima VM on macOS, native on Linux)
  2. nix build runs inside that environment
  3. The resulting closure is packed into the rootfs
  4. Kernel and rootfs artifacts are cached
  5. Subsequent builds with unchanged flake.lock reuse the cache

The same rootfs works on all backends (Firecracker, Apple Container, microvm.nix, Docker).

The --profile flag selects which Nix output to build:

```sh
mvmctl build --flake . --profile minimal
mvmctl build --flake . --profile gateway
```

These map to packages.${system}.<profile> in the flake.
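So a flake can expose several guest variants alongside the default output. A sketch (the profile contents are illustrative):

```nix
packages.${system} = {
  # `mvmctl build --flake .` builds `default`; `--profile minimal`
  # selects `packages.${system}.minimal`, and so on
  default = mvm.lib.${system}.mkGuest { name = "my-app"; /* ... */ };
  minimal = mvm.lib.${system}.mkGuest { name = "my-app-minimal"; /* ... */ };
  gateway = mvm.lib.${system}.mkGuest { name = "gateway"; /* ... */ };
};
```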