
Writing Nix Flakes

mvm uses Nix flakes to produce reproducible microVM images. Each build runs nix build inside the Lima VM, producing a kernel and rootfs.

{
  inputs = {
    mvm.url = "github:auser/mvm?dir=nix";
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
  };

  outputs = { mvm, nixpkgs, ... }:
    let
      system = "aarch64-linux";
      pkgs = import nixpkgs { inherit system; };
    in {
      packages.${system}.default = mvm.lib.${system}.mkGuest {
        name = "my-app";
        packages = [ pkgs.curl ];

        services.my-app = {
          command = "${pkgs.python3}/bin/python3 -m http.server 8080";
        };

        healthChecks.my-app = {
          healthCmd = "${pkgs.curl}/bin/curl -sf http://localhost:8080/";
          healthIntervalSecs = 5;
        };
      };
    };
}
| Parameter | Description |
| --- | --- |
| name | VM name (used in image filename) |
| packages | Nix packages to include in the rootfs |
| hostname | Guest hostname (default: same as name) |
| serviceGroup | Default service user/group name (default: "mvm"). Services run as this user; secrets are readable by this group. |
| users.<name>.uid | User ID (optional, auto-assigned from 1000) |
| users.<name>.group | Group name (optional, defaults to the user name) |
| users.<name>.home | Home directory (optional, defaults to /home/<name>) |
| services.<name>.command | Long-running service command (supervised with respawn) |
| services.<name>.preStart | Optional setup script (runs as root before the service) |
| services.<name>.env | Optional environment variables ({ KEY = "value"; }) |
| services.<name>.user | User to run as (default: serviceGroup) |
| services.<name>.logFile | Optional log file path (default: /dev/console) |
| healthChecks.<name>.healthCmd | Health check command (exit 0 = healthy) |
| healthChecks.<name>.healthIntervalSecs | How often to run the check, in seconds (default: 30) |
| healthChecks.<name>.healthTimeoutSecs | Timeout for each check, in seconds (default: 10) |

mkGuest handles everything automatically:

  • Firecracker kernel (vmlinux) — tuned for microVM workloads
  • Busybox init — sub-5s boot, no systemd overhead
  • Guest agent — vsock-based health checks, status reporting, snapshot coordination
  • Networking — eth0 configured via kernel boot args, NAT through Lima
  • Drive mounting — /mnt/config (ro), /mnt/secrets (ro), /mnt/data (rw)
  • Service supervision — automatic restart on failure with backoff

Services defined in services.<name> are supervised by the init system:

services.my-app = {
  # Setup (runs once as root before the service starts)
  preStart = "mkdir -p /tmp/data";

  # Long-running process (supervised, auto-restart on failure)
  command = "${pkgs.nodejs}/bin/node /app/server.js";

  # Environment variables
  env = {
    PORT = "8080";
    NODE_ENV = "production";
  };

  # Run as a specific user (default: serviceGroup, which defaults to "mvm")
  user = "app";

  # Log to a file instead of the console
  logFile = "/var/log/my-app.log";
};

Health checks defined in healthChecks are automatically written to /etc/mvm/integrations.d/ at build time. The guest agent picks them up on boot:

healthChecks.my-app = {
  healthCmd = "${pkgs.curl}/bin/curl -sf http://localhost:8080/health";
  healthIntervalSecs = 10;
  healthTimeoutSecs = 5;
};

Query health status from the host:

mvmctl vm status
mvmctl vm inspect <name>

All services run as a built-in non-root user (default: mvm, uid 900) — never as root. Secrets at /mnt/secrets are owned by root:<serviceGroup> with mode 0440, so only members of the service group can read them. Custom users are automatically added to this group.
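Given those permissions, a service running as the service user (or any member of the service group) can read secret files directly from /mnt/secrets. A minimal sketch — the secret filename api-key is hypothetical:

```nix
services.my-app = {
  # The service user belongs to the service group, so files under
  # /mnt/secrets (root:<serviceGroup>, mode 0440) are readable here.
  # "api-key" is an illustrative secret filename.
  command = "${pkgs.bash}/bin/bash -c 'API_KEY=$(cat /mnt/secrets/api-key); export API_KEY; exec ${pkgs.python3}/bin/python3 -m http.server 8080'";
};
```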

To change the default service user/group name, set serviceGroup:

mvm.lib.${system}.mkGuest {
  name = "my-app";
  serviceGroup = "app"; # default: "mvm"
  # ...
};

To run a service as a custom user, define it in users and reference it in the service. The custom user is automatically added to the service group for secrets access:

users.app = {
  uid = 1000;
  group = "app";
  home = "/home/app";
};

services.my-app = {
  command = "${pkgs.nodejs}/bin/node /app/server.js";
  user = "app"; # overrides the default serviceGroup user
};

The preStart script always runs as root regardless of the user setting, so it can perform privileged setup like mounting filesystems or creating directories.
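For example, preStart can prepare a directory on the writable data drive and hand ownership to the service user before the unprivileged process starts — a sketch, assuming an app user defined in users.app as above:

```nix
services.my-app = {
  # Runs as root: create a writable workspace on the data drive
  # and hand it to the "app" user before the service starts.
  preStart = ''
    mkdir -p /mnt/data/my-app
    chown app:app /mnt/data/my-app
  '';
  command = "${pkgs.nodejs}/bin/node /app/server.js";
  user = "app";
};
```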

By default, mkGuest produces an ext4 rootfs. For smaller images, use squashfs:

mvm.lib.${system}.mkGuest {
  name = "my-app";
  rootfsType = "squashfs"; # LZ4-compressed, ~76% smaller
  # ...
};

Squashfs images are read-only — the init system mounts tmpfs overlays on /etc and /var automatically.
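A squashfs guest can therefore still log under /var: writes land in the tmpfs overlay rather than the read-only image (and, being tmpfs, do not survive a reboot). A sketch:

```nix
mvm.lib.${system}.mkGuest {
  name = "my-app";
  rootfsType = "squashfs";

  services.my-app = {
    command = "${pkgs.python3}/bin/python3 -m http.server 8080";
    # /var is a tmpfs overlay on squashfs, so this path is writable
    logFile = "/var/log/my-app.log";
  };
};
```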

When you run mvmctl build --flake .:

  1. The flake is copied into the Lima VM
  2. nix build runs inside the Lima VM
  3. The resulting closure is packed into the rootfs
  4. Kernel and rootfs artifacts are cached
  5. Subsequent builds with unchanged flake.lock reuse the cache

The --profile flag selects which Nix output to build:

mvmctl build --flake . --profile minimal
mvmctl build --flake . --profile gateway

These map to packages.${system}.<profile> in the flake.
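To expose multiple profiles, define additional outputs alongside default — a sketch reusing the mkGuest options shown above (the minimal and gateway contents are illustrative):

```nix
packages.${system} = {
  # Built with: mvmctl build --flake . --profile minimal
  minimal = mvm.lib.${system}.mkGuest {
    name = "minimal";
    packages = [ ];
  };

  # Built with: mvmctl build --flake . --profile gateway
  gateway = mvm.lib.${system}.mkGuest {
    name = "gateway";
    packages = [ pkgs.curl ];
    services.gateway = {
      command = "${pkgs.python3}/bin/python3 -m http.server 8080";
    };
  };
};
```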