From 539b6f822ace4fcc57b0f8dc9eee63ad742d5002 Mon Sep 17 00:00:00 2001 From: Asmir Avdicevic Date: Tue, 24 Mar 2026 12:28:11 +0100 Subject: [PATCH 1/6] feat: add Apple container backend for patchbay-vm Add an alternative VM backend using Apple's Containerization framework (macOS 26 + Apple Silicon). The container backend provides sub-second boot times and native VirtioFS mounts, replacing QEMU's SSH-based interaction with `container exec`. - Split vm.rs into common.rs (shared types/helpers), qemu.rs (QEMU backend), and container.rs (Apple container backend) - Add --backend auto|qemu|container flag with auto-detection - Default to all host CPUs and 8GB RAM for both backends - Update docs/guide/vm.md with setup guides for both backends - Add .container-vm and .tmp to .gitignore --- .cargo/config.toml | 2 + .gitignore | 2 + docs/guide/vm.md | 181 +++- patchbay-vm/Cargo.toml | 4 +- patchbay-vm/src/common.rs | 873 ++++++++++++++++++ patchbay-vm/src/container.rs | 514 +++++++++++ patchbay-vm/src/main.rs | 84 +- patchbay-vm/src/{vm.rs => qemu.rs} | 1338 ++++++---------------------- 8 files changed, 1904 insertions(+), 1094 deletions(-) create mode 100644 .cargo/config.toml create mode 100644 patchbay-vm/src/common.rs create mode 100644 patchbay-vm/src/container.rs rename patchbay-vm/src/{vm.rs => qemu.rs} (59%) diff --git a/.cargo/config.toml b/.cargo/config.toml new file mode 100644 index 0000000..67017c2 --- /dev/null +++ b/.cargo/config.toml @@ -0,0 +1,2 @@ +[target.aarch64-unknown-linux-musl] +linker = "aarch64-linux-musl-gcc" \ No newline at end of file diff --git a/.gitignore b/.gitignore index 6034cf2..28f9b09 100644 --- a/.gitignore +++ b/.gitignore @@ -2,6 +2,8 @@ /target /log-* /.qemu-vm +/.container-vm +/.tmp /.netsim-work /resources /docs/book diff --git a/docs/guide/vm.md b/docs/guide/vm.md index f5621dd..c20dd81 100644 --- a/docs/guide/vm.md +++ b/docs/guide/vm.md @@ -2,8 +2,25 @@ patchbay requires Linux network namespaces, which means it cannot run natively 
on macOS or Windows. The `patchbay-vm` crate solves this by -wrapping your simulations and tests in a QEMU Linux VM, giving you the -same experience on any development machine. +wrapping your simulations and tests in a Linux VM, giving you the same +experience on any development machine. + +Two VM backends are available: + +| Backend | Platform | Boot time | How it works | +|---------|----------|-----------|--------------| +| **QEMU** | Linux, macOS (Intel and Apple Silicon) | 30-60s | Full Debian cloud image with SSH access | +| **Apple container** | macOS 26+ Apple Silicon only | Sub-second | Lightweight VM via Apple's [Containerization](https://github.com/apple/containerization) framework | + +By default, `patchbay-vm` auto-detects the best backend. On macOS 26 +with Apple Silicon and the `container` CLI installed it picks the +container backend; everywhere else it falls back to QEMU. You can force +a backend with `--backend`: + +```bash +patchbay-vm --backend container run sim.toml +patchbay-vm --backend qemu run sim.toml +``` ## Installing patchbay-vm @@ -85,11 +102,7 @@ patchbay-vm ssh -- nft list ruleset ## How it works -`patchbay-vm` downloads a Debian cloud image (cached in -`~/.local/share/patchbay/qemu-images/`), creates a COW disk backed by -it, and boots QEMU with cloud-init for initial provisioning. The guest -gets SSH access via a host-forwarded port (default 2222) and three shared -mount points: +Both backends share the same three mount points inside the guest: | Guest path | Host path | Access | Purpose | |------------|-----------|--------|---------| @@ -97,19 +110,155 @@ mount points: | `/target` | Cargo target dir | Read-only | Build artifacts | | `/work` | Work directory | Read-write | Simulation output and logs | +### QEMU backend + +`patchbay-vm` downloads a Debian cloud image (cached in +`~/.local/share/patchbay/qemu-images/`), creates a COW disk backed by +it, and boots QEMU with cloud-init for initial provisioning. 
The guest +gets SSH access via a host-forwarded port (default 2222). + File sharing uses virtiofs when available (faster, requires virtiofsd on the host) and falls back to 9p. Hardware acceleration is auto-detected: KVM on Linux, HVF on macOS, TCG emulation as a last resort. +### Apple container backend + +The container backend uses Apple's +[Containerization](https://github.com/apple/containerization) framework, +which runs each container inside its own lightweight Linux VM powered by +the Virtualization.framework hypervisor. Apple's default kernel ships +with everything patchbay needs built-in: network namespaces, nftables, +netem/HTB/TBF qdiscs, veth pairs, and bridges. + +Instead of SSH, commands execute through `container exec`. Directories +are shared via native VirtioFS mounts (no separate virtiofsd process). +On first boot the guest installs required userspace tools (iproute2, +nftables, etc.) from the Debian repositories; subsequent runs skip this +step. + +## Setting up the QEMU backend on macOS + +1. Install QEMU: + +```bash +brew install qemu +``` + +2. For faster file sharing, install virtiofsd (optional but recommended): + +```bash +brew install virtiofsd +``` + +3. Build the musl runner binary. On Apple Silicon: + +```bash +rustup target add aarch64-unknown-linux-musl +brew install filosottile/musl-cross/musl-cross +``` + +Add to `.cargo/config.toml`: + +```toml +[target.aarch64-unknown-linux-musl] +linker = "aarch64-linux-musl-gcc" +``` + +Then build: + +```bash +cargo build --release --target aarch64-unknown-linux-musl -p patchbay-runner --bin patchbay +``` + +On Intel Macs, replace `aarch64` with `x86_64` throughout. + +4. Run: + +```bash +patchbay-vm --backend qemu run \ + --patchbay-version "path:target/aarch64-unknown-linux-musl/release/patchbay" \ + ./path/to/sim.toml +``` + +The first boot downloads a Debian cloud image and provisions the VM, +which takes 1-2 minutes. Subsequent runs reuse the running VM. 
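+
+The backend auto-detection described at the top of this guide can be
+sketched in shell. This is an illustrative approximation of the
+documented rule (macOS + Apple Silicon + `container` CLI present →
+container backend, otherwise QEMU), not the actual implementation; in
+particular it omits the macOS 26 version check:

```shell
# Hypothetical sketch of the documented backend auto-detection rule.
# Assumption: the real patchbay-vm check also verifies the macOS
# version (26+), which is omitted here for brevity.
pick_backend() {
  if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ] \
    && command -v container >/dev/null 2>&1; then
    echo "container"
  else
    echo "qemu"
  fi
}

backend="$(pick_backend)"
echo "auto-detected backend: $backend"
```

+Passing `--backend qemu` or `--backend container` bypasses this
+detection entirely.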
+ +## Setting up the Apple container backend + +### Requirements + +- Mac with Apple Silicon (M1 or later) +- macOS 26 (Tahoe) or later +- [container CLI](https://github.com/apple/container) installed + +### Installation + +1. Download the latest signed installer package from the + [container releases page](https://github.com/apple/container/releases). + +2. Double-click the package and follow the prompts. The installer places + binaries under `/usr/local`. + +3. Start the system service: + +```bash +container system start +``` + +4. Verify it works: + +```bash +container run --rm debian:trixie-slim echo "hello from container" +``` + +### Building the musl target + +Simulations run inside an ARM64 Linux VM, so the patchbay runner binary +must be cross-compiled for `aarch64-unknown-linux-musl`. + +1. Install the Rust target and a musl cross-compiler: + +```bash +rustup target add aarch64-unknown-linux-musl +brew install filosottile/musl-cross/musl-cross +``` + +2. Tell Cargo which linker to use. Add to `.cargo/config.toml` (create + it if it does not exist): + +```toml +[target.aarch64-unknown-linux-musl] +linker = "aarch64-linux-musl-gcc" +``` + +3. Build the runner binary: + +```bash +cargo build --release --target aarch64-unknown-linux-musl -p patchbay-runner --bin patchbay +``` + +### Running a simulation + +```bash +patchbay-vm --backend container run \ + --patchbay-version "path:target/aarch64-unknown-linux-musl/release/patchbay" ./path/to/sim.toml +``` + +On the first run the container backend pulls the Debian base image and +installs packages (takes about 15 seconds). Subsequent runs reuse the +existing container and skip provisioning entirely. + ## Configuration All settings have sensible defaults. Override them through environment -variables when needed: +variables when needed. 
+ +### QEMU backend | Variable | Default | Description | |----------|---------|-------------| -| `QEMU_VM_MEM_MB` | 4096 | Guest RAM in megabytes | -| `QEMU_VM_CPUS` | 4 | Guest CPU count | +| `QEMU_VM_MEM_MB` | 8192 | Guest RAM in megabytes | +| `QEMU_VM_CPUS` | all | Guest CPU count (defaults to all host CPUs) | | `QEMU_VM_SSH_PORT` | 2222 | Host port forwarded to guest SSH | | `QEMU_VM_NAME` | patchbay-vm | VM instance name | | `QEMU_VM_DISK_GB` | 40 | Disk size in gigabytes | @@ -117,3 +266,15 @@ variables when needed: VM state lives in `.qemu-vm//` in your project directory. The disk image uses COW backing, so it only consumes space for blocks that differ from the base image. + +### Apple container backend + +| Variable | Default | Description | +|----------|---------|-------------| +| `CONTAINER_VM_MEM_MB` | 8192 | Guest RAM in megabytes | +| `CONTAINER_VM_CPUS` | all | Guest CPU count (defaults to all host CPUs) | +| `CONTAINER_VM_IMAGE` | debian:trixie-slim | OCI image to use | +| `CONTAINER_VM_NAME` | patchbay | Container instance name | + +Container state lives in `.container-vm//` in your project +directory. 
diff --git a/patchbay-vm/Cargo.toml b/patchbay-vm/Cargo.toml index 82b74ee..620b6f2 100644 --- a/patchbay-vm/Cargo.toml +++ b/patchbay-vm/Cargo.toml @@ -1,8 +1,8 @@ [package] name = "patchbay-vm" version = "0.1.0" -description = "QEMU VM wrapper for running patchbay simulations on macOS" -keywords = ["network", "simulation", "qemu", "vm"] +description = "VM wrapper for running patchbay simulations (QEMU or Apple container backend)" +keywords = ["network", "simulation", "qemu", "vm", "container"] edition.workspace = true license.workspace = true authors.workspace = true diff --git a/patchbay-vm/src/common.rs b/patchbay-vm/src/common.rs new file mode 100644 index 0000000..36be194 --- /dev/null +++ b/patchbay-vm/src/common.rs @@ -0,0 +1,873 @@ +use std::{ + collections::HashMap, + path::{Path, PathBuf}, + process::{Command, Stdio}, +}; + +use anyhow::{anyhow, bail, Context, Result}; +use patchbay_utils::{ + assets::{infer_binary_mode, parse_binary_overrides, BinarySpec}, + binary_cache::set_executable, +}; +use serde::Deserialize; + +// --------------------------------------------------------------------------- +// Shared constants +// --------------------------------------------------------------------------- + +const RELEASE_MUSL_ASSET_X86: &str = "patchbay-x86_64-unknown-linux-musl.tar.gz"; +const RELEASE_MUSL_ASSET_ARM64: &str = "patchbay-aarch64-unknown-linux-musl.tar.gz"; +const GITHUB_REPO: &str = "https://github.com/n0-computer/patchbay.git"; +const DEFAULT_MUSL_TARGET_X86: &str = "x86_64-unknown-linux-musl"; +const DEFAULT_MUSL_TARGET_ARM64: &str = "aarch64-unknown-linux-musl"; + +pub const DEFAULT_MEM_MB: &str = "8192"; + +/// Returns the number of logical CPUs as a string, for use as the default guest CPU count. 
+pub fn default_cpus() -> String { + std::thread::available_parallelism() + .map(|n| n.get().to_string()) + .unwrap_or_else(|_| "4".to_string()) +} + +// --------------------------------------------------------------------------- +// Shared types +// --------------------------------------------------------------------------- + +#[derive(Debug, Clone)] +pub struct RunVmArgs { + pub sim_inputs: Vec, + pub work_dir: PathBuf, + pub binary_overrides: Vec, + pub verbose: bool, + pub recreate: bool, + pub patchbay_version: String, +} + +#[derive(Debug, Clone)] +pub struct TestVmArgs { + pub filter: Option, + pub target: String, + pub packages: Vec, + pub tests: Vec, + pub recreate: bool, + pub cargo_args: Vec, +} + +#[derive(Debug, Clone, Deserialize, Default)] +pub struct VmExtends { + pub file: String, +} + +#[derive(Debug, Clone, Deserialize, Default)] +pub struct VmSimMeta { + pub binaries: Option, +} + +#[derive(Debug, Clone, Deserialize, Default)] +pub struct VmSimFile { + #[serde(default)] + pub sim: VmSimMeta, + #[serde(default)] + pub extends: Vec, + #[serde(default, rename = "binary")] + pub binaries: Vec, +} + +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct VmBuildRequest { + pub source_dir: PathBuf, + pub example: Option, + pub bin: Option, + pub features: Vec, + pub all_features: bool, +} + +// --------------------------------------------------------------------------- +// Host arch helpers +// --------------------------------------------------------------------------- + +pub fn is_arm64_host() -> bool { + std::env::consts::ARCH == "aarch64" +} + +pub fn default_musl_target() -> &'static str { + if is_arm64_host() { + DEFAULT_MUSL_TARGET_ARM64 + } else { + DEFAULT_MUSL_TARGET_X86 + } +} + +pub fn release_musl_asset() -> &'static str { + if is_arm64_host() { + RELEASE_MUSL_ASSET_ARM64 + } else { + RELEASE_MUSL_ASSET_X86 + } +} + +// --------------------------------------------------------------------------- +// Build / staging helpers +// 
--------------------------------------------------------------------------- + +/// Resolve and stage the patchbay runner binary for use inside the guest. +pub fn ensure_guest_runner_binary( + work_dir: &Path, + target_dir: &Path, + version: &str, +) -> Result { + let source = resolve_vm_runner_binary(work_dir, target_dir, version)?; + let staged_dir = work_dir.join(".patchbay-bin"); + std::fs::create_dir_all(&staged_dir) + .with_context(|| format!("create {}", staged_dir.display()))?; + let staged = staged_dir.join("patchbay"); + std::fs::copy(&source, &staged) + .with_context(|| format!("copy {} -> {}", source.display(), staged.display()))?; + set_executable(&staged)?; + Ok("/work/.patchbay-bin/patchbay".to_string()) +} + +fn resolve_vm_runner_binary( + work_dir: &Path, + _target_dir: &Path, + version: &str, +) -> Result { + match std::env::consts::OS { + "linux" | "macos" => { + if let Some(path) = version.strip_prefix("path:") { + let bin = PathBuf::from(path); + if !bin.exists() { + bail!("--patchbay-version path does not exist: {}", bin.display()); + } + if bin.is_dir() { + bail!( + "--patchbay-version path points to a directory, expected executable file: {}", + bin.display() + ); + } + return Ok(bin); + } + if let Some(git_ref) = version.strip_prefix("git:") { + build_musl_from_git_ref(work_dir, git_ref) + } else { + download_release_runner(work_dir, version) + } + } + other => bail!("run-vm is not supported on host OS '{}'", other), + } +} + +fn download_release_runner(work_dir: &Path, version: &str) -> Result { + need_cmd("curl")?; + need_cmd("tar")?; + let cache_root = work_dir.join(".vm-cache"); + std::fs::create_dir_all(&cache_root) + .with_context(|| format!("create {}", cache_root.display()))?; + let version_key = if version == "latest" { + "latest".to_string() + } else { + normalize_release_tag(version) + }; + let archive = cache_root.join(format!( + "{}-{}", + version_key.replace('/', "_"), + release_musl_asset() + )); + let unpack = 
cache_root.join(format!( + "release-{}-{}", + version_key.replace('/', "_"), + default_musl_target() + )); + let cached_bin = unpack.join("patchbay"); + if cached_bin.exists() { + return Ok(cached_bin); + } + + let url = if version == "latest" { + format!( + "https://github.com/n0-computer/patchbay/releases/latest/download/{}", + release_musl_asset() + ) + } else { + format!( + "https://github.com/n0-computer/patchbay/releases/download/{}/{}", + normalize_release_tag(version), + release_musl_asset() + ) + }; + + run_checked( + Command::new("curl").args(["-fL", &url, "-o"]).arg(&archive), + "download patchbay musl release", + )?; + + if unpack.exists() { + std::fs::remove_dir_all(&unpack).with_context(|| format!("remove {}", unpack.display()))?; + } + std::fs::create_dir_all(&unpack).with_context(|| format!("create {}", unpack.display()))?; + run_checked( + Command::new("tar") + .arg("-xzf") + .arg(&archive) + .arg("-C") + .arg(&unpack), + "extract patchbay musl release", + )?; + + let bin = find_file_named(&unpack, "patchbay") + .with_context(|| format!("find patchbay binary under {}", unpack.display()))?; + set_executable(&bin)?; + Ok(bin) +} + +fn build_musl_from_git_ref(work_dir: &Path, git_ref: &str) -> Result { + let checkout_root = work_dir.join(".vm-cache").join("git"); + std::fs::create_dir_all(&checkout_root) + .with_context(|| format!("create {}", checkout_root.display()))?; + let checkout = checkout_root.join("patchbay"); + + if !checkout.exists() { + run_checked( + Command::new("git") + .args(["clone", "--no-tags", GITHUB_REPO]) + .arg(&checkout), + "clone patchbay repo", + )?; + } + + run_checked( + Command::new("git") + .arg("-C") + .arg(&checkout) + .args(["fetch", "--all", "--prune"]), + "git fetch patchbay repo", + )?; + run_checked( + Command::new("git") + .arg("-C") + .arg(&checkout) + .args(["checkout", git_ref]), + "git checkout requested ref", + )?; + + let target_dir = work_dir.join(".vm-cache").join("git-target"); + 
std::fs::create_dir_all(&target_dir)?; + run_checked( + Command::new("cargo") + .args([ + "build", + "--release", + "--target", + default_musl_target(), + "--bin", + "patchbay", + ]) + .env("CARGO_TARGET_DIR", &target_dir) + .current_dir(&checkout), + "build patchbay from git ref", + )?; + let bin = target_dir + .join(default_musl_target()) + .join("release") + .join("patchbay"); + if !bin.exists() { + bail!("built patchbay binary missing at {}", bin.display()); + } + Ok(bin) +} + +/// Assemble `--binary` overrides for binaries that need building on the host. +pub fn assemble_guest_build_overrides( + target_dir: &Path, + args: &RunVmArgs, +) -> Result> { + let user_override_names = parse_binary_overrides(&args.binary_overrides)? + .into_keys() + .collect::>(); + let sim_files = expand_vm_sim_inputs(&args.sim_inputs)?; + let mut requested: HashMap = HashMap::new(); + let mut first_seen: HashMap = HashMap::new(); + + for sim_path in sim_files { + let (sim, sim_root) = load_vm_sim(&sim_path)?; + let merged = merged_vm_binary_specs(&sim, &sim_path)?; + for spec in merged.into_values() { + if user_override_names.contains(&spec.name) { + continue; + } + if infer_binary_mode(&spec)? 
!= "build" { + continue; + } + if spec.repo.is_some() { + bail!( + "VM auto-override does not support repo-based build spec '{}' in {}", + spec.name, + sim_path.display() + ); + } + let source_dir = resolve_vm_build_source_dir(&spec, &sim_root)?; + let req = VmBuildRequest { + source_dir, + example: spec.example.clone(), + bin: spec.bin.clone(), + features: spec.features.clone(), + all_features: spec.all_features, + }; + if let Some(existing) = requested.get(&spec.name) { + if existing != &req { + let first = first_seen + .get(&spec.name) + .map(|p| p.display().to_string()) + .unwrap_or_else(|| "".to_string()); + bail!( + "duplicate build spec '{}' differs across sims: {} vs {}", + spec.name, + first, + sim_path.display() + ); + } + continue; + } + first_seen.insert(spec.name.clone(), sim_path.clone()); + requested.insert(spec.name.clone(), req); + } + } + + let mut names: Vec = requested.keys().cloned().collect(); + names.sort(); + let mut out = Vec::new(); + for name in names { + let req = requested + .get(&name) + .ok_or_else(|| anyhow!("missing build request for '{}'", name))?; + let guest_path = build_vm_binary_and_guest_path(target_dir, &name, req)?; + out.push(format!("{name}:path:{guest_path}")); + } + Ok(out) +} + +pub fn expand_vm_sim_inputs(inputs: &[PathBuf]) -> Result> { + let mut sims = Vec::new(); + for input in inputs { + if input.is_file() { + if input.extension().and_then(|s| s.to_str()) == Some("toml") { + sims.push(input.clone()); + } + continue; + } + if input.is_dir() { + for entry in std::fs::read_dir(input) + .with_context(|| format!("read sim dir {}", input.display()))? 
+ { + let entry = entry?; + let path = entry.path(); + if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some("toml") { + sims.push(path); + } + } + continue; + } + bail!("sim input path does not exist: {}", input.display()); + } + sims.sort(); + sims.dedup(); + Ok(sims) +} + +pub fn load_vm_sim(sim_path: &Path) -> Result<(VmSimFile, PathBuf)> { + let text = std::fs::read_to_string(sim_path) + .with_context(|| format!("read sim {}", sim_path.display()))?; + let sim: VmSimFile = + toml::from_str(&text).with_context(|| format!("parse sim {}", sim_path.display()))?; + let root = find_ancestor_with_file(sim_path, "Cargo.toml") + .unwrap_or_else(|| sim_path.parent().unwrap_or(Path::new(".")).to_path_buf()); + Ok((sim, root)) +} + +fn merged_vm_binary_specs(sim: &VmSimFile, sim_path: &Path) -> Result> { + let mut merged = HashMap::new(); + for spec in load_vm_extends_binaries(sim, sim_path)? + .into_iter() + .chain(load_vm_shared_binaries(sim, sim_path)?) + .chain(sim.binaries.clone()) + { + merged.insert(spec.name.clone(), spec); + } + Ok(merged) +} + +fn load_vm_extends_binaries(sim: &VmSimFile, sim_path: &Path) -> Result> { + let sim_dir = sim_path.parent().unwrap_or(Path::new(".")); + let mut out = Vec::new(); + for ext in &sim.extends { + let path = sim_dir.join(&ext.file); + let text = std::fs::read_to_string(&path) + .with_context(|| format!("read extends file {}", path.display()))?; + let parsed: VmSimFile = toml::from_str(&text) + .with_context(|| format!("parse extends file {}", path.display()))?; + out.extend(parsed.binaries); + } + Ok(out) +} + +fn load_vm_shared_binaries(sim: &VmSimFile, sim_path: &Path) -> Result> { + #[derive(Deserialize, Default)] + struct BinaryFile { + #[serde(default, rename = "binary")] + binaries: Vec, + } + + let Some(ref_name) = sim.sim.binaries.as_deref() else { + return Ok(vec![]); + }; + let sim_dir = sim_path.parent().unwrap_or(Path::new(".")); + let path = sim_dir.join(ref_name); + let text = 
std::fs::read_to_string(&path) + .with_context(|| format!("read shared binaries file {}", path.display()))?; + let parsed: BinaryFile = toml::from_str(&text).context("parse shared binaries file")?; + Ok(parsed.binaries) +} + +fn resolve_vm_build_source_dir(spec: &BinarySpec, default_root: &Path) -> Result { + if let Some(path) = &spec.path { + let resolved = if path.is_absolute() { + path.clone() + } else { + default_root.join(path) + }; + if resolved.is_file() { + bail!( + "binary '{}' mode=build path must be a directory, got file {}", + spec.name, + resolved.display() + ); + } + return Ok(resolved); + } + Ok(default_root.to_path_buf()) +} + +fn build_vm_binary_and_guest_path( + target_dir: &Path, + name: &str, + req: &VmBuildRequest, +) -> Result { + let mut base_args: Vec = vec![ + "build".into(), + "--release".into(), + "--target".into(), + default_musl_target().into(), + ]; + if req.all_features { + base_args.push("--all-features".into()); + } else if !req.features.is_empty() { + base_args.push("--features".into()); + base_args.push(req.features.join(",")); + } + + if let Some(example) = req.example.as_deref() { + let mut args = base_args.clone(); + args.push("--example".into()); + args.push(example.to_string()); + run_checked( + Command::new("cargo") + .args(args) + .env("CARGO_TARGET_DIR", target_dir) + .current_dir(&req.source_dir), + "build VM example binary", + )?; + return Ok(format!( + "/target/{}/release/examples/{}", + default_musl_target(), + example + )); + } + + if let Some(bin) = req.bin.as_deref() { + let mut args = base_args; + args.push("--bin".into()); + args.push(bin.to_string()); + run_checked( + Command::new("cargo") + .args(args) + .env("CARGO_TARGET_DIR", target_dir) + .current_dir(&req.source_dir), + "build VM bin binary", + )?; + return Ok(format!("/target/{}/release/{}", default_musl_target(), bin)); + } + + let mut example_args = base_args.clone(); + example_args.push("--example".into()); + example_args.push(name.to_string()); + let 
example_status = Command::new("cargo") + .args(example_args) + .env("CARGO_TARGET_DIR", target_dir) + .current_dir(&req.source_dir) + .status() + .context("spawn cargo build --example for VM")?; + if example_status.success() { + return Ok(format!( + "/target/{}/release/examples/{}", + default_musl_target(), + name + )); + } + + let mut bin_args = base_args; + bin_args.push("--bin".into()); + bin_args.push(name.to_string()); + run_checked( + Command::new("cargo") + .args(bin_args) + .env("CARGO_TARGET_DIR", target_dir) + .current_dir(&req.source_dir), + "build VM fallback bin", + )?; + Ok(format!( + "/target/{}/release/{}", + default_musl_target(), + name + )) +} + +/// Build test binaries on the host and return their paths. +pub fn build_and_collect_test_binaries( + target_dir: &Path, + target: &str, + packages: &[String], + tests: &[String], + cargo_args: &[String], +) -> Result> { + use std::io::{BufRead, BufReader}; + + let mut cmd = Command::new("cargo"); + cmd.args([ + "test", + "--no-run", + "--target", + target, + "--message-format", + "json", + ]); + for pkg in packages { + cmd.args(["-p", pkg]); + } + for test in tests { + cmd.args(["--test", test]); + } + if !cargo_args.is_empty() { + cmd.args(cargo_args); + } + cmd.env("CARGO_TARGET_DIR", target_dir) + .env("CARGO_TERM_COLOR", "always"); + eprintln!("[cargo] {cmd:?}"); + cmd.stdout(Stdio::piped()); + cmd.stderr(Stdio::piped()); + let mut child = cmd.spawn().context("spawn cargo test --no-run")?; + + let stderr = child.stderr.take().unwrap(); + let stderr_thread = std::thread::spawn(move || { + for line in BufReader::new(stderr).lines() { + let Ok(line) = line else { break }; + eprintln!("[cargo] {line}"); + } + }); + + let stdout = child.stdout.take().unwrap(); + let mut stdout_lines = Vec::new(); + for line in BufReader::new(stdout).lines() { + let Ok(line) = line else { break }; + stdout_lines.push(line); + } + + let status = child.wait().context("wait cargo test --no-run")?; + let _ = 
stderr_thread.join(); + if !status.success() { + for line in &stdout_lines { + let Ok(v) = serde_json::from_str::(line) else { + continue; + }; + if v.get("reason").and_then(|x| x.as_str()) == Some("compiler-message") { + if let Some(rendered) = v + .get("message") + .and_then(|m| m.get("rendered")) + .and_then(|r| r.as_str()) + { + eprint!("{rendered}"); + } + } + } + bail!("cargo test --no-run failed"); + } + + let mut bins = Vec::new(); + for line in &stdout_lines { + let Ok(v) = serde_json::from_str::(line) else { + continue; + }; + if v.get("reason").and_then(|x| x.as_str()) != Some("compiler-artifact") { + continue; + } + if !v + .get("profile") + .and_then(|p| p.get("test")) + .and_then(|b| b.as_bool()) + .unwrap_or(false) + { + continue; + } + let Some(exe) = v.get("executable").and_then(|e| e.as_str()) else { + continue; + }; + let path = PathBuf::from(exe); + if path.exists() && path.is_file() { + bins.push(path); + } + } + bins.sort(); + bins.dedup(); + Ok(bins) +} + +/// Copy test binaries into the work directory and return their guest paths. 
+pub fn stage_test_binaries(work_dir: &Path, bins: &[PathBuf]) -> Result> { + let stage_dir = work_dir.join("binaries").join("tests"); + std::fs::create_dir_all(&stage_dir) + .with_context(|| format!("create {}", stage_dir.display()))?; + #[cfg(unix)] + { + use std::os::unix::fs::PermissionsExt; + std::fs::set_permissions(&stage_dir, std::fs::Permissions::from_mode(0o777))?; + } + let mut staged_guest = Vec::new(); + for bin in bins { + let file = bin + .file_name() + .and_then(|s| s.to_str()) + .ok_or_else(|| anyhow!("bad test binary name: {}", bin.display()))?; + let dest = stage_dir.join(file); + std::fs::copy(bin, &dest) + .with_context(|| format!("copy {} -> {}", bin.display(), dest.display()))?; + set_executable(&dest)?; + staged_guest.push(format!("/work/binaries/tests/{file}")); + } + Ok(staged_guest) +} + +// --------------------------------------------------------------------------- +// Pure utility functions +// --------------------------------------------------------------------------- + +pub fn log_msg(prefix: &str, msg: &str) { + eprintln!("[{prefix}] {msg}"); +} + +pub fn run_checked(cmd: &mut Command, label: &str) -> Result<()> { + let status = cmd.status().with_context(|| format!("run '{label}'"))?; + if status.success() { + Ok(()) + } else { + bail!("command failed: {label} (status {status})") + } +} + +pub fn need_cmd(name: &str) -> Result<()> { + if command_exists(name)? { + Ok(()) + } else { + bail!("missing required command: {name}") + } +} + +pub fn command_exists(name: &str) -> Result { + Ok(Command::new("sh") + .arg("-lc") + .arg(format!("command -v {name} >/dev/null 2>&1")) + .status() + .context("check command")? 
+ .success()) +} + +pub fn env_or(name: &str, default: &str) -> String { + std::env::var(name).unwrap_or_else(|_| default.to_string()) +} + +pub fn cargo_target_dir() -> Result { + let out = Command::new("cargo") + .args(["metadata", "--format-version", "1", "--no-deps"]) + .output() + .context("run cargo metadata for target dir")?; + if !out.status.success() { + bail!("cargo metadata failed while resolving target dir"); + } + let v: serde_json::Value = + serde_json::from_slice(&out.stdout).context("parse cargo metadata json")?; + let dir = v + .get("target_directory") + .and_then(|s| s.as_str()) + .context("cargo metadata missing target_directory")?; + Ok(PathBuf::from(dir)) +} + +pub fn remove_if_exists(path: &Path) -> Result<()> { + if !path.exists() { + return Ok(()); + } + if path.is_dir() { + std::fs::remove_dir_all(path).with_context(|| format!("remove {}", path.display())) + } else { + std::fs::remove_file(path).with_context(|| format!("remove {}", path.display())) + } +} + +pub fn read_pid(path: &Path) -> Result> { + if !path.exists() { + return Ok(None); + } + let text = std::fs::read_to_string(path).with_context(|| format!("read {}", path.display()))?; + Ok(text.trim().parse::().ok()) +} + +pub fn pid_alive(pid: i32) -> bool { + // SAFETY: kill with signal 0 is side-effect free and used only for liveness probing. + let rc = unsafe { nix::libc::kill(pid, 0) }; + if rc == 0 { + true + } else { + let errno = nix::errno::Errno::last_raw(); + errno == nix::libc::EPERM + } +} + +pub fn kill_pid(pid: i32) { + // SAFETY: best-effort process signal for known pid. + let _ = unsafe { nix::libc::kill(pid, nix::libc::SIGTERM) }; +} + +pub fn force_kill_pid(pid: i32) { + // SAFETY: best-effort forced process signal for known pid. 
+ let _ = unsafe { nix::libc::kill(pid, nix::libc::SIGKILL) }; +} + +pub fn abspath(path: &Path) -> Result { + if path.is_absolute() { + Ok(path.to_path_buf()) + } else { + Ok(std::env::current_dir()?.join(path)) + } +} + +pub fn to_guest_sim_path(workspace: &Path, sim: &Path) -> Result { + let sim_abs = if sim.is_absolute() { + sim.to_path_buf() + } else { + std::env::current_dir()?.join(sim) + }; + let rel = sim_abs.strip_prefix(workspace).with_context(|| { + format!( + "sim path {} must be under workspace {}", + sim_abs.display(), + workspace.display() + ) + })?; + Ok(format!("/app/{}", rel.display())) +} + +pub fn shell_join>(parts: &[T]) -> String { + parts + .iter() + .map(|p| shell_escape(p.as_ref())) + .collect::>() + .join(" ") +} + +pub fn shell_escape(s: &str) -> String { + if s.is_empty() { + return "''".to_string(); + } + if s.bytes().all(|b| { + matches!( + b, + b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'/' | b':' + ) + }) { + return s.to_string(); + } + format!("'{}'", s.replace('\'', "'\"'\"'")) +} + +pub fn find_file_named(root: &Path, file_name: &str) -> Result { + let mut stack = vec![root.to_path_buf()]; + while let Some(dir) = stack.pop() { + for ent in std::fs::read_dir(&dir).with_context(|| format!("read {}", dir.display()))? 
{ + let ent = ent?; + let path = ent.path(); + if path.is_dir() { + stack.push(path); + continue; + } + if path.file_name().and_then(|s| s.to_str()) == Some(file_name) { + return Ok(path); + } + } + } + bail!("file '{}' not found under {}", file_name, root.display()) +} + +pub fn find_ancestor_with_file(path: &Path, file_name: &str) -> Option { + let mut cur = if path.is_dir() { + path.to_path_buf() + } else { + path.parent()?.to_path_buf() + }; + loop { + if cur.join(file_name).is_file() { + return Some(cur); + } + if !cur.pop() { + return None; + } + } +} + +pub fn normalize_release_tag(version: &str) -> String { + if version.starts_with('v') { + version.to_string() + } else { + format!("v{version}") + } +} + +pub fn sanitize_filename(s: &str) -> String { + let out: String = s + .chars() + .map(|c| { + if c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_' { + c + } else { + '_' + } + }) + .collect(); + if out.is_empty() { + "base-image".to_string() + } else { + out + } +} + +pub fn fnv1a64(bytes: &[u8]) -> u64 { + const OFFSET: u64 = 0xcbf29ce484222325; + const PRIME: u64 = 0x100000001b3; + let mut h = OFFSET; + for b in bytes { + h ^= u64::from(*b); + h = h.wrapping_mul(PRIME); + } + h +} + +/// The guest package installation script shared by both backends. +pub const GUEST_PREPARE_SCRIPT: &str = "set -euo pipefail; export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH; export DEBIAN_FRONTEND=noninteractive; if ! command -v ip >/dev/null 2>&1 || ! command -v tc >/dev/null 2>&1 || ! command -v nft >/dev/null 2>&1 || ! command -v modprobe >/dev/null 2>&1 || ! 
command -v sysctl >/dev/null 2>&1; then apt-get update; apt-get install -y bridge-utils iproute2 iputils-ping iptables nftables net-tools curl iperf3 jq kmod procps; fi; modprobe sch_netem || true; sysctl -w net.ipv4.ip_forward=1"; diff --git a/patchbay-vm/src/container.rs b/patchbay-vm/src/container.rs new file mode 100644 index 0000000..2aaadff --- /dev/null +++ b/patchbay-vm/src/container.rs @@ -0,0 +1,514 @@ +//! Apple `container` CLI backend for patchbay-vm. +//! +//! Uses Apple's Containerization framework (macOS 26 + Apple Silicon) to run a +//! lightweight Linux VM via `container run`. Guest commands execute through +//! `container exec` instead of SSH. + +use std::{ + path::PathBuf, + process::{Command, Stdio}, + thread, + time::Duration, +}; + +use anyhow::{anyhow, bail, Context, Result}; + +use crate::common::{ + self, abspath, assemble_guest_build_overrides, build_and_collect_test_binaries, + cargo_target_dir, default_musl_target, ensure_guest_runner_binary, env_or, log_msg, + need_cmd, run_checked, stage_test_binaries, to_guest_sim_path, RunVmArgs, TestVmArgs, + GUEST_PREPARE_SCRIPT, +}; +use crate::util::stage_binary_overrides; + +// --------------------------------------------------------------------------- +// Constants +// --------------------------------------------------------------------------- + +const CONTAINER_STATE_DIR: &str = ".container-vm"; +const DEFAULT_CONTAINER_NAME: &str = "patchbay"; +const DEFAULT_IMAGE: &str = "debian:trixie-slim"; + +// --------------------------------------------------------------------------- +// Config +// --------------------------------------------------------------------------- + +#[derive(Debug, Clone)] +struct ContainerConfig { + name: String, + image: String, + mem_mb: String, + cpus: String, + workspace: PathBuf, + target_dir: PathBuf, + work_dir: PathBuf, + state_root: PathBuf, + recreate: bool, +} + +impl ContainerConfig { + fn from_args(args: &RunVmArgs) -> Result<Self> { + let cwd =
std::env::current_dir().context("get cwd")?; + let target_dir = match cargo_target_dir() { + Ok(dir) => dir, + Err(_) => { + let current_exe = std::env::current_exe().context("resolve current executable")?; + let profile_dir = current_exe + .parent() + .context("current executable has no parent")?; + let base = profile_dir + .parent() + .context("current executable profile dir has no parent")?; + base.to_path_buf() + } + }; + + Ok(Self { + name: env_or("CONTAINER_VM_NAME", DEFAULT_CONTAINER_NAME), + image: env_or("CONTAINER_VM_IMAGE", DEFAULT_IMAGE), + mem_mb: env_or("CONTAINER_VM_MEM_MB", common::DEFAULT_MEM_MB), + cpus: env_or("CONTAINER_VM_CPUS", &common::default_cpus()), + workspace: cwd, + target_dir, + work_dir: abspath(&args.work_dir)?, + state_root: std::env::current_dir()?.join(CONTAINER_STATE_DIR), + recreate: args.recreate, + }) + } + + fn from_defaults() -> Result<Self> { + let cwd = std::env::current_dir().context("get cwd")?; + let target_dir = match cargo_target_dir() { + Ok(dir) => dir, + Err(_) => cwd.join("target"), + }; + let default_work = cwd.join(".patchbay-work"); + + Ok(Self { + name: env_or("CONTAINER_VM_NAME", DEFAULT_CONTAINER_NAME), + image: env_or("CONTAINER_VM_IMAGE", DEFAULT_IMAGE), + mem_mb: env_or("CONTAINER_VM_MEM_MB", common::DEFAULT_MEM_MB), + cpus: env_or("CONTAINER_VM_CPUS", &common::default_cpus()), + workspace: cwd, + target_dir, + work_dir: PathBuf::from(env_or( + "CONTAINER_VM_WORK_DIR", + &default_work.display().to_string(), + )), + state_root: std::env::current_dir()?.join(CONTAINER_STATE_DIR), + recreate: false, + }) + } + + fn state_dir(&self) -> PathBuf { + self.state_root.join(&self.name) + } + + fn runtime_file(&self) -> PathBuf { + self.state_dir().join("runtime.env") + } +} + +fn log(msg: &str) { + log_msg("container", msg); +} + +// --------------------------------------------------------------------------- +// Public entrypoints +// --------------------------------------------------------------------------- + +pub fn
up_cmd(recreate: bool) -> Result<()> { + let mut cfg = ContainerConfig::from_defaults()?; + cfg.recreate = recreate; + up(&mut cfg) +} + +pub fn down_cmd() -> Result<()> { + let cfg = ContainerConfig::from_defaults()?; + down(&cfg) +} + +pub fn status_cmd() -> Result<()> { + let cfg = ContainerConfig::from_defaults()?; + println!("backend: container"); + println!("container-name: {}", cfg.name); + println!( + "running: {}", + if is_running(&cfg)? { "yes" } else { "no" } + ); + if cfg.runtime_file().exists() { + println!("runtime: {}", cfg.runtime_file().display()); + let text = std::fs::read_to_string(cfg.runtime_file())?; + print!("{text}"); + } + Ok(()) +} + +pub fn cleanup_cmd() -> Result<()> { + let cfg = ContainerConfig::from_defaults()?; + if is_running(&cfg)? { + down(&cfg)?; + } + common::remove_if_exists(&cfg.state_dir())?; + Ok(()) +} + +/// Maps the `Ssh` subcommand to `container exec`. +pub fn exec_cmd_cli(cmd: Vec<String>) -> Result<()> { + let cfg = ContainerConfig::from_defaults()?; + if cmd.is_empty() { + bail!("exec: missing command"); + } + let refs: Vec<&str> = cmd.iter().map(String::as_str).collect(); + exec_cmd(&cfg, &refs) +} + +pub fn run_sims(args: RunVmArgs) -> Result<()> { + let mut cfg = ContainerConfig::from_args(&args)?; + up(&mut cfg)?; + prepare_guest(&cfg)?; + run_in_guest(&cfg, &args)?; + Ok(()) +} + +pub fn run_tests(args: TestVmArgs) -> Result<()> { + let mut cfg = ContainerConfig::from_defaults()?; + cfg.recreate = args.recreate; + let target_dir = cargo_target_dir()?; + + let (test_bins, vm_result) = std::thread::scope(|s| { + let build = s.spawn(|| { + build_and_collect_test_binaries( + &target_dir, + &args.target, + &args.packages, + &args.tests, + &args.cargo_args, + ) + }); + let setup = s.spawn(|| { + up(&mut cfg)?; + prepare_guest(&cfg) + }); + (build.join(), setup.join()) + }); + let test_bins = test_bins + .map_err(|_| anyhow!("build thread panicked"))?
+ .context("building test binaries")?; + vm_result + .map_err(|_| anyhow!("container setup thread panicked"))? + .context("container setup")?; + + if test_bins.is_empty() { + bail!("no test binaries were built for target {}", args.target); + } + + let staged = stage_test_binaries(&cfg.work_dir, &test_bins)?; + + let forward_envs: &[&str] = &[ + "RUST_LOG", + "RUST_BACKTRACE", + "PATCHBAY_OUTDIR", + "PATCHBAY_SIM", + ]; + let mut env_pairs: Vec<String> = forward_envs + .iter() + .filter_map(|name| std::env::var(name).ok().map(|val| format!("{name}={val}"))) + .collect(); + env_pairs.push("PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin".into()); + + let mut passed = 0usize; + let mut failed = 0usize; + for guest_bin in staged { + let mut run_args: Vec<String> = Vec::new(); + if !env_pairs.is_empty() { + run_args.push("env".into()); + run_args.extend(env_pairs.iter().cloned()); + } + run_args.push(guest_bin.clone()); + if let Some(ref f) = args.filter { + run_args.push(f.clone()); + } + let run_refs: Vec<&str> = run_args.iter().map(|s| s.as_str()).collect(); + match exec_cmd(&cfg, &run_refs) { + Ok(()) => { + passed += 1; + println!("[test] PASS {guest_bin}"); + } + Err(err) => { + failed += 1; + println!("[test] FAIL {guest_bin}: {err}"); + } + } + } + println!("[test] summary: passed={passed} failed={failed}"); + if failed > 0 { + bail!("{} test binaries failed in container", failed); + } + Ok(()) +} + +// --------------------------------------------------------------------------- +// Lifecycle +// --------------------------------------------------------------------------- + +fn up(cfg: &mut ContainerConfig) -> Result<()> { + need_cmd("container")?; + std::fs::create_dir_all(cfg.state_dir()) + .with_context(|| format!("create {}", cfg.state_dir().display()))?; + std::fs::create_dir_all(&cfg.target_dir)?; + std::fs::create_dir_all(&cfg.work_dir)?; + + log(&format!("workspace={}", cfg.workspace.display())); + log(&format!("target={}", cfg.target_dir.display())); +
log(&format!("work={}", cfg.work_dir.display())); + + if cfg.recreate && is_running(cfg)? { + log("recreate requested; stopping existing container"); + down(cfg)?; + } + + if is_running(cfg)? { + check_running_mount_paths(cfg)?; + log("container already running"); + return Ok(()); + } + + log(&format!("starting container from {}", cfg.image)); + start_container(cfg)?; + wait_for_ready(cfg)?; + persist_runtime(cfg)?; + log(&format!("{} ready", cfg.name)); + Ok(()) +} + +fn down(cfg: &ContainerConfig) -> Result<()> { + if !is_running(cfg)? { + log(&format!("{} is not running", cfg.name)); + return Ok(()); + } + log(&format!("stopping {}", cfg.name)); + let _ = Command::new("container") + .args(["stop", &cfg.name]) + .status(); + // Remove the stopped container so the name can be reused. + let _ = Command::new("container") + .args(["rm", &cfg.name]) + .status(); + common::remove_if_exists(&cfg.runtime_file())?; + log(&format!("{} stopped", cfg.name)); + Ok(()) +} + +fn is_running(cfg: &ContainerConfig) -> Result<bool> { + let output = Command::new("container") + .args(["inspect", &cfg.name]) + .stdout(Stdio::piped()) + .stderr(Stdio::null()) + .output(); + match output { + Ok(out) => { + if !out.status.success() { + return Ok(false); + } + let text = String::from_utf8_lossy(&out.stdout); + Ok(text.contains("running") || text.contains("Running")) + } + Err(_) => Ok(false), + } +} + +fn start_container(cfg: &ContainerConfig) -> Result<()> { + // Remove any stopped container with the same name. + let _ = Command::new("container") + .args(["rm", &cfg.name]) + .stdout(Stdio::null()) + .stderr(Stdio::null()) + .status(); + + let mut cmd = Command::new("container"); + cmd.args(["run", "-d", "--name", &cfg.name]); + + cmd.args(["--cpus", &cfg.cpus]); + cmd.args(["--memory", &format!("{}M", cfg.mem_mb)]); + + // Mount workspace (read-only) at /app.
+ cmd.args([ + "--mount", + &format!( + "type=bind,source={},target=/app,readonly", + cfg.workspace.display() + ), + ]); + // Mount target dir (read-only) at /target. + cmd.args([ + "--mount", + &format!( + "type=bind,source={},target=/target,readonly", + cfg.target_dir.display() + ), + ]); + // Mount work dir (read-write) at /work. + cmd.args([ + "--mount", + &format!( + "type=bind,source={},target=/work", + cfg.work_dir.display() + ), + ]); + + cmd.arg(&cfg.image); + // Keep the container alive with a long sleep so we can exec into it. + cmd.args(["sleep", "infinity"]); + + run_checked(&mut cmd, "container run") +} + +fn wait_for_ready(cfg: &ContainerConfig) -> Result<()> { + log("waiting for container to be ready..."); + for _ in 0..60 { + if is_running(cfg)? { + // Verify we can actually exec into it. + let ok = Command::new("container") + .args(["exec", &cfg.name, "true"]) + .stdout(Stdio::null()) + .stderr(Stdio::null()) + .status() + .map(|s| s.success()) + .unwrap_or(false); + if ok { + log("container is ready"); + return Ok(()); + } + } + thread::sleep(Duration::from_millis(500)); + } + bail!( + "container '{}' did not become ready within 30 seconds", + cfg.name + ) +} + +// --------------------------------------------------------------------------- +// Guest interaction +// --------------------------------------------------------------------------- + +fn exec_cmd(cfg: &ContainerConfig, args: &[&str]) -> Result<()> { + let mut cmd = Command::new("container"); + cmd.args(["exec", &cfg.name]); + cmd.args(args); + run_checked(&mut cmd, "container exec") +} + +fn prepare_guest(cfg: &ContainerConfig) -> Result<()> { + exec_cmd(cfg, &["bash", "-lc", GUEST_PREPARE_SCRIPT]) +} + +fn run_in_guest(cfg: &ContainerConfig, args: &RunVmArgs) -> Result<()> { + let guest_exe = + ensure_guest_runner_binary(&cfg.work_dir, &cfg.target_dir, &args.patchbay_version)?; + let auto_build_overrides = assemble_guest_build_overrides(&cfg.target_dir, args)?; + let staged_overrides = 
stage_binary_overrides( + &args.binary_overrides, + &cfg.work_dir, + &cfg.target_dir, + default_musl_target(), + )?; + + // No `sudo` needed; container exec runs as root by default. + let mut parts = vec![ + "env".to_string(), + "NETSIM_IN_VM=1".to_string(), + "NETSIM_TARGET_DIR=/target".to_string(), + ]; + if let Ok(rust_log) = std::env::var("NETSIM_RUST_LOG") { + parts.push(format!("NETSIM_RUST_LOG={rust_log}")); + } + if let Ok(rust_log) = std::env::var("RUST_LOG") { + parts.push(format!("RUST_LOG={rust_log}")); + } + parts.extend([ + guest_exe, + "run".to_string(), + "--work-dir".to_string(), + "/work".to_string(), + ]); + + for ov in &auto_build_overrides { + parts.push("--binary".to_string()); + parts.push(ov.clone()); + } + for ov in &staged_overrides { + parts.push("--binary".to_string()); + parts.push(ov.clone()); + } + if args.verbose { + parts.push("-v".to_string()); + } + for sim in &args.sim_inputs { + parts.push(to_guest_sim_path(&cfg.workspace, sim)?); + } + + let refs: Vec<&str> = parts.iter().map(String::as_str).collect(); + exec_cmd(cfg, &refs) +} + +// --------------------------------------------------------------------------- +// State persistence +// --------------------------------------------------------------------------- + +fn persist_runtime(cfg: &ContainerConfig) -> Result<()> { + let text = format!( + "backend=container\nworkspace={}\ntarget_dir={}\nwork_dir={}\n", + cfg.workspace.display(), + cfg.target_dir.display(), + cfg.work_dir.display(), + ); + std::fs::write(cfg.runtime_file(), text) + .with_context(|| format!("write {}", cfg.runtime_file().display())) +} + +fn check_running_mount_paths(cfg: &ContainerConfig) -> Result<()> { + if !cfg.runtime_file().exists() { + return Ok(()); + } + let text = std::fs::read_to_string(cfg.runtime_file()) + .with_context(|| format!("read {}", cfg.runtime_file().display()))?; + let mut running_workspace = None; + let mut running_target = None; + let mut running_work = None; + for line in 
text.lines() { + if let Some(v) = line.strip_prefix("workspace=") { + running_workspace = Some(v.to_string()); + } + if let Some(v) = line.strip_prefix("target_dir=") { + running_target = Some(v.to_string()); + } + if let Some(v) = line.strip_prefix("work_dir=") { + running_work = Some(v.to_string()); + } + } + + if running_workspace.as_deref() != Some(cfg.workspace.to_string_lossy().as_ref()) { + bail!( + "container already running with workspace '{}', requested '{}' (use --recreate)", + running_workspace.unwrap_or_default(), + cfg.workspace.display() + ); + } + if running_target.as_deref() != Some(cfg.target_dir.to_string_lossy().as_ref()) { + bail!( + "container already running with target dir '{}', requested '{}' (use --recreate)", + running_target.unwrap_or_default(), + cfg.target_dir.display() + ); + } + if running_work.as_deref() != Some(cfg.work_dir.to_string_lossy().as_ref()) { + bail!( + "container already running with work dir '{}', requested '{}' (use --recreate)", + running_work.unwrap_or_default(), + cfg.work_dir.display() + ); + } + Ok(()) +} diff --git a/patchbay-vm/src/main.rs b/patchbay-vm/src/main.rs index b4c61a3..708a0a6 100644 --- a/patchbay-vm/src/main.rs +++ b/patchbay-vm/src/main.rs @@ -1,5 +1,7 @@ +mod common; +mod container; +mod qemu; mod util; -mod vm; fn default_test_target() -> String { if std::env::consts::ARCH == "aarch64" { @@ -12,12 +14,28 @@ fn default_test_target() -> String { use std::path::PathBuf; use anyhow::Result; -use clap::{Parser, Subcommand}; +use clap::{Parser, Subcommand, ValueEnum}; use patchbay_server::DEFAULT_UI_BIND; +use common::{RunVmArgs, TestVmArgs}; + +/// VM backend selection. +#[derive(Clone, Debug, ValueEnum)] +enum Backend { + /// Auto-detect: prefer `container` on macOS Apple Silicon, fall back to QEMU. + Auto, + /// QEMU with a full Debian cloud image and SSH access. + Qemu, + /// Apple `container` CLI (macOS 26 + Apple Silicon only). 
+ Container, +} + #[derive(Parser)] #[command(name = "patchbay-vm", about = "Standalone VM runner for patchbay")] struct Cli { + /// Which VM backend to use. + #[arg(long, default_value = "auto", global = true)] + backend: Backend, #[command(subcommand)] command: Command, } @@ -35,7 +53,7 @@ enum Command { Status, /// Best-effort cleanup of VM helper artifacts/processes. Cleanup, - /// Execute command over guest SSH. + /// Execute command in the guest (SSH for QEMU, exec for container). Ssh { #[arg(trailing_var_arg = true, allow_hyphen_values = true)] cmd: Vec<String>, @@ -106,16 +124,50 @@ enum Command { }, } +/// Resolve `Backend::Auto` into a concrete backend. +fn resolve_backend(b: Backend) -> Backend { + match b { + Backend::Auto => { + if std::env::consts::OS == "macos" + && std::env::consts::ARCH == "aarch64" + && common::command_exists("container").unwrap_or(false) + { + Backend::Container + } else { + Backend::Qemu + } + } + other => other, + } +} + #[tokio::main(flavor = "current_thread")] async fn main() -> Result<()> { patchbay_utils::init_tracing(); let cli = Cli::parse(); + let backend = resolve_backend(cli.backend); + match cli.command { - Command::Up { recreate } => vm::up_cmd(recreate), - Command::Down => vm::down_cmd(), - Command::Status => vm::status_cmd(), - Command::Cleanup => vm::cleanup_cmd(), - Command::Ssh { cmd } => vm::ssh_cmd_cli(cmd), + Command::Up { recreate } => match backend { + Backend::Container => container::up_cmd(recreate), + _ => qemu::up_cmd(recreate), + }, + Command::Down => match backend { + Backend::Container => container::down_cmd(), + _ => qemu::down_cmd(), + }, + Command::Status => match backend { + Backend::Container => container::status_cmd(), + _ => qemu::status_cmd(), + }, + Command::Cleanup => match backend { + Backend::Container => container::cleanup_cmd(), + _ => qemu::cleanup_cmd(), + }, + Command::Ssh { cmd } => match backend { + Backend::Container => container::exec_cmd_cli(cmd), + _ => qemu::ssh_cmd_cli(cmd), + },
Command::Run { sims, work_dir, @@ -137,14 +189,18 @@ async fn main() -> Result<()> { } }); } - let res = vm::run_sims_in_vm(vm::RunVmArgs { + let args = RunVmArgs { sim_inputs: sims, work_dir, binary_overrides, verbose, recreate, patchbay_version, - }); + }; + let res = match backend { + Backend::Container => container::run_sims(args), + _ => qemu::run_sims_in_vm(args), + }; if open && res.is_ok() { println!("run finished; server still running (Ctrl-C to exit)"); loop { @@ -202,14 +258,18 @@ async fn main() -> Result<()> { if no_fail_fast { cargo_args.push("--no-fail-fast".into()); } - vm::run_tests_in_vm(vm::TestVmArgs { + let args = TestVmArgs { filter, target, packages, tests, recreate, cargo_args, - }) + }; + match backend { + Backend::Container => container::run_tests(args), + _ => qemu::run_tests_in_vm(args), + } } } } diff --git a/patchbay-vm/src/vm.rs b/patchbay-vm/src/qemu.rs similarity index 59% rename from patchbay-vm/src/vm.rs rename to patchbay-vm/src/qemu.rs index acdda36..eabdac9 100644 --- a/patchbay-vm/src/vm.rs +++ b/patchbay-vm/src/qemu.rs @@ -1,5 +1,4 @@ use std::{ - collections::HashMap, fs::File, net::TcpListener, path::{Path, PathBuf}, @@ -9,22 +8,25 @@ use std::{ }; use anyhow::{anyhow, bail, Context, Result}; -use patchbay_utils::{ - assets::{infer_binary_mode, parse_binary_overrides, BinarySpec}, - binary_cache::set_executable, +use crate::common::{ + self, abspath, assemble_guest_build_overrides, build_and_collect_test_binaries, + cargo_target_dir, default_musl_target, ensure_guest_runner_binary, env_or, force_kill_pid, + is_arm64_host, kill_pid, log_msg, need_cmd, pid_alive, read_pid, remove_if_exists, + run_checked, sanitize_filename, fnv1a64, shell_join, stage_test_binaries, to_guest_sim_path, + RunVmArgs, TestVmArgs, GUEST_PREPARE_SCRIPT, }; -use serde::Deserialize; - use crate::util::stage_binary_overrides; +// --------------------------------------------------------------------------- +// QEMU-specific constants +// 
--------------------------------------------------------------------------- + const VM_STATE_DIR: &str = ".qemu-vm"; const DEFAULT_VM_NAME: &str = "patchbay-vm"; const DEFAULT_IMAGE_URL_X86: &str = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-amd64.qcow2"; const DEFAULT_IMAGE_URL_ARM64: &str = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-arm64.qcow2"; -const DEFAULT_MEM_MB: &str = "4096"; -const DEFAULT_CPUS: &str = "4"; const DEFAULT_DISK_GB: &str = "40"; const DEFAULT_SSH_USER: &str = "dev"; const DEFAULT_QEMU_BIN_X86: &str = "qemu-system-x86_64"; @@ -32,25 +34,6 @@ const DEFAULT_QEMU_BIN_ARM64: &str = "qemu-system-aarch64"; const DEFAULT_SSH_PORT: &str = "2222"; const DEFAULT_SEED_PORT: &str = "8555"; -fn is_arm64_host() -> bool { - std::env::consts::ARCH == "aarch64" -} - -fn default_qemu_bin() -> &'static str { - if is_arm64_host() { - DEFAULT_QEMU_BIN_ARM64 - } else { - DEFAULT_QEMU_BIN_X86 - } -} - -fn default_image_url() -> &'static str { - if is_arm64_host() { - DEFAULT_IMAGE_URL_ARM64 - } else { - DEFAULT_IMAGE_URL_X86 - } -} const DEFAULT_VIRTIOFSD: [&str; 5] = [ "/usr/lib/virtiofsd", "/usr/libexec/virtiofsd", @@ -78,47 +61,34 @@ const SERIAL_LOG: &str = "serial.log"; const SSH_KEY: &str = "id_ed25519"; const KNOWN_HOSTS: &str = "known_hosts"; const RUNTIME_ENV: &str = "runtime.env"; -const RELEASE_MUSL_ASSET_X86: &str = "patchbay-x86_64-unknown-linux-musl.tar.gz"; -const RELEASE_MUSL_ASSET_ARM64: &str = "patchbay-aarch64-unknown-linux-musl.tar.gz"; -const GITHUB_REPO: &str = "https://github.com/n0-computer/patchbay.git"; -const DEFAULT_MUSL_TARGET_X86: &str = "x86_64-unknown-linux-musl"; -const DEFAULT_MUSL_TARGET_ARM64: &str = "aarch64-unknown-linux-musl"; -fn default_musl_target() -> &'static str { +// --------------------------------------------------------------------------- +// QEMU-specific helpers +// --------------------------------------------------------------------------- + +fn 
default_qemu_bin() -> &'static str { if is_arm64_host() { - DEFAULT_MUSL_TARGET_ARM64 + DEFAULT_QEMU_BIN_ARM64 } else { - DEFAULT_MUSL_TARGET_X86 + DEFAULT_QEMU_BIN_X86 } } -fn release_musl_asset() -> &'static str { +fn default_image_url() -> &'static str { if is_arm64_host() { - RELEASE_MUSL_ASSET_ARM64 + DEFAULT_IMAGE_URL_ARM64 } else { - RELEASE_MUSL_ASSET_X86 + DEFAULT_IMAGE_URL_X86 } } -#[derive(Debug, Clone)] -pub struct RunVmArgs { - pub sim_inputs: Vec<PathBuf>, - pub work_dir: PathBuf, - pub binary_overrides: Vec<String>, - pub verbose: bool, - pub recreate: bool, - pub patchbay_version: String, +fn log(msg: &str) { + log_msg("qemu", msg); } -#[derive(Debug, Clone)] -pub struct TestVmArgs { - pub filter: Option<String>, - pub target: String, - pub packages: Vec<String>, - pub tests: Vec<String>, - pub recreate: bool, - pub cargo_args: Vec<String>, -} +// --------------------------------------------------------------------------- +// VmConfig +// --------------------------------------------------------------------------- #[derive(Debug, Clone)] struct VmConfig { @@ -141,148 +111,6 @@ struct VmConfig { fs_mode: String, } -pub fn run_sims_in_vm(args: RunVmArgs) -> Result<()> { - let mut vm = VmConfig::from_args(&args)?; - up(&mut vm)?; - prepare_vm_guest(&vm)?; - run_in_guest(&vm, &args)?; - Ok(()) -} - -/// Stops the local VM if it is running and cleans leftover VM helper processes. -pub fn stop_vm_if_running() -> Result<()> { - let vm = VmConfig::from_cleanup_defaults()?; - down(&vm) -} - -/// Builds tests for the target and runs discovered test binaries inside the VM. -pub fn run_tests_in_vm(args: TestVmArgs) -> Result<()> { - let mut vm = VmConfig::from_cleanup_defaults()?; - vm.recreate = args.recreate; - let target_dir = cargo_target_dir()?; - - // Build test binaries and boot/prepare the VM in parallel.
- let (test_bins, vm_result) = std::thread::scope(|s| { - let build = s.spawn(|| { - build_and_collect_test_binaries( - &target_dir, - &args.target, - &args.packages, - &args.tests, - &args.cargo_args, - ) - }); - let vm_setup = s.spawn(|| { - up(&mut vm)?; - prepare_vm_guest(&vm) - }); - (build.join(), vm_setup.join()) - }); - let test_bins = test_bins - .map_err(|_| anyhow!("build thread panicked"))? - .context("building test binaries")?; - vm_result - .map_err(|_| anyhow!("vm setup thread panicked"))? - .context("vm setup")?; - - if test_bins.is_empty() { - bail!("no test binaries were built for target {}", args.target); - } - - let staged = stage_test_binaries(&vm, &test_bins)?; - - // Collect env vars to forward to the guest. - let forward_envs: &[&str] = &[ - "RUST_LOG", - "RUST_BACKTRACE", - "PATCHBAY_OUTDIR", - "PATCHBAY_SIM", - ]; - let mut env_pairs: Vec<String> = forward_envs - .iter() - .filter_map(|name| std::env::var(name).ok().map(|val| format!("{name}={val}"))) - .collect(); - // Ensure /usr/sbin is on PATH so tools like nft are found. - env_pairs.push("PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin".into()); - - let mut passed = 0usize; - let mut failed = 0usize; - for guest_bin in staged { - let mut run_args: Vec<String> = Vec::new(); - if !env_pairs.is_empty() { - run_args.push("env".into()); - run_args.extend(env_pairs.iter().cloned()); - } - run_args.push(guest_bin.clone()); - if let Some(ref f) = args.filter { - run_args.push(f.clone()); - } - let run_refs: Vec<&str> = run_args.iter().map(|s| s.as_str()).collect(); - let rc = ssh_cmd_status(&vm, &run_refs); - match rc { - Ok(()) => { - passed += 1; - println!("[test] PASS {guest_bin}"); - } - Err(err) => { - failed += 1; - println!("[test] FAIL {guest_bin}: {err}"); - } - } - } - println!("[test] summary: passed={passed} failed={failed}"); - if failed > 0 { - bail!("{} test binaries failed in VM", failed); - } - Ok(()) -} - -/// `patchbay-vm up` entrypoint.
-pub fn up_cmd(recreate: bool) -> Result<()> { - let mut vm = VmConfig::from_cleanup_defaults()?; - vm.recreate = recreate; - up(&mut vm) -} - -/// `patchbay-vm down` entrypoint. -pub fn down_cmd() -> Result<()> { - stop_vm_if_running() -} - -/// `patchbay-vm cleanup` entrypoint. -pub fn cleanup_cmd() -> Result<()> { - let vm = VmConfig::from_cleanup_defaults()?; - cleanup_seed_server(&vm)?; - cleanup_virtiofsd(&vm)?; - remove_if_exists(&vm.pid_file())?; - remove_if_exists(&vm.runtime_file())?; - Ok(()) -} - -/// `patchbay-vm status` entrypoint. -pub fn status_cmd() -> Result<()> { - let vm = VmConfig::from_cleanup_defaults()?; - println!("vm-name: {}", vm.vm_name); - println!("vm-dir: {}", vm.vm_dir().display()); - println!("running: {}", if is_running(&vm)? { "yes" } else { "no" }); - if vm.runtime_file().exists() { - println!("runtime: {}", vm.runtime_file().display()); - let text = std::fs::read_to_string(vm.runtime_file())?; - print!("{text}"); - } - Ok(()) -} - -/// `patchbay-vm ssh -- ...` entrypoint. 
-pub fn ssh_cmd_cli(cmd: Vec<String>) -> Result<()> { - let vm = VmConfig::from_cleanup_defaults()?; - if cmd.is_empty() { - bail!("ssh: missing remote command"); - } - let refs: Vec<&str> = cmd.iter().map(String::as_str).collect(); - ssh_cmd(&vm, &refs) -} - impl VmConfig { fn from_args(args: &RunVmArgs) -> Result<Self> { let cwd = std::env::current_dir().context("get cwd")?; @@ -303,8 +131,8 @@ impl VmConfig { Ok(Self { vm_name: env_or("QEMU_VM_NAME", DEFAULT_VM_NAME), image_url: env_or("QEMU_VM_IMAGE_URL", default_image_url()), - mem_mb: env_or("QEMU_VM_MEM_MB", DEFAULT_MEM_MB), - cpus: env_or("QEMU_VM_CPUS", DEFAULT_CPUS), + mem_mb: env_or("QEMU_VM_MEM_MB", common::DEFAULT_MEM_MB), + cpus: env_or("QEMU_VM_CPUS", &common::default_cpus()), disk_gb: env_or("QEMU_VM_DISK_GB", DEFAULT_DISK_GB), ssh_user: env_or("QEMU_VM_SSH_USER", DEFAULT_SSH_USER), qemu_bin: env_or("QEMU_VM_QEMU_BIN", default_qemu_bin()), @@ -334,8 +162,8 @@ impl VmConfig { Ok(Self { vm_name: env_or("QEMU_VM_NAME", DEFAULT_VM_NAME), image_url: env_or("QEMU_VM_IMAGE_URL", default_image_url()), - mem_mb: env_or("QEMU_VM_MEM_MB", DEFAULT_MEM_MB), - cpus: env_or("QEMU_VM_CPUS", DEFAULT_CPUS), + mem_mb: env_or("QEMU_VM_MEM_MB", common::DEFAULT_MEM_MB), + cpus: env_or("QEMU_VM_CPUS", &common::default_cpus()), disk_gb: env_or("QEMU_VM_DISK_GB", DEFAULT_DISK_GB), ssh_user: env_or("QEMU_VM_SSH_USER", DEFAULT_SSH_USER), qemu_bin: env_or("QEMU_VM_QEMU_BIN", default_qemu_bin()), @@ -450,6 +278,148 @@ impl VmConfig { } } +// --------------------------------------------------------------------------- +// Public entrypoints +// --------------------------------------------------------------------------- + +pub fn run_sims_in_vm(args: RunVmArgs) -> Result<()> { + let mut vm = VmConfig::from_args(&args)?; + up(&mut vm)?; + prepare_vm_guest(&vm)?; + run_in_guest(&vm, &args)?; + Ok(()) +} + +/// Stops the local VM if it is running and cleans leftover VM helper processes.
+pub fn stop_vm_if_running() -> Result<()> { + let vm = VmConfig::from_cleanup_defaults()?; + down(&vm) +} + +/// Builds tests for the target and runs discovered test binaries inside the VM. +pub fn run_tests_in_vm(args: TestVmArgs) -> Result<()> { + let mut vm = VmConfig::from_cleanup_defaults()?; + vm.recreate = args.recreate; + let target_dir = cargo_target_dir()?; + + let (test_bins, vm_result) = std::thread::scope(|s| { + let build = s.spawn(|| { + build_and_collect_test_binaries( + &target_dir, + &args.target, + &args.packages, + &args.tests, + &args.cargo_args, + ) + }); + let vm_setup = s.spawn(|| { + up(&mut vm)?; + prepare_vm_guest(&vm) + }); + (build.join(), vm_setup.join()) + }); + let test_bins = test_bins + .map_err(|_| anyhow!("build thread panicked"))? + .context("building test binaries")?; + vm_result + .map_err(|_| anyhow!("vm setup thread panicked"))? + .context("vm setup")?; + + if test_bins.is_empty() { + bail!("no test binaries were built for target {}", args.target); + } + + let staged = stage_test_binaries(&vm.work_dir, &test_bins)?; + + let forward_envs: &[&str] = &[ + "RUST_LOG", + "RUST_BACKTRACE", + "PATCHBAY_OUTDIR", + "PATCHBAY_SIM", + ]; + let mut env_pairs: Vec<String> = forward_envs + .iter() + .filter_map(|name| std::env::var(name).ok().map(|val| format!("{name}={val}"))) + .collect(); + env_pairs.push("PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin".into()); + + let mut passed = 0usize; + let mut failed = 0usize; + for guest_bin in staged { + let mut run_args: Vec<String> = Vec::new(); + if !env_pairs.is_empty() { + run_args.push("env".into()); + run_args.extend(env_pairs.iter().cloned()); + } + run_args.push(guest_bin.clone()); + if let Some(ref f) = args.filter { + run_args.push(f.clone()); + } + let run_refs: Vec<&str> = run_args.iter().map(|s| s.as_str()).collect(); + let rc = ssh_cmd_status(&vm, &run_refs); + match rc { + Ok(()) => { + passed += 1; + println!("[test] PASS {guest_bin}"); + } + Err(err) => { + failed += 1; +
println!("[test] FAIL {guest_bin}: {err}"); + } + } + } + println!("[test] summary: passed={passed} failed={failed}"); + if failed > 0 { + bail!("{} test binaries failed in VM", failed); + } + Ok(()) +} + +pub fn up_cmd(recreate: bool) -> Result<()> { + let mut vm = VmConfig::from_cleanup_defaults()?; + vm.recreate = recreate; + up(&mut vm) +} + +pub fn down_cmd() -> Result<()> { + stop_vm_if_running() +} + +pub fn cleanup_cmd() -> Result<()> { + let vm = VmConfig::from_cleanup_defaults()?; + cleanup_seed_server(&vm)?; + cleanup_virtiofsd(&vm)?; + remove_if_exists(&vm.pid_file())?; + remove_if_exists(&vm.runtime_file())?; + Ok(()) +} + +pub fn status_cmd() -> Result<()> { + let vm = VmConfig::from_cleanup_defaults()?; + println!("vm-name: {}", vm.vm_name); + println!("vm-dir: {}", vm.vm_dir().display()); + println!("running: {}", if is_running(&vm)? { "yes" } else { "no" }); + if vm.runtime_file().exists() { + println!("runtime: {}", vm.runtime_file().display()); + let text = std::fs::read_to_string(vm.runtime_file())?; + print!("{text}"); + } + Ok(()) +} + +pub fn ssh_cmd_cli(cmd: Vec<String>) -> Result<()> { + let vm = VmConfig::from_cleanup_defaults()?; + if cmd.is_empty() { + bail!("ssh: missing remote command"); + } + let refs: Vec<&str> = cmd.iter().map(String::as_str).collect(); + ssh_cmd(&vm, &refs) +} + +// --------------------------------------------------------------------------- +// Internal lifecycle +// --------------------------------------------------------------------------- + fn up(vm: &mut VmConfig) -> Result<()> { ensure_dirs(vm)?; log(&format!("workspace={}", vm.workspace.display())); @@ -461,7 +431,6 @@ fn up(vm: &mut VmConfig) -> Result<()> { log("recreate requested; stopping existing VM"); down(vm)?; } - // Clear known_hosts so the new VM's host key is accepted.
     remove_if_exists(&vm.known_hosts())?;
 }
@@ -526,8 +495,8 @@ fn down(vm: &VmConfig) -> Result<()> {
 }
 
 fn run_in_guest(vm: &VmConfig, args: &RunVmArgs) -> Result<()> {
-    let guest_exe = ensure_guest_runner_binary(vm, &args.patchbay_version)?;
-    let auto_build_overrides = assemble_guest_build_overrides(vm, args)?;
+    let guest_exe = ensure_guest_runner_binary(&vm.work_dir, &vm.target_dir, &args.patchbay_version)?;
+    let auto_build_overrides = assemble_guest_build_overrides(&vm.target_dir, args)?;
     let staged_overrides = stage_binary_overrides(
         &args.binary_overrides,
         &vm.work_dir,
@@ -573,613 +542,106 @@ fn run_in_guest(vm: &VmConfig, args: &RunVmArgs) -> Result<()> {
     ssh_cmd(vm, &refs)
 }
 
-#[derive(Debug, Clone, Deserialize, Default)]
-struct VmExtends {
-    file: String,
-}
-
-#[derive(Debug, Clone, Deserialize, Default)]
-struct VmSimMeta {
-    binaries: Option<String>,
-}
-
-#[derive(Debug, Clone, Deserialize, Default)]
-struct VmSimFile {
-    #[serde(default)]
-    sim: VmSimMeta,
-    #[serde(default)]
-    extends: Vec<VmExtends>,
-    #[serde(default, rename = "binary")]
-    binaries: Vec<BinarySpec>,
-}
-
-#[derive(Debug, Clone, PartialEq, Eq)]
-struct VmBuildRequest {
-    source_dir: PathBuf,
-    example: Option<String>,
-    bin: Option<String>,
-    features: Vec<String>,
-    all_features: bool,
-}
-
-fn assemble_guest_build_overrides(vm: &VmConfig, args: &RunVmArgs) -> Result<Vec<String>> {
-    let user_override_names = parse_binary_overrides(&args.binary_overrides)?
-        .into_keys()
-        .collect::<HashSet<String>>();
-    let sim_files = expand_vm_sim_inputs(&args.sim_inputs)?;
-    let mut requested: HashMap<String, VmBuildRequest> = HashMap::new();
-    let mut first_seen: HashMap<String, PathBuf> = HashMap::new();
-
-    for sim_path in sim_files {
-        let (sim, sim_root) = load_vm_sim(&sim_path)?;
-        let merged = merged_vm_binary_specs(&sim, &sim_path)?;
-        for spec in merged.into_values() {
-            if user_override_names.contains(&spec.name) {
-                continue;
-            }
-            if infer_binary_mode(&spec)? != "build" {
-                continue;
-            }
-            if spec.repo.is_some() {
-                bail!(
-                    "VM auto-override does not support repo-based build spec '{}' in {}",
-                    spec.name,
-                    sim_path.display()
-                );
-            }
-            let source_dir = resolve_vm_build_source_dir(&spec, &sim_root)?;
-            let req = VmBuildRequest {
-                source_dir,
-                example: spec.example.clone(),
-                bin: spec.bin.clone(),
-                features: spec.features.clone(),
-                all_features: spec.all_features,
-            };
-            if let Some(existing) = requested.get(&spec.name) {
-                if existing != &req {
-                    let first = first_seen
-                        .get(&spec.name)
-                        .map(|p| p.display().to_string())
-                        .unwrap_or_else(|| "".to_string());
-                    bail!(
-                        "duplicate build spec '{}' differs across sims: {} vs {}",
-                        spec.name,
-                        first,
-                        sim_path.display()
-                    );
-                }
-                continue;
-            }
-            first_seen.insert(spec.name.clone(), sim_path.clone());
-            requested.insert(spec.name.clone(), req);
-        }
-    }
-
-    let mut names: Vec<String> = requested.keys().cloned().collect();
-    names.sort();
-    let mut out = Vec::new();
-    for name in names {
-        let req = requested
-            .get(&name)
-            .ok_or_else(|| anyhow!("missing build request for '{}'", name))?;
-        let guest_path = build_vm_binary_and_guest_path(vm, &name, req)?;
-        out.push(format!("{name}:path:{guest_path}"));
-    }
-    Ok(out)
-}
-
-fn expand_vm_sim_inputs(inputs: &[PathBuf]) -> Result<Vec<PathBuf>> {
-    let mut sims = Vec::new();
-    for input in inputs {
-        if input.is_file() {
-            if input.extension().and_then(|s| s.to_str()) == Some("toml") {
-                sims.push(input.clone());
-            }
-            continue;
-        }
-        if input.is_dir() {
-            for entry in std::fs::read_dir(input)
-                .with_context(|| format!("read sim dir {}", input.display()))?
-            {
-                let entry = entry?;
-                let path = entry.path();
-                if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some("toml") {
-                    sims.push(path);
-                }
-            }
-            continue;
-        }
-        bail!("sim input path does not exist: {}", input.display());
-    }
-    sims.sort();
-    sims.dedup();
-    Ok(sims)
-}
-
-fn load_vm_sim(sim_path: &Path) -> Result<(VmSimFile, PathBuf)> {
-    let text = std::fs::read_to_string(sim_path)
-        .with_context(|| format!("read sim {}", sim_path.display()))?;
-    let sim: VmSimFile =
-        toml::from_str(&text).with_context(|| format!("parse sim {}", sim_path.display()))?;
-    let root = find_ancestor_with_file(sim_path, "Cargo.toml")
-        .unwrap_or_else(|| sim_path.parent().unwrap_or(Path::new(".")).to_path_buf());
-    Ok((sim, root))
-}
-
-fn merged_vm_binary_specs(sim: &VmSimFile, sim_path: &Path) -> Result<HashMap<String, BinarySpec>> {
-    let mut merged = HashMap::new();
-    for spec in load_vm_extends_binaries(sim, sim_path)?
-        .into_iter()
-        .chain(load_vm_shared_binaries(sim, sim_path)?)
-        .chain(sim.binaries.clone())
-    {
-        merged.insert(spec.name.clone(), spec);
-    }
-    Ok(merged)
-}
-
-fn load_vm_extends_binaries(sim: &VmSimFile, sim_path: &Path) -> Result<Vec<BinarySpec>> {
-    let sim_dir = sim_path.parent().unwrap_or(Path::new("."));
-    let mut out = Vec::new();
-    for ext in &sim.extends {
-        let path = sim_dir.join(&ext.file);
-        let text = std::fs::read_to_string(&path)
-            .with_context(|| format!("read extends file {}", path.display()))?;
-        let parsed: VmSimFile = toml::from_str(&text)
-            .with_context(|| format!("parse extends file {}", path.display()))?;
-        out.extend(parsed.binaries);
-    }
-    Ok(out)
-}
-
-fn load_vm_shared_binaries(sim: &VmSimFile, sim_path: &Path) -> Result<Vec<BinarySpec>> {
-    #[derive(Deserialize, Default)]
-    struct BinaryFile {
-        #[serde(default, rename = "binary")]
-        binaries: Vec<BinarySpec>,
-    }
-
-    let Some(ref_name) = sim.sim.binaries.as_deref() else {
-        return Ok(vec![]);
-    };
-    let sim_dir = sim_path.parent().unwrap_or(Path::new("."));
-    let path = sim_dir.join(ref_name);
-    let text = std::fs::read_to_string(&path)
-        .with_context(|| format!("read shared binaries file {}", path.display()))?;
-    let parsed: BinaryFile = toml::from_str(&text).context("parse shared binaries file")?;
-    Ok(parsed.binaries)
-}
-
-fn resolve_vm_build_source_dir(spec: &BinarySpec, default_root: &Path) -> Result<PathBuf> {
-    if let Some(path) = &spec.path {
-        let resolved = if path.is_absolute() {
-            path.clone()
-        } else {
-            default_root.join(path)
-        };
-        if resolved.is_file() {
-            bail!(
-                "binary '{}' mode=build path must be a directory, got file {}",
-                spec.name,
-                resolved.display()
-            );
-        }
-        return Ok(resolved);
-    }
-    Ok(default_root.to_path_buf())
-}
-
-fn build_vm_binary_and_guest_path(
-    vm: &VmConfig,
-    name: &str,
-    req: &VmBuildRequest,
-) -> Result<String> {
-    let mut base_args: Vec<String> = vec![
-        "build".into(),
-        "--release".into(),
-        "--target".into(),
-        default_musl_target().into(),
-    ];
-    if req.all_features {
-        base_args.push("--all-features".into());
-    } else if !req.features.is_empty() {
-        base_args.push("--features".into());
-        base_args.push(req.features.join(","));
-    }
-
-    if let Some(example) = req.example.as_deref() {
-        let mut args = base_args.clone();
-        args.push("--example".into());
-        args.push(example.to_string());
-        run_checked(
-            Command::new("cargo")
-                .args(args)
-                .env("CARGO_TARGET_DIR", &vm.target_dir)
-                .current_dir(&req.source_dir),
-            "build VM example binary",
-        )?;
-        return Ok(format!(
-            "/target/{}/release/examples/{}",
-            default_musl_target(),
-            example
-        ));
-    }
-
-    if let Some(bin) = req.bin.as_deref() {
-        let mut args = base_args;
-        args.push("--bin".into());
-        args.push(bin.to_string());
-        run_checked(
-            Command::new("cargo")
-                .args(args)
-                .env("CARGO_TARGET_DIR", &vm.target_dir)
-                .current_dir(&req.source_dir),
-            "build VM bin binary",
-        )?;
-        return Ok(format!("/target/{}/release/{}", default_musl_target(), bin));
-    }
-
-    let mut example_args = base_args.clone();
-    example_args.push("--example".into());
-    example_args.push(name.to_string());
-    let example_status = Command::new("cargo")
-        .args(example_args)
-        .env("CARGO_TARGET_DIR", &vm.target_dir)
-        .current_dir(&req.source_dir)
-        .status()
-        .context("spawn cargo build --example for VM")?;
-    if example_status.success() {
-        return Ok(format!(
-            "/target/{}/release/examples/{}",
-            default_musl_target(),
-            name
-        ));
-    }
-
-    let mut bin_args = base_args;
-    bin_args.push("--bin".into());
-    bin_args.push(name.to_string());
-    run_checked(
-        Command::new("cargo")
-            .args(bin_args)
-            .env("CARGO_TARGET_DIR", &vm.target_dir)
-            .current_dir(&req.source_dir),
-        "build VM fallback bin",
-    )?;
-    Ok(format!(
-        "/target/{}/release/{}",
-        default_musl_target(),
-        name
-    ))
-}
-
-fn find_ancestor_with_file(path: &Path, file_name: &str) -> Option<PathBuf> {
-    let mut cur = if path.is_dir() {
-        path.to_path_buf()
-    } else {
-        path.parent()?.to_path_buf()
-    };
-    loop {
-        if cur.join(file_name).is_file() {
-            return Some(cur);
-        }
-        if !cur.pop() {
-            return None;
-        }
-    }
-}
-
-fn ensure_guest_runner_binary(vm: &VmConfig, version: &str) -> Result<String> {
-    let source = resolve_vm_runner_binary(vm, version)?;
-    let staged_dir = vm.work_dir.join(".patchbay-bin");
-    std::fs::create_dir_all(&staged_dir)
-        .with_context(|| format!("create {}", staged_dir.display()))?;
-    let staged = staged_dir.join("patchbay");
-    std::fs::copy(&source, &staged)
-        .with_context(|| format!("copy {} -> {}", source.display(), staged.display()))?;
-    set_executable(&staged)?;
-    Ok("/work/.patchbay-bin/patchbay".to_string())
-}
-
-fn resolve_vm_runner_binary(vm: &VmConfig, version: &str) -> Result<PathBuf> {
-    match std::env::consts::OS {
-        "linux" | "macos" => {
-            if let Some(path) = version.strip_prefix("path:") {
-                let bin = PathBuf::from(path);
-                if !bin.exists() {
-                    bail!("--patchbay-version path does not exist: {}", bin.display());
-                }
-                if bin.is_dir() {
-                    bail!(
-                        "--patchbay-version path points to a directory, expected executable file: {}",
-                        bin.display()
-                    );
-                }
-                return Ok(bin);
-            }
-            if let Some(git_ref) = version.strip_prefix("git:") {
-                build_musl_from_git_ref(vm, git_ref)
-            } else {
-                download_release_runner(vm, version)
-            }
-        }
-        other => bail!("run-vm is not supported on host OS '{}'", other),
-    }
-}
-
-fn download_release_runner(vm: &VmConfig, version: &str) -> Result<PathBuf> {
-    need_cmd("curl")?;
-    need_cmd("tar")?;
-    let cache_root = vm.work_dir.join(".vm-cache");
-    std::fs::create_dir_all(&cache_root)
-        .with_context(|| format!("create {}", cache_root.display()))?;
-    let version_key = if version == "latest" {
-        "latest".to_string()
-    } else {
-        normalize_release_tag(version)
-    };
-    let archive = cache_root.join(format!(
-        "{}-{}",
-        version_key.replace('/', "_"),
-        release_musl_asset()
-    ));
-    let unpack = cache_root.join(format!(
-        "release-{}-{}",
-        version_key.replace('/', "_"),
-        default_musl_target()
-    ));
-    let cached_bin = unpack.join("patchbay");
-    if cached_bin.exists() {
-        return Ok(cached_bin);
-    }
-
-    let url = if version == "latest" {
-        format!(
-            "https://github.com/n0-computer/patchbay/releases/latest/download/{}",
-            release_musl_asset()
-        )
-    } else {
-        format!(
-            "https://github.com/n0-computer/patchbay/releases/download/{}/{}",
-            normalize_release_tag(version),
-            release_musl_asset()
-        )
-    };
-
-    run_checked(
-        Command::new("curl").args(["-fL", &url, "-o"]).arg(&archive),
-        "download patchbay musl release",
-    )?;
-
-    if unpack.exists() {
-        std::fs::remove_dir_all(&unpack).with_context(|| format!("remove {}", unpack.display()))?;
-    }
-    std::fs::create_dir_all(&unpack).with_context(|| format!("create {}", unpack.display()))?;
-    run_checked(
-        Command::new("tar")
-            .arg("-xzf")
-            .arg(&archive)
-            .arg("-C")
-            .arg(&unpack),
-        "extract patchbay musl release",
-    )?;
-
-    let bin = find_file_named(&unpack, "patchbay")
-        .with_context(|| format!("find patchbay binary under {}", unpack.display()))?;
-    set_executable(&bin)?;
-    Ok(bin)
-}
-
-fn build_musl_from_git_ref(vm: &VmConfig, git_ref: &str) -> Result<PathBuf> {
-    let checkout_root = vm.work_dir.join(".vm-cache").join("git");
-    std::fs::create_dir_all(&checkout_root)
-        .with_context(|| format!("create {}", checkout_root.display()))?;
-    let checkout = checkout_root.join("patchbay");
-
-    if !checkout.exists() {
-        run_checked(
-            Command::new("git")
-                .args(["clone", "--no-tags", GITHUB_REPO])
-                .arg(&checkout),
-            "clone patchbay repo",
-        )?;
-    }
+fn prepare_vm_guest(vm: &VmConfig) -> Result<()> {
+    ssh_cmd(vm, &["sudo", "bash", "-lc", GUEST_PREPARE_SCRIPT])
+}
 
-    run_checked(
-        Command::new("git")
-            .arg("-C")
-            .arg(&checkout)
-            .args(["fetch", "--all", "--prune"]),
-        "git fetch patchbay repo",
-    )?;
-    run_checked(
-        Command::new("git")
-            .arg("-C")
-            .arg(&checkout)
-            .args(["checkout", git_ref]),
-        "git checkout requested ref",
-    )?;
+// ---------------------------------------------------------------------------
+// SSH
+// ---------------------------------------------------------------------------
 
-    let target_dir = vm.work_dir.join(".vm-cache").join("git-target");
-    std::fs::create_dir_all(&target_dir)?;
-    run_checked(
-        Command::new("cargo")
-            .args([
-                "build",
-                "--release",
-                "--target",
-                default_musl_target(),
-                "--bin",
-                "patchbay",
-            ])
-            .env("CARGO_TARGET_DIR", &target_dir)
-            .current_dir(&checkout),
-        "build patchbay from git ref",
-    )?;
-    let bin = target_dir
-        .join(default_musl_target())
-        .join("release")
-        .join("patchbay");
-    if !bin.exists() {
-        bail!("built patchbay binary missing at {}", bin.display());
-    }
-    Ok(bin)
+fn ssh_cmd(vm: &VmConfig, remote_args: &[&str]) -> Result<()> {
+    ssh_cmd_status(vm, remote_args)
 }
 
-fn find_file_named(root: &Path, file_name: &str) -> Result<PathBuf> {
-    let mut stack = vec![root.to_path_buf()];
-    while let Some(dir) = stack.pop() {
-        for ent in std::fs::read_dir(&dir).with_context(|| format!("read {}", dir.display()))?
-        {
-            let ent = ent?;
-            let path = ent.path();
-            if path.is_dir() {
-                stack.push(path);
-                continue;
-            }
-            if path.file_name().and_then(|s| s.to_str()) == Some(file_name) {
-                return Ok(path);
-            }
-        }
+fn ssh_cmd_status(vm: &VmConfig, remote_args: &[&str]) -> Result<()> {
+    let mut cmd = Command::new("ssh");
+    cmd.arg("-i")
+        .arg(vm.ssh_key())
+        .args([
+            "-o",
+            "StrictHostKeyChecking=accept-new",
+            "-o",
+            &format!("UserKnownHostsFile={}", vm.known_hosts().display()),
+            "-o",
+            "IdentitiesOnly=yes",
+            "-o",
+            "ConnectTimeout=5",
+            "-p",
+        ])
+        .arg(&vm.ssh_port)
+        .arg(format!("{}@127.0.0.1", vm.ssh_user));
+
+    if !remote_args.is_empty() {
+        let remote = shell_join(remote_args);
+        cmd.arg(remote);
     }
-    bail!("file '{}' not found under {}", file_name, root.display())
+    run_checked(&mut cmd, "ssh")
 }
 
-fn normalize_release_tag(version: &str) -> String {
-    if version.starts_with('v') {
-        version.to_string()
-    } else {
-        format!("v{version}")
-    }
+fn ssh_probe(vm: &VmConfig) -> bool {
+    ssh_probe_inner(vm, false)
 }
 
-fn build_and_collect_test_binaries(
-    target_dir: &Path,
-    target: &str,
-    packages: &[String],
-    tests: &[String],
-    cargo_args: &[String],
-) -> Result<Vec<PathBuf>> {
-    use std::io::{BufRead, BufReader};
-
-    let mut cmd = Command::new("cargo");
-    cmd.args([
-        "test",
-        "--no-run",
-        "--target",
-        target,
-        "--message-format",
-        "json",
-    ]);
-    for pkg in packages {
-        cmd.args(["-p", pkg]);
-    }
-    for test in tests {
-        cmd.args(["--test", test]);
-    }
-    if !cargo_args.is_empty() {
-        cmd.args(cargo_args);
-    }
-    cmd.env("CARGO_TARGET_DIR", target_dir)
-        .env("CARGO_TERM_COLOR", "always");
-    eprintln!("[cargo] {cmd:?}");
-    cmd.stdout(std::process::Stdio::piped());
-    cmd.stderr(std::process::Stdio::piped());
-    let mut child = cmd.spawn().context("spawn cargo test --no-run")?;
-
-    let stderr = child.stderr.take().unwrap();
-    let stderr_thread = std::thread::spawn(move || {
-        for line in BufReader::new(stderr).lines() {
-            let Ok(line) = line else { break };
-            eprintln!("[cargo] {line}");
-        }
-    });
+fn ssh_probe_verbose(vm: &VmConfig) -> bool {
+    ssh_probe_inner(vm, true)
+}
 
-    let stdout = child.stdout.take().unwrap();
-    let mut stdout_lines = Vec::new();
-    for line in BufReader::new(stdout).lines() {
-        let Ok(line) = line else { break };
-        stdout_lines.push(line);
-    }
-
-    let status = child.wait().context("wait cargo test --no-run")?;
-    let _ = stderr_thread.join();
-    if !status.success() {
-        // Print compiler diagnostics from the JSON stdout.
-        for line in &stdout_lines {
-            let Ok(v) = serde_json::from_str::<serde_json::Value>(line) else {
-                continue;
-            };
-            if v.get("reason").and_then(|x| x.as_str()) == Some("compiler-message") {
-                if let Some(rendered) = v
-                    .get("message")
-                    .and_then(|m| m.get("rendered"))
-                    .and_then(|r| r.as_str())
-                {
-                    eprint!("{rendered}");
+fn ssh_probe_inner(vm: &VmConfig, verbose: bool) -> bool {
+    let mut cmd = Command::new("ssh");
+    cmd.arg("-i")
+        .arg(vm.ssh_key())
+        .args([
+            "-o",
+            "StrictHostKeyChecking=accept-new",
+            "-o",
+            &format!("UserKnownHostsFile={}", vm.known_hosts().display()),
+            "-o",
+            "IdentitiesOnly=yes",
+            "-o",
+            "BatchMode=yes",
+            "-o",
+            "ConnectionAttempts=1",
+            "-o",
+            if verbose {
+                "LogLevel=VERBOSE"
+            } else {
+                "LogLevel=ERROR"
+            },
+            "-o",
+            "ConnectTimeout=1",
+            "-p",
+        ])
+        .arg(&vm.ssh_port)
+        .arg(format!("{}@127.0.0.1", vm.ssh_user))
+        .arg("true")
+        .stdout(Stdio::null());
+    if verbose {
+        cmd.stderr(Stdio::piped());
+        match cmd.output() {
+            Ok(out) => {
+                if !out.status.success() {
+                    let stderr = String::from_utf8_lossy(&out.stderr);
+                    for line in stderr.lines() {
+                        log(&format!("ssh-probe: {line}"));
+                    }
                 }
+                out.status.success()
+            }
+            Err(e) => {
+                log(&format!("ssh-probe error: {e}"));
+                false
             }
         }
-        bail!("cargo test --no-run failed");
-    }
-
-    let mut bins = Vec::new();
-    for line in &stdout_lines {
-        let Ok(v) = serde_json::from_str::<serde_json::Value>(line) else {
-            continue;
-        };
-        if v.get("reason").and_then(|x| x.as_str()) != Some("compiler-artifact") {
-            continue;
-        }
-        if !v
-            .get("profile")
-            .and_then(|p| p.get("test"))
-            .and_then(|b| b.as_bool())
-            .unwrap_or(false)
-        {
-            continue;
-        }
-        let Some(exe) = v.get("executable").and_then(|e| e.as_str()) else {
-            continue;
-        };
-        let path = PathBuf::from(exe);
-        if path.exists() && path.is_file() {
-            bins.push(path);
-        }
+    } else {
+        cmd.stderr(Stdio::null());
+        cmd.status().map(|s| s.success()).unwrap_or(false)
     }
-    bins.sort();
-    bins.dedup();
-    Ok(bins)
-}
-
-fn stage_test_binaries(vm: &VmConfig, bins: &[PathBuf]) -> Result<Vec<String>> {
-    let stage_dir = vm.work_dir.join("binaries").join("tests");
-    std::fs::create_dir_all(&stage_dir)
-        .with_context(|| format!("create {}", stage_dir.display()))?;
-    // Make world-writable so the guest user can create testdir output here.
-    #[cfg(unix)]
-    {
-        use std::os::unix::fs::PermissionsExt;
-        std::fs::set_permissions(&stage_dir, std::fs::Permissions::from_mode(0o777))?;
-    }
-    let mut staged_guest = Vec::new();
-    for bin in bins {
-        let file = bin
-            .file_name()
-            .and_then(|s| s.to_str())
-            .ok_or_else(|| anyhow!("bad test binary name: {}", bin.display()))?;
-        let dest = stage_dir.join(file);
-        std::fs::copy(bin, &dest)
-            .with_context(|| format!("copy {} -> {}", bin.display(), dest.display()))?;
-        set_executable(&dest)?;
-        staged_guest.push(format!("/work/binaries/tests/{file}"));
-    }
-    Ok(staged_guest)
 }
 
-fn prepare_vm_guest(vm: &VmConfig) -> Result<()> {
-    let script = "set -euo pipefail; export DEBIAN_FRONTEND=noninteractive; if ! command -v ip >/dev/null 2>&1 || ! command -v tc >/dev/null 2>&1 || ! command -v nft >/dev/null 2>&1; then apt-get update; apt-get install -y bridge-utils iproute2 iputils-ping iptables nftables net-tools curl iperf3 jq; fi; modprobe sch_netem || true; sysctl -w net.ipv4.ip_forward=1";
-    ssh_cmd(vm, &["sudo", "bash", "-lc", script])
-}
+// ---------------------------------------------------------------------------
+// VM provisioning helpers
+// ---------------------------------------------------------------------------
 
 fn ensure_dirs(vm: &VmConfig) -> Result<()> {
     std::fs::create_dir_all(vm.vm_dir())
@@ -1326,10 +788,6 @@ fn detect_accel(vm: &VmConfig) -> Result<(String, String)> {
     Ok((accel, cpu))
 }
 
-/// Locate the UEFI firmware blob needed by `qemu-system-aarch64`.
-///
-/// Searches relative to the QEMU binary first (handles Nix and Homebrew
-/// layouts), then falls back to common system paths.
 fn find_aarch64_efi(qemu_bin: &str) -> Option<PathBuf> {
     if let Ok(out) = Command::new("which").arg(qemu_bin).output() {
         if out.status.success() {
@@ -1409,8 +867,6 @@ fn render_cloud_init(vm: &VmConfig) -> Result<()> {
     )
     .with_context(|| format!("write {}", vm.meta_data().display()))?;
 
-    // Match by driver rather than interface name: x86 guests use eth0, arm64
-    // guests use enp0s1 (or similar), but both use virtio_net.
     std::fs::write(
         vm.network_cfg(),
        "version: 2\nethernets:\n  id0:\n    match:\n      driver: virtio_net\n    dhcp4: true\n",
@@ -1428,7 +884,7 @@ fn create_seed(vm: &VmConfig) -> Result<()> {
 }
 
 fn create_seed_iso(vm: &VmConfig) -> Result<bool> {
-    if command_exists("cloud-localds")? {
+    if common::command_exists("cloud-localds")? {
         run_checked(
             Command::new("cloud-localds")
                 .arg("-N")
@@ -1442,11 +898,11 @@
         return Ok(true);
     }
 
-    let mkiso = if command_exists("genisoimage")? {
+    let mkiso = if common::command_exists("genisoimage")? {
         Some(("genisoimage", vec![]))
-    } else if command_exists("mkisofs")? {
+    } else if common::command_exists("mkisofs")?
{ Some(("mkisofs", vec![])) - } else if command_exists("xorriso")? { + } else if common::command_exists("xorriso")? { Some(("xorriso", vec!["-as", "mkisofs"])) } else { None @@ -1499,14 +955,14 @@ fn start_seed_server(vm: &VmConfig) -> Result<()> { cleanup_seed_server(vm)?; need_cmd("python3")?; - let log = File::create(vm.p("seed-http.log")) + let log_file = File::create(vm.p("seed-http.log")) .with_context(|| format!("create {}", vm.p("seed-http.log").display()))?; - let log2 = log.try_clone().context("clone seed log")?; + let log2 = log_file.try_clone().context("clone seed log")?; let child = Command::new("python3") .args(["-m", "http.server", &vm.seed_port, "--bind", "0.0.0.0"]) .current_dir(vm.seed_dir()) - .stdout(Stdio::from(log)) + .stdout(Stdio::from(log_file)) .stderr(Stdio::from(log2)) .spawn() .context("start cloud-init seed http server")?; @@ -1588,8 +1044,9 @@ fn spawn_virtiofsd( pid_path: &Path, readonly: bool, ) -> Result<()> { - let log = File::create(log_path).with_context(|| format!("create {}", log_path.display()))?; - let log2 = log.try_clone().context("clone virtiofsd log")?; + let log_file = + File::create(log_path).with_context(|| format!("create {}", log_path.display()))?; + let log2 = log_file.try_clone().context("clone virtiofsd log")?; let mut cmd = Command::new(bin); cmd.arg("--shared-dir") @@ -1606,7 +1063,6 @@ fn spawn_virtiofsd( if readonly { cmd.arg("--readonly"); } else { - // Map all guest UIDs/GIDs to the host user so the guest can write freely. #[cfg(unix)] let (uid, gid) = { use std::os::unix::fs::MetadataExt; @@ -1623,7 +1079,7 @@ fn spawn_virtiofsd( ]); } let child = cmd - .stdout(Stdio::from(log)) + .stdout(Stdio::from(log_file)) .stderr(Stdio::from(log2)) .spawn() .with_context(|| format!("start {}", bin.display()))?; @@ -1656,7 +1112,6 @@ fn wait_for_ssh(vm: &VmConfig) -> Result<()> { )); let mut last_msg = String::new(); for i in 1..=180 { - // Use verbose probe every 30 attempts (~9s) to diagnose failures. 
let ok = if i % 30 == 0 { ssh_probe_verbose(vm) } else { @@ -1769,95 +1224,6 @@ fn ensure_guest_mounts(vm: &VmConfig) -> Result<()> { Ok(()) } -fn ssh_cmd(vm: &VmConfig, remote_args: &[&str]) -> Result<()> { - ssh_cmd_status(vm, remote_args) -} - -fn ssh_cmd_status(vm: &VmConfig, remote_args: &[&str]) -> Result<()> { - let mut cmd = Command::new("ssh"); - cmd.arg("-i") - .arg(vm.ssh_key()) - .args([ - "-o", - "StrictHostKeyChecking=accept-new", - "-o", - &format!("UserKnownHostsFile={}", vm.known_hosts().display()), - "-o", - "IdentitiesOnly=yes", - "-o", - "ConnectTimeout=5", - "-p", - ]) - .arg(&vm.ssh_port) - .arg(format!("{}@127.0.0.1", vm.ssh_user)); - - if !remote_args.is_empty() { - let remote = shell_join(remote_args); - cmd.arg(remote); - } - run_checked(&mut cmd, "ssh") -} - -fn ssh_probe(vm: &VmConfig) -> bool { - ssh_probe_inner(vm, false) -} - -fn ssh_probe_verbose(vm: &VmConfig) -> bool { - ssh_probe_inner(vm, true) -} - -fn ssh_probe_inner(vm: &VmConfig, verbose: bool) -> bool { - let mut cmd = Command::new("ssh"); - cmd.arg("-i") - .arg(vm.ssh_key()) - .args([ - "-o", - "StrictHostKeyChecking=accept-new", - "-o", - &format!("UserKnownHostsFile={}", vm.known_hosts().display()), - "-o", - "IdentitiesOnly=yes", - "-o", - "BatchMode=yes", - "-o", - "ConnectionAttempts=1", - "-o", - if verbose { - "LogLevel=VERBOSE" - } else { - "LogLevel=ERROR" - }, - "-o", - "ConnectTimeout=1", - "-p", - ]) - .arg(&vm.ssh_port) - .arg(format!("{}@127.0.0.1", vm.ssh_user)) - .arg("true") - .stdout(Stdio::null()); - if verbose { - cmd.stderr(Stdio::piped()); - match cmd.output() { - Ok(out) => { - if !out.status.success() { - let stderr = String::from_utf8_lossy(&out.stderr); - for line in stderr.lines() { - log(&format!("ssh-probe: {line}")); - } - } - out.status.success() - } - Err(e) => { - log(&format!("ssh-probe error: {e}")); - false - } - } - } else { - cmd.stderr(Stdio::null()); - cmd.status().map(|s| s.success()).unwrap_or(false) - } -} - fn start_vm(vm: &mut 
VmConfig) -> Result<()> {
     if is_running(vm)? {
         return Ok(());
@@ -1898,7 +1264,6 @@ fn start_vm(vm: &mut VmConfig) -> Result<()> {
         .arg("-cpu")
         .arg(cpu);
 
-    // aarch64 requires an explicit machine type and UEFI firmware.
     if is_aarch64 {
         qemu.arg("-M").arg("virt");
         if let Some(efi) = find_aarch64_efi(&vm.qemu_bin) {
@@ -1993,27 +1358,6 @@ fn ensure_ssh_port_available(vm: &VmConfig) -> Result<()> {
     }
 }
 
-fn env_or(name: &str, default: &str) -> String {
-    std::env::var(name).unwrap_or_else(|_| default.to_string())
-}
-
-fn cargo_target_dir() -> Result<PathBuf> {
-    let out = Command::new("cargo")
-        .args(["metadata", "--format-version", "1", "--no-deps"])
-        .output()
-        .context("run cargo metadata for target dir")?;
-    if !out.status.success() {
-        bail!("cargo metadata failed while resolving target dir");
-    }
-    let v: serde_json::Value =
-        serde_json::from_slice(&out.stdout).context("parse cargo metadata json")?;
-    let dir = v
-        .get("target_directory")
-        .and_then(|s| s.as_str())
-        .context("cargo metadata missing target_directory")?;
-    Ok(PathBuf::from(dir))
-}
-
 fn shared_image_dir() -> Result<PathBuf> {
     if let Some(data) = dirs::data_dir() {
         return Ok(data.join("patchbay").join("qemu-images"));
@@ -2035,149 +1379,3 @@ fn base_image_name(image_url: &str) -> String {
     let hash = fnv1a64(image_url.as_bytes());
     format!("{clean}-{hash:016x}.qcow2")
 }
-
-fn sanitize_filename(s: &str) -> String {
-    let out: String = s
-        .chars()
-        .map(|c| {
-            if c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_' {
-                c
-            } else {
-                '_'
-            }
-        })
-        .collect();
-    if out.is_empty() {
-        "base-image".to_string()
-    } else {
-        out
-    }
-}
-
-fn fnv1a64(bytes: &[u8]) -> u64 {
-    const OFFSET: u64 = 0xcbf29ce484222325;
-    const PRIME: u64 = 0x100000001b3;
-    let mut h = OFFSET;
-    for b in bytes {
-        h ^= u64::from(*b);
-        h = h.wrapping_mul(PRIME);
-    }
-    h
-}
-
-fn need_cmd(name: &str) -> Result<()> {
-    if command_exists(name)? {
-        Ok(())
-    } else {
-        bail!("missing required command: {name}")
-    }
-}
-
-fn command_exists(name: &str) -> Result<bool> {
-    Ok(Command::new("sh")
-        .arg("-lc")
-        .arg(format!("command -v {name} >/dev/null 2>&1"))
-        .status()
-        .context("check command")?
-        .success())
-}
-
-fn run_checked(cmd: &mut Command, label: &str) -> Result<()> {
-    let status = cmd.status().with_context(|| format!("run '{label}'"))?;
-    if status.success() {
-        Ok(())
-    } else {
-        bail!("command failed: {label} (status {status})")
-    }
-}
-
-fn log(msg: &str) {
-    eprintln!("[qemu] {msg}");
-}
-
-fn abspath(path: &Path) -> Result<PathBuf> {
-    if path.is_absolute() {
-        Ok(path.to_path_buf())
-    } else {
-        Ok(std::env::current_dir()?.join(path))
-    }
-}
-
-fn remove_if_exists(path: &Path) -> Result<()> {
-    if !path.exists() {
-        return Ok(());
-    }
-    if path.is_dir() {
-        std::fs::remove_dir_all(path).with_context(|| format!("remove {}", path.display()))
-    } else {
-        std::fs::remove_file(path).with_context(|| format!("remove {}", path.display()))
-    }
-}
-
-fn read_pid(path: &Path) -> Result<Option<i32>> {
-    if !path.exists() {
-        return Ok(None);
-    }
-    let text = std::fs::read_to_string(path).with_context(|| format!("read {}", path.display()))?;
-    Ok(text.trim().parse::<i32>().ok())
-}
-
-fn pid_alive(pid: i32) -> bool {
-    // SAFETY: kill with signal 0 is side-effect free and used only for liveness probing.
-    let rc = unsafe { nix::libc::kill(pid, 0) };
-    if rc == 0 {
-        true
-    } else {
-        let errno = nix::errno::Errno::last_raw();
-        errno == nix::libc::EPERM
-    }
-}
-
-fn kill_pid(pid: i32) {
-    // SAFETY: best-effort process signal for known pid.
-    let _ = unsafe { nix::libc::kill(pid, nix::libc::SIGTERM) };
-}
-
-fn force_kill_pid(pid: i32) {
-    // SAFETY: best-effort forced process signal for known pid.
-    let _ = unsafe { nix::libc::kill(pid, nix::libc::SIGKILL) };
-}
-
-fn to_guest_sim_path(workspace: &Path, sim: &Path) -> Result<String> {
-    let sim_abs = if sim.is_absolute() {
-        sim.to_path_buf()
-    } else {
-        std::env::current_dir()?.join(sim)
-    };
-    let rel = sim_abs.strip_prefix(workspace).with_context(|| {
-        format!(
-            "sim path {} must be under workspace {}",
-            sim_abs.display(),
-            workspace.display()
-        )
-    })?;
-    Ok(format!("/app/{}", rel.display()))
-}
-
-fn shell_join<T: AsRef<str>>(parts: &[T]) -> String {
-    parts
-        .iter()
-        .map(|p| shell_escape(p.as_ref()))
-        .collect::<Vec<_>>()
-        .join(" ")
-}
-
-fn shell_escape(s: &str) -> String {
-    if s.is_empty() {
-        return "''".to_string();
-    }
-    if s.bytes().all(|b| {
-        matches!(
-            b,
-            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'/' | b':'
-        )
-    }) {
-        return s.to_string();
-    }
-    format!("'{}'", s.replace('\'', "'\"'\"'"))
-}

From 24d6acd7990b72a881d953d2843535550cd531da Mon Sep 17 00:00:00 2001
From: Asmir Avdicevic
Date: Tue, 24 Mar 2026 12:44:45 +0100
Subject: [PATCH 2/6] fmt

---
 patchbay-vm/src/common.rs    | 11 ++---------
 patchbay-vm/src/container.rs | 32 ++++++++++++--------------------
 patchbay-vm/src/main.rs      |  3 +--
 patchbay-vm/src/qemu.rs      | 20 ++++++++++++--------
 4 files changed, 27 insertions(+), 39 deletions(-)

diff --git a/patchbay-vm/src/common.rs b/patchbay-vm/src/common.rs
index 36be194..f81f59f 100644
--- a/patchbay-vm/src/common.rs
+++ b/patchbay-vm/src/common.rs
@@ -128,11 +128,7 @@ pub fn ensure_guest_runner_binary(
     Ok("/work/.patchbay-bin/patchbay".to_string())
 }
 
-fn resolve_vm_runner_binary(
-    work_dir: &Path,
-    _target_dir: &Path,
-    version: &str,
-) -> Result<PathBuf> {
+fn resolve_vm_runner_binary(work_dir: &Path, _target_dir: &Path, version: &str) -> Result<PathBuf> {
     match std::env::consts::OS {
         "linux" | "macos" => {
             if let Some(path) = version.strip_prefix("path:") {
@@ -278,10 +274,7 @@ fn build_musl_from_git_ref(work_dir: &Path, git_ref: &str) -> Result<PathBuf> {
 }
 
 /// Assemble `--binary` overrides for binaries that need building on the host.
-pub fn assemble_guest_build_overrides(
-    target_dir: &Path,
-    args: &RunVmArgs,
-) -> Result<Vec<String>> {
+pub fn assemble_guest_build_overrides(target_dir: &Path, args: &RunVmArgs) -> Result<Vec<String>> {
     let user_override_names = parse_binary_overrides(&args.binary_overrides)?
         .into_keys()
         .collect::<HashSet<String>>();
diff --git a/patchbay-vm/src/container.rs b/patchbay-vm/src/container.rs
index 2aaadff..de7a8cf 100644
--- a/patchbay-vm/src/container.rs
+++ b/patchbay-vm/src/container.rs
@@ -13,13 +13,15 @@ use std::{
 
 use anyhow::{anyhow, bail, Context, Result};
 
-use crate::common::{
-    self, abspath, assemble_guest_build_overrides, build_and_collect_test_binaries,
-    cargo_target_dir, default_musl_target, ensure_guest_runner_binary, env_or, log_msg,
-    need_cmd, run_checked, stage_test_binaries, to_guest_sim_path, RunVmArgs, TestVmArgs,
-    GUEST_PREPARE_SCRIPT,
+use crate::{
+    common::{
+        self, abspath, assemble_guest_build_overrides, build_and_collect_test_binaries,
+        cargo_target_dir, default_musl_target, ensure_guest_runner_binary, env_or, log_msg,
+        need_cmd, run_checked, stage_test_binaries, to_guest_sim_path, RunVmArgs, TestVmArgs,
+        GUEST_PREPARE_SCRIPT,
+    },
+    util::stage_binary_overrides,
 };
-use crate::util::stage_binary_overrides;
 
 // ---------------------------------------------------------------------------
 // Constants
@@ -132,10 +134,7 @@ pub fn status_cmd() -> Result<()> {
     let cfg = ContainerConfig::from_defaults()?;
     println!("backend: container");
     println!("container-name: {}", cfg.name);
-    println!(
-        "running: {}",
-        if is_running(&cfg)? { "yes" } else { "no" }
-    );
+    println!("running: {}", if is_running(&cfg)?
{ "yes" } else { "no" }); if cfg.runtime_file().exists() { println!("runtime: {}", cfg.runtime_file().display()); let text = std::fs::read_to_string(cfg.runtime_file())?; @@ -288,13 +287,9 @@ fn down(cfg: &ContainerConfig) -> Result<()> { return Ok(()); } log(&format!("stopping {}", cfg.name)); - let _ = Command::new("container") - .args(["stop", &cfg.name]) - .status(); + let _ = Command::new("container").args(["stop", &cfg.name]).status(); // Remove the stopped container so the name can be reused. - let _ = Command::new("container") - .args(["rm", &cfg.name]) - .status(); + let _ = Command::new("container").args(["rm", &cfg.name]).status(); common::remove_if_exists(&cfg.runtime_file())?; log(&format!("{} stopped", cfg.name)); Ok(()) @@ -351,10 +346,7 @@ fn start_container(cfg: &ContainerConfig) -> Result<()> { // Mount work dir (read-write) at /work. cmd.args([ "--mount", - &format!( - "type=bind,source={},target=/work", - cfg.work_dir.display() - ), + &format!("type=bind,source={},target=/work", cfg.work_dir.display()), ]); cmd.arg(&cfg.image); diff --git a/patchbay-vm/src/main.rs b/patchbay-vm/src/main.rs index 708a0a6..e756852 100644 --- a/patchbay-vm/src/main.rs +++ b/patchbay-vm/src/main.rs @@ -15,9 +15,8 @@ use std::path::PathBuf; use anyhow::Result; use clap::{Parser, Subcommand, ValueEnum}; -use patchbay_server::DEFAULT_UI_BIND; - use common::{RunVmArgs, TestVmArgs}; +use patchbay_server::DEFAULT_UI_BIND; /// VM backend selection. 
 #[derive(Clone, Debug, ValueEnum)]
diff --git a/patchbay-vm/src/qemu.rs b/patchbay-vm/src/qemu.rs
index eabdac9..2092abb 100644
--- a/patchbay-vm/src/qemu.rs
+++ b/patchbay-vm/src/qemu.rs
@@ -8,14 +8,17 @@ use std::{
 };
 
 use anyhow::{anyhow, bail, Context, Result};
-use crate::common::{
-    self, abspath, assemble_guest_build_overrides, build_and_collect_test_binaries,
-    cargo_target_dir, default_musl_target, ensure_guest_runner_binary, env_or, force_kill_pid,
-    is_arm64_host, kill_pid, log_msg, need_cmd, pid_alive, read_pid, remove_if_exists,
-    run_checked, sanitize_filename, fnv1a64, shell_join, stage_test_binaries, to_guest_sim_path,
-    RunVmArgs, TestVmArgs, GUEST_PREPARE_SCRIPT,
+
+use crate::{
+    common::{
+        self, abspath, assemble_guest_build_overrides, build_and_collect_test_binaries,
+        cargo_target_dir, default_musl_target, ensure_guest_runner_binary, env_or, fnv1a64,
+        force_kill_pid, is_arm64_host, kill_pid, log_msg, need_cmd, pid_alive, read_pid,
+        remove_if_exists, run_checked, sanitize_filename, shell_join, stage_test_binaries,
+        to_guest_sim_path, RunVmArgs, TestVmArgs, GUEST_PREPARE_SCRIPT,
+    },
+    util::stage_binary_overrides,
 };
-use crate::util::stage_binary_overrides;
 
 // ---------------------------------------------------------------------------
 // QEMU-specific constants
@@ -495,7 +498,8 @@ fn down(vm: &VmConfig) -> Result<()> {
 }
 
 fn run_in_guest(vm: &VmConfig, args: &RunVmArgs) -> Result<()> {
-    let guest_exe = ensure_guest_runner_binary(&vm.work_dir, &vm.target_dir, &args.patchbay_version)?;
+    let guest_exe =
+        ensure_guest_runner_binary(&vm.work_dir, &vm.target_dir, &args.patchbay_version)?;
     let auto_build_overrides = assemble_guest_build_overrides(&vm.target_dir, args)?;
     let staged_overrides = stage_binary_overrides(
         &args.binary_overrides,

From 2dfa38b6631e1736626efe46b9759e1a5f260454 Mon Sep 17 00:00:00 2001
From: Asmir Avdicevic
Date: Tue, 24 Mar 2026 12:59:00 +0100
Subject: [PATCH 3/6] ci: add macOS check job with container backend smoke test

- Add macos-check job on self-hosted macOS ARM64 runner
- Move e2e to self-hosted Linux runner
- Switch all jobs to sccache
- Smoke test runs iperf sim via container backend when CLI is available
---
 .github/workflows/ci.yml | 46 ++++++++++++++++++++++++++++++++++------
 1 file changed, 39 insertions(+), 7 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 3b7d67f..609245b 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -8,6 +8,7 @@ on:
 
 env:
   CARGO_TERM_COLOR: always
+  RUSTC_WRAPPER: sccache
 
 jobs:
   ci:
@@ -22,9 +23,7 @@ jobs:
       - uses: taiki-e/install-action@cargo-make
      - uses: taiki-e/install-action@nextest
 
-      - uses: Swatinem/rust-cache@v2
-        with:
-          cache-on-failure: true
+      - uses: mozilla-actions/sccache-action@v0.0.9
 
       - name: Install iperf3
         run: sudo apt-get update && sudo apt-get install -y iperf3
@@ -44,16 +43,49 @@ jobs:
       - name: Format
         run: cargo make format-check
 
+  macos-check:
+    runs-on: [self-hosted, macOS, arm64]
+    steps:
+      - uses: actions/checkout@v5
+
+      - uses: dtolnay/rust-toolchain@stable
+        with:
+          components: clippy
+
+      - uses: mozilla-actions/sccache-action@v0.0.9
+
+      - name: Build patchbay-vm
+        run: cargo build -p patchbay-vm
+
+      - name: Clippy patchbay-vm
+        run: cargo clippy -p patchbay-vm -- -D warnings
+
+      - name: Add musl target
+        run: rustup target add aarch64-unknown-linux-musl
+
+      - name: Install musl cross-compiler
+        run: brew install filosottile/musl-cross/musl-cross
+
+      - name: Container backend smoke test
+        run: |
+          if command -v container >/dev/null 2>&1; then
+            cargo build --release -p patchbay-vm -p patchbay-runner --bin patchbay --target aarch64-unknown-linux-musl
+            ./target/release/patchbay-vm --backend container run \
+              --patchbay-version "path:target/aarch64-unknown-linux-musl/release/patchbay" \
+              ./iroh-integration/patchbay/sims/iperf-1to1-public.toml
+            ./target/release/patchbay-vm --backend container down
+          else
+            echo "container CLI not found, skipping smoke test"
+          fi
+
   e2e:
-    runs-on: ubuntu-latest
+    runs-on: [self-hosted, linux, x64]
     steps:
       - uses: actions/checkout@v5
 
       - uses: dtolnay/rust-toolchain@stable
 
-      - uses: Swatinem/rust-cache@v2
-        with:
-          cache-on-failure: true
+      - uses: mozilla-actions/sccache-action@v0.0.9
 
       - uses: actions/setup-node@v5
         with:

From 3814833a47119506fcfd9d5d61fb31f7aa0a9f5e Mon Sep 17 00:00:00 2001
From: Asmir Avdicevic
Date: Tue, 24 Mar 2026 13:04:23 +0100
Subject: [PATCH 4/6] ci: fix macOS check (add node) and e2e (tolerate missing sysctl)

---
 .github/workflows/ci.yml | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 609245b..6d65694 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -52,6 +52,10 @@ jobs:
         with:
           components: clippy
 
+      - uses: actions/setup-node@v5
+        with:
+          node-version: "20"
+
       - uses: mozilla-actions/sccache-action@v0.0.9
 
       - name: Build patchbay-vm
@@ -126,7 +130,7 @@ jobs:
         run: sudo apt-get update && sudo apt-get install -y iperf3
 
       - name: Enable unprivileged user namespaces
-        run: sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
+        run: sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0 || true
 
       - name: Build Rust (bins + test targets)
         run: cargo build --workspace --all-targets

From 64187dcb4b51e635bfb5dec2d9ac9a10ce4526f8 Mon Sep 17 00:00:00 2001
From: Asmir Avdicevic
Date: Tue, 24 Mar 2026 14:36:30 +0100
Subject: [PATCH 5/6] ci: fix macOS smoke test running as root (skip brew, gate on tooling)

---
 .github/workflows/ci.yml | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 6d65694..173d90b 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -64,23 +64,23 @@ jobs:
       - name: Clippy patchbay-vm
         run: cargo clippy -p patchbay-vm -- -D warnings
 
-      - name: Add musl target
-        run: rustup target add aarch64-unknown-linux-musl
-
-      - name: Install musl cross-compiler
-        run: brew install filosottile/musl-cross/musl-cross
-
       - name: Container backend smoke test
         run: |
-          if command -v container >/dev/null 2>&1; then
-            cargo build --release -p patchbay-vm -p patchbay-runner --bin patchbay --target aarch64-unknown-linux-musl
-            ./target/release/patchbay-vm --backend container run \
-              --patchbay-version "path:target/aarch64-unknown-linux-musl/release/patchbay" \
-              ./iroh-integration/patchbay/sims/iperf-1to1-public.toml
-            ./target/release/patchbay-vm --backend container down
-          else
+          if ! command -v container >/dev/null 2>&1; then
             echo "container CLI not found, skipping smoke test"
+            exit 0
+          fi
+          if ! command -v aarch64-linux-musl-gcc >/dev/null 2>&1; then
+            echo "musl cross-compiler not found, skipping smoke test"
+            echo "install with: brew install filosottile/musl-cross/musl-cross"
+            exit 0
           fi
+          rustup target add aarch64-unknown-linux-musl
+          cargo build --release -p patchbay-vm -p patchbay-runner --bin patchbay --target aarch64-unknown-linux-musl
+          ./target/release/patchbay-vm --backend container run \
+            --patchbay-version "path:target/aarch64-unknown-linux-musl/release/patchbay" \
+            ./iroh-integration/patchbay/sims/iperf-1to1-public.toml
+          ./target/release/patchbay-vm --backend container down
 
   e2e:
     runs-on: [self-hosted, linux, x64]

From 91f9fd2a3712b2cfd1b29e3b01ba307280910bb2 Mon Sep 17 00:00:00 2001
From: Asmir Avdicevic
Date: Tue, 24 Mar 2026 14:57:54 +0100
Subject: [PATCH 6/6] ci: always install Playwright browser on self-hosted runner

---
 .github/workflows/ci.yml | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 173d90b..9f33c72 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -105,26 +105,9 @@ jobs:
         working-directory: ui
         run: npm run build
 
-      - name: Get Playwright version
-        id: pw-version
-        working-directory: ui
-        run: echo "version=$(node -e "console.log(require('./node_modules/@playwright/test/package.json').version)")" >> "$GITHUB_OUTPUT"
-
-      - uses: actions/cache@v5
-        id: pw-cache
-        with:
-          path: ~/.cache/ms-playwright
-          key: playwright-${{ steps.pw-version.outputs.version }}-chromium
-
-      - name: Install Playwright browser
+      - name: Install Playwright browser and system deps
         working-directory: ui
         run: npx playwright install --with-deps chromium
-        if: steps.pw-cache.outputs.cache-hit != 'true'
-
-      - name: Install Playwright system deps
-        working-directory: ui
-        run: npx playwright install-deps chromium
-        if: steps.pw-cache.outputs.cache-hit == 'true'
 
       - name: Install iperf3
         run: sudo apt-get update && sudo apt-get install -y iperf3