From 60e9d48b41029adb1f16c2a0f16f2e883dba064b Mon Sep 17 00:00:00 2001 From: zazabap Date: Tue, 31 Mar 2026 16:13:42 +0000 Subject: [PATCH 01/23] docs: add design spec for proposed reductions Typst note Covers 9 reductions: 2 NP-hardness chain extensions (#973, #198), 4 Tier 1a blocked issues (#379, #380, #888, #822), and 3 Tier 1b blocked issues (#892, #894, #890). Co-Authored-By: Claude Opus 4.6 (1M context) --- ...6-03-31-proposed-reductions-note-design.md | 125 ++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md diff --git a/docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md b/docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md new file mode 100644 index 000000000..9ce7a1a90 --- /dev/null +++ b/docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md @@ -0,0 +1,125 @@ +# Design: Proposed Reduction Rules — Typst Verification Note + +**Date:** 2026-03-31 +**Goal:** Create a standalone Typst document with compiled PDF that formalizes 9 reduction rules from issue #770, resolving blockers for 7 incomplete issues and adding 2 high-leverage NP-hardness chain extensions. 
+ +## Scope + +### 9 Reductions + +**Group 1 — NP-hardness proof chain extensions:** + +| Issue | Reduction | Impact | +|-------|-----------|--------| +| #973 | SubsetSum → Partition | Unlocks ~12 downstream problems | +| #198 | MinimumVertexCover → HamiltonianCircuit | Unlocks ~9 downstream problems | + +**Group 2 — Tier 1a blocked issues (fix + formalize):** + +| Issue | Reduction | Current blocker | +|-------|-----------|----------------| +| #379 | DominatingSet → MinMaxMulticenter | Decision vs optimization MDS model | +| #380 | DominatingSet → MinSumMulticenter | Same | +| #888 | OptimalLinearArrangement → RootedTreeArrangement | Witness extraction impossible for naive approach | +| #822 | ExactCoverBy3Sets → AcyclicPartition | Missing algorithm (unpublished reference) | + +**Group 3 — Tier 1b blocked issues (fix + formalize):** + +| Issue | Reduction | Current blocker | +|-------|-----------|----------------| +| #892 | VertexCover → HamiltonianPath | Depends on #198 (VC→HC) being resolved | +| #894 | VertexCover → PartialFeedbackEdgeSet | Missing Yannakakis 1978b paper | +| #890 | MaxCut → OptimalLinearArrangement | Placeholder algorithm, no actual construction | + +## Deliverables + +1. **`docs/paper/proposed-reductions.typ`** — standalone Typst document +2. **`docs/paper/proposed-reductions.pdf`** — compiled PDF checked into repo +3. **Updated GitHub issues** — #379, #380, #888, #822, #892, #894, #890 corrected with verified algorithms +4. **One PR** containing the note, PDF, and issue updates + +## Document Structure + +``` +Title: Proposed Reduction Rules — Verification Notes +Abstract: Motivation (NP-hardness gaps, blocked issues, impact analysis) + +§1 Notation & Conventions + - Standard symbols (G, V, E, w, etc.) 
+ - Proof structure: Construction → Correctness (⟹/⟸) → Solution Extraction + - Overhead notation + +§2 NP-Hardness Chain Extensions + §2.1 SubsetSum → Partition (#973) + §2.2 MinimumVertexCover → HamiltonianCircuit (#198) + §2.3 VertexCover → HamiltonianPath (#892) + +§3 Graph Reductions + §3.1 MaxCut → OptimalLinearArrangement (#890) + §3.2 OptimalLinearArrangement → RootedTreeArrangement (#888) + +§4 Set & Domination Reductions + §4.1 DominatingSet → MinMaxMulticenter (#379) + §4.2 DominatingSet → MinSumMulticenter (#380) + §4.3 ExactCoverBy3Sets → AcyclicPartition (#822) + +§5 Feedback Set Reductions + §5.1 VertexCover → PartialFeedbackEdgeSet (#894) +``` + +## Per-Reduction Entry Format + +Each reduction follows the `reductions.typ` convention: + +1. **Theorem statement** — 1-3 sentence intuition, citation (e.g., `[GJ79, ND50]`) +2. **Proof** with three mandatory subsections: + - _Construction._ Numbered algorithm steps, all symbols defined before use + - _Correctness._ Bidirectional: (⟹) forward direction, (⟸) backward direction + - _Solution extraction._ How to map target solution back to source +3. **Overhead table** — target size fields as functions of source size fields +4. **Worked example** — concrete small instance, full construction steps, solution verification + +Mathematical notation uses Typst math mode: `$V$`, `$E$`, `$arrow.r.double$`, `$overline(x)$`, etc. + +## Research Plan for Blocked Issues + +For each blocked reduction: + +1. **Search** for the original reference via the citation in Garey & Johnson +2. **Reconstruct** the correct algorithm from the paper or from first principles +3. **Verify** correctness with a hand-worked example in the note +4. 
**Resolve** the blocker: + - #379/#380: Clarify that the reduction operates on the decision variant; note model alignment needed + - #888: Research Gavril 1977a gadget construction for forcing path-tree solutions + - #822: Research the acyclic partition reduction from G&J or construct from first principles + - #892: Chain through #198 (VC→HC→HP); detail the HC→HP modification + - #894: Search for Yannakakis 1978b or reconstruct the gadget + - #890: Research the Garey-Johnson-Stockmeyer 1976 construction + +If a reference is unavailable, construct a novel reduction and clearly mark it as such. + +## Typst Setup + +- Standalone document (not importing from `reductions.typ`) +- Uses: `ctheorems` for theorem/proof environments, `cetz` if diagrams needed +- Page: A4, New Computer Modern 10pt +- Theorem numbering: `Theorem 2.1`, `Theorem 2.2`, etc. +- No dependency on `examples.json` or `reduction_graph.json` (standalone) +- Compile command: `typst compile docs/paper/proposed-reductions.typ docs/paper/proposed-reductions.pdf` + +## Quality Criteria + +Each reduction must satisfy: +1. **Math equations correct** — all formulas verified against source paper or hand-derivation +2. **Provable correctness** — both directions of the proof are rigorous, no hand-waving +3. **Algorithm clear** — detailed enough that a developer can implement `reduce_to()` and `extract_solution()` directly from the proof +4. 
**From math to code verifiable** — overhead expressions match the construction, worked example can be used as a test case + +## PR Structure + +- Branch: `feat/proposed-reductions-note` +- Files: + - `docs/paper/proposed-reductions.typ` (new) + - `docs/paper/proposed-reductions.pdf` (new, compiled) +- No code changes — this is a documentation-only PR +- Issue updates done via `gh issue edit` (not in the PR diff) From b17b48dea965cab51cc0a79d3f6f98c6dd1e1370 Mon Sep 17 00:00:00 2001 From: zazabap Date: Fri, 3 Apr 2026 09:07:43 +0000 Subject: [PATCH 02/23] fix(#974): add 39 example-db lookup tests and 31 paper reduction-rule entries MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Complete missing documentation and test coverage for 41 reduction rules shipped across PRs #777, #779, #804, and #972. Check 10: 39 new `test_find_rule_example_*` tests in example_db.rs covering all cross-problem reductions from the four PRs (ILP i32→bool excluded as a same-name variant edge). Check 11: 31 new `#reduction-rule(...)` entries in reductions.typ — 30 for cross-problem reductions (PRs #777, #779, #972) plus 1 for the ILP → ILP variant reduction via FBBT + truncated binary encoding. Closes #974 Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 305 ++++++++++++++++ src/unit_tests/example_db.rs | 661 +++++++++++++++++++++++++++++++++++ 2 files changed, 966 insertions(+) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 3ff1ba9d8..1d1c51b78 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13036,5 +13036,310 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ First $n$ variables (sign variables) give the source configuration. 
]
+// === Reduction rules (issue #974) ===
+
+#reduction-rule("ILP", "ILP")[
+ Integer variables with finite upper bounds are replaced by groups of binary variables using truncated binary encoding. Feasibility-Based Bound Tightening (FBBT) first infers per-variable upper bounds $U_i$ from the constraint system; each integer variable $x_i in {0, dots, U_i}$ is then encoded as $x_i = sum_(j=0)^(K_i - 1) w_(i j) y_(i j)$ where $y_(i j) in {0, 1}$ and the weights are powers of two with a truncated final weight ensuring $sum w_(i j) = U_i$.
+][
+ _Construction._ Given an ILP over $n$ non-negative integer variables with constraints $A x <= b$ (and/or $>=, =$) and objective $c^top x$:
+ (1) Run FBBT to infer upper bounds $U_1, dots, U_n$. If infeasible, return a trivially infeasible binary ILP. If unbounded, fall back to $U_i = 2^(31) - 1$.
+ (2) For each $x_i$, compute $K_i = ceil(log_2(U_i + 1))$ and weights $w_(i 0) = 1, w_(i 1) = 2, dots, w_(i, K_i - 2) = 2^(K_i - 2), w_(i, K_i - 1) = U_i - (2^(K_i - 1) - 1)$.
+ (3) Substitute $x_i = sum_j w_(i j) y_(i j)$ into every constraint and objective term, expanding each integer coefficient into a sum over binary coefficients.
+
+ _Correctness._ ($arrow.r.double$) Any integer assignment $x_i in {0, dots, U_i}$ admits a truncated binary representation $y_(i j)$ with $sum w_(i j) y_(i j) = x_i$ (such a representation always exists, though it need not be unique, since the truncated final weight may overlap the powers of two), preserving all constraints and the objective value. ($arrow.l.double$) Any binary assignment $y_(i j) in {0,1}$ yields $x_i = sum w_(i j) y_(i j) in {0, dots, U_i}$, so the decoded integer assignment satisfies the original constraints.
+
+ _Solution extraction._ For each source variable $x_i$, compute $x_i = sum_(j=0)^(K_i - 1) w_(i j) y_(i j)$ from the binary solution.
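As a quick sanity check, the weight computation and the decode step can be sketched in standalone Python (illustrative only; the crate itself is Rust, and these helper names are hypothetical):

```python
from math import ceil, log2

def truncated_binary_weights(upper):
    """Weights 1, 2, ..., 2^(K-2) plus a truncated final weight, so that
    subset sums of the weights cover exactly {0, ..., upper}."""
    if upper <= 0:
        return []
    k = ceil(log2(upper + 1))
    weights = [1 << j for j in range(k - 1)]       # 1, 2, ..., 2^(K-2)
    weights.append(upper - ((1 << (k - 1)) - 1))   # truncated final weight
    return weights

def decode(weights, bits):
    """Solution extraction: x = sum_j w_j * y_j."""
    return sum(w * b for w, b in zip(weights, bits))
```

For example, an upper bound of 10 yields weights 1, 2, 4, 3, whose subset sums are exactly 0 through 10.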
+] + +#reduction-rule("HamiltonianCircuit", "HamiltonianPath")[ + To decide whether $G = (V, E)$ contains a Hamiltonian circuit, we split an arbitrary vertex $v_0$ into two copies and attach a private pendant to each copy, forcing any Hamiltonian path in the expanded graph to enter through one pendant, traverse the original circuit, and exit through the other. +][ + _Construction._ Let $G = (V, E)$ with $n = |V|$ and $m = |E|$. Fix vertex $v_0 = 0$. Introduce three new vertices: a duplicate $v'$ of $v_0$, a pendant $s$, and a pendant $t$. Form $G' = (V', E')$ where $V' = V union {v', s, t}$, and $E'$ contains (i) every original edge of $E$; (ii) an edge ${v', u}$ for each neighbor $u$ of $v_0$ in $G$; (iii) the pendant edge ${s, v_0}$; and (iv) the pendant edge ${t, v'}$. Thus $|V'| = n + 3$ and $|E'| = m + deg(v_0) + 2$. + + _Correctness._ ($arrow.r.double$) Suppose $G$ has a Hamiltonian circuit $C = (v_0, u_1, u_2, dots, u_(n-1), v_0)$. The walk $s, v_0, u_1, u_2, dots, u_(n-1), v', t$ visits every vertex of $G'$ exactly once, so $G'$ has a Hamiltonian path. ($arrow.l.double$) Suppose $G'$ has a Hamiltonian path $P$. The pendants $s$ and $t$ have degree one, so $P$ must start at $s$ and end at $t$ (or vice versa), giving the form $s, v_0, pi_1, dots, pi_(n-1), v', t$. Because ${v', pi_(n-1)} in E'$, vertex $pi_(n-1)$ is a neighbor of $v_0$ in $G$. Hence $v_0, pi_1, dots, pi_(n-1), v_0$ is a Hamiltonian circuit in $G$. + + _Solution extraction._ Orient the path so $s$ is the start and $t$ the end. Drop $s$ and the last two elements $v', t$; the remaining sequence is the Hamiltonian circuit witness. +] + +#reduction-rule("KClique", "SubgraphIsomorphism")[ + A $k$-clique in $G$ is precisely a subgraph of $G$ isomorphic to $K_k$. Constructing $K_k$ as the pattern and passing $G$ as the host reduces $k$-clique detection to a single subgraph-isomorphism query. 
+][ + _Construction._ Given a $k$-clique instance $(G, k)$ with $G = (V, E)$, $|V| = n$, $|E| = m$, build the subgraph-isomorphism instance $(H, P)$ where $H := G$ (host) and $P := K_k$ (pattern, with $k$ vertices and $binom(k, 2)$ edges). + + _Correctness._ ($arrow.r.double$) If $G$ contains a $k$-clique $C = {v_1, dots, v_k}$, the injection $f(i) := v_i$ is a subgraph isomorphism from $K_k$ into $G$. ($arrow.l.double$) If $f: V(K_k) -> V(G)$ is a subgraph isomorphism, the image ${f(0), dots, f(k-1)}$ is a set of $k$ pairwise-adjacent vertices, i.e., a $k$-clique. + + _Solution extraction._ The subgraph-isomorphism solution $c in {0, dots, n-1}^k$ gives the host vertex assigned to each pattern vertex. The $k$-clique indicator $x in {0,1}^n$ sets $x[c[i]] := 1$ for each $i$. +] + +#reduction-rule("Partition", "MultiprocessorScheduling")[ + Each element $a_i$ becomes a task of length $a_i$ on $m = 2$ processors with deadline $D = floor(S / 2)$. A balanced partition exists iff a feasible schedule exists. +][ + _Construction._ Let $A = (a_1, dots, a_n)$ with total sum $S = sum_(i=1)^n a_i$. Set task lengths $ell_i = a_i$, number of processors $m = 2$, and deadline $D = floor(S / 2)$. + + _Correctness._ ($arrow.r.double$) If $A' subset.eq A$ has $sum_(i in A') a_i = S/2$, assign tasks in $A'$ to processor 0 and the rest to processor 1; both loads equal $S/2 = D$. ($arrow.l.double$) If a feasible schedule exists with both loads $<= D = floor(S/2)$, since both loads sum to $S$ and each is at most $floor(S/2)$, equality holds, giving a balanced partition. + + _Solution extraction._ The processor assignment $p_i in {0, 1}$ is the partition assignment directly. +] + +#reduction-rule("HamiltonianPath", "IsomorphicSpanningTree")[ + A Hamiltonian path in $G$ is a spanning tree isomorphic to the path graph $P_n$. The reduction asks whether $G$ admits a spanning tree isomorphic to $P_n$. 
+][ + _Construction._ Given $G = (V, E)$ with $|V| = n$, the target instance is $(G, P_n)$ where $P_n$ is the path $0 - 1 - dots - (n-1)$ with $n - 1$ edges. + + _Correctness._ ($arrow.r.double$) A Hamiltonian path $v_0, dots, v_(n-1)$ defines a bijection $phi(i) = v_i$ mapping $P_n$ edges to $G$ edges, giving a spanning tree isomorphic to $P_n$. ($arrow.l.double$) A spanning tree isomorphic to $P_n$ via $phi$ yields the Hamiltonian path $phi(0), phi(1), dots, phi(n-1)$. + + _Solution extraction._ The mapping $c[i]$ (graph vertex assigned to tree vertex $i$) is the Hamiltonian path vertex ordering directly. +] + +#reduction-rule("HamiltonianCircuit", "BottleneckTravelingSalesman")[ + Construct a complete weighted graph with weight 1 on edges of $G$ and weight 2 on non-edges. A Hamiltonian tour of bottleneck cost 1 exists iff $G$ has a Hamiltonian circuit. +][ + _Construction._ Let $G = (V, E)$ with $n = |V|$. Build $K_n$ with $w(u, v) = 1$ if ${u,v} in E$, else $w(u, v) = 2$. + + _Correctness._ ($arrow.r.double$) A Hamiltonian circuit in $G$ uses only weight-1 edges, giving bottleneck cost 1. ($arrow.l.double$) A tour with bottleneck $<= 1$ uses only weight-1 edges, which are exactly edges of $G$, so it is a Hamiltonian circuit. + + _Solution extraction._ Recover the vertex cycle order from the tour edge set. +] + +#reduction-rule("KClique", "ConjunctiveBooleanQuery")[ + Introduce $k$ existential variables over the vertex set and require the edge relation $R(y_i, y_j)$ for every pair $i < j$. The query is satisfiable iff $G$ contains a $k$-clique. +][ + _Construction._ Given $G = (V, E)$ with $|V| = n$ and parameter $k$, set domain $D = V$, define binary relation $R = {(u,v), (v,u) : {u,v} in E}$, and form the conjunction $phi = and.big_(0 <= i < j < k) R(y_i, y_j)$ with $binom(k, 2)$ conjuncts. + + _Correctness._ ($arrow.r.double$) A $k$-clique ${v_0, dots, v_(k-1)}$ satisfies every conjunct since all pairs are adjacent. 
($arrow.l.double$) A satisfying assignment gives $k$ vertices where every pair is in $R$, hence pairwise-adjacent. + + _Solution extraction._ The satisfying tuple $(d_0, dots, d_(k-1))$ gives vertex indices; set $x[d_i] = 1$ for the clique indicator. +] + +#reduction-rule("ExactCoverBy3Sets", "StaffScheduling")[ + Each universe element becomes a time period requiring exactly one worker, each 3-element subset becomes a schedule active on those three periods, and the worker budget is $q = |X|/3$. A feasible assignment selects $q$ schedules covering every period exactly once. +][ + _Construction._ Let $(X, cal(C))$ with $|X| = 3q$. Set $m_p = 3q$ periods (one per element), requirement $r[i] = 1$ for all $i$, worker budget $W = q$. Each subset $S_j = {a, b, c}$ defines schedule $sigma_j$ with $sigma_j[i] = 1$ iff $i in S_j$. + + _Correctness._ ($arrow.r.double$) An exact cover $I$ with $|I| = q$ assigns one worker per selected schedule, covering each period exactly once. ($arrow.l.double$) A feasible assignment with $sum_j w_j = q$ and each period covered exactly once gives pairwise-disjoint schedules covering all of $X$. + + _Solution extraction._ Map $w in NN^m$ to $c in {0,1}^m$ by $c[j] = 1$ if $w[j] > 0$. +] + +#reduction-rule("Satisfiability", "NAESatisfiability")[ + Introduce a fresh sentinel variable $s$ and append it to every clause. The sentinel forces NAE-SAT to simulate ordinary disjunction. +][ + _Construction._ Given SAT instance $phi$ with variables $x_1, dots, x_n$ and clauses $C_1, dots, C_m$, introduce $s$ (index $n+1$). For each clause $C_j = (ell_1 or dots or ell_k)$, form NAE clause $C'_j = (ell_1, dots, ell_k, s)$. + + _Correctness._ ($arrow.r.double$) Set $s = 0$. A satisfying assignment makes at least one $ell_i = 1$ per clause; with $s = 0$, the clause is neither all-true nor all-false. ($arrow.l.double$) If $beta$ NAE-satisfies $phi'$, let $v = beta(s)$. If $v = 0$, at least one literal per clause is true. 
If $v = 1$, flip all variables; by NAE symmetry the complement also works. + + _Solution extraction._ If $beta(s) = 0$, return $(beta(x_1), dots, beta(x_n))$. If $beta(s) = 1$, return $(1 - beta(x_1), dots, 1 - beta(x_n))$. +] + +#reduction-rule("KSatisfiability", "MinimumVertexCover")[ + Each variable contributes a truth-setting edge; each clause contributes a satisfaction-testing triangle. The formula is satisfiable iff the graph has a vertex cover of size $n + 2m$. +][ + _Construction._ Given 3-CNF $phi$ with $n$ variables and $m$ clauses, construct $G = (V, E)$ with $|V| = 2n + 3m$. For each variable $x_i$: vertices $u_i$ (index $2i$) and $overline(u)_i$ (index $2i+1$) with edge $(u_i, overline(u)_i)$. For each clause $c_j$: triangle vertices $t^j_0, t^j_1, t^j_2$ at indices $2n + 3j, 2n+3j+1, 2n+3j+2$. Communication edges connect each $t^j_k$ to the literal vertex of its $k$-th literal. + + _Correctness._ ($arrow.r.double$) A satisfying assignment selects literal vertices ($n$ total) and two triangle vertices per clause ($2m$ total), covering all edges. ($arrow.l.double$) A cover of size $n + 2m$ must include exactly one literal vertex per variable and two triangle vertices per clause; the uncovered triangle vertex's communication edge forces the corresponding literal to be true. + + _Solution extraction._ For variable $x_i$, set $x_i = 1$ if the cover indicator at position $2i$ is 1. +] + +#reduction-rule("Partition", "SequencingWithinIntervals")[ + A unit-length enforcer task pinned at $[floor(S/2), floor(S/2)+1)$ splits the timeline into two blocks. A valid schedule exists iff the elements partition into two equal-sum subsets. +][ + _Construction._ Let $A = {a_1, dots, a_n}$ with $S = sum a_i$ and $h = floor(S/2)$. Create $n+1$ tasks: element tasks with $r_i = 0$, $d_i = S+1$, $p_i = a_i$; enforcer task with $r = h$, $d = h+1$, $p = 1$. 
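The construction above can be sketched in standalone Python, with tasks encoded as (release, deadline, length) triples (an assumed encoding with hypothetical helper names):

```python
def partition_to_sequencing(a):
    """n element tasks plus the unit enforcer pinned at [h, h+1), h = floor(S/2)."""
    total = sum(a)
    h = total // 2
    tasks = [(0, total + 1, ai) for ai in a]   # element tasks: (release, deadline, length)
    tasks.append((h, h + 1, 1))                # enforcer task with a tight window
    return tasks, h

def extract_partition(starts, h):
    """Element task i joins subset 0 if it is scheduled in the first block."""
    return [0 if s <= h else 1 for s in starts]
```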
+ + _Correctness._ ($arrow.r.double$) A balanced partition places one subset's tasks in $[0, h)$ and the other in $[h+1, S+1)$. ($arrow.l.double$) The enforcer at $[h, h+1)$ splits usable time into two blocks of size $h = S/2$; since element tasks fill both blocks exactly, the assignment gives a balanced partition. + + _Solution extraction._ Task $t_i$ starting at time $s_i$: assign to subset 0 if $s_i <= h$, else subset 1. +] + +#reduction-rule("MinimumVertexCover", "MinimumFeedbackArcSet")[ + Each vertex $v$ splits into $v^"in"$ and $v^"out"$ joined by an internal arc weighted $w(v)$. Each edge becomes two crossing arcs weighted $M = 1 + sum_v w(v)$. The optimal FAS never includes crossing arcs; selecting internal arcs for cover vertices breaks every cycle. +][ + _Construction._ Given $(G, w)$ with $G = (V, E)$, $n = |V|$. Build directed graph $H$ on $2n$ nodes. Internal arcs $(v^"in", v^"out")$ with weight $w(v)$. For each ${u,v} in E$: crossing arcs $(u^"out", v^"in")$ and $(v^"out", u^"in")$ with weight $M = 1 + sum_(v in V) w(v)$. + + _Correctness._ ($arrow.r.double$) A vertex cover $S$ gives FAS $F = {(v^"in", v^"out") : v in S}$; every cycle through a crossing arc has at least one internal arc in $F$. ($arrow.l.double$) Since $M$ exceeds total internal weight, no crossing arc is in the optimal FAS. For each edge ${u,v}$, the 4-cycle through both internal and crossing arcs forces at least one internal arc into $F$. + + _Solution extraction._ Internal arcs at positions $0, dots, n-1$; the cover is $c[0 : n]$. +] + +#reduction-rule("KSatisfiability", "KClique")[ + Assign one vertex per literal position $(j, p)$; connect vertices from different clauses whose literals are not contradictory. A $k$-clique selects one consistent true literal per clause. +][ + _Construction._ Given 3-CNF $phi = C_1 and dots.c and C_m$ over $n$ variables, construct $G = (V, E)$ with $V = {(j, p) : 1 <= j <= m, 1 <= p <= 3}$, $|V| = 3m$. 
Edge between $(j_1, p_1)$ and $(j_2, p_2)$ iff $j_1 != j_2$ and $ell_(j_1, p_1) != not ell_(j_2, p_2)$. Set $k = m$. + + _Correctness._ ($arrow.r.double$) A satisfying assignment picks one true literal per clause; these vertices form a clique since they span all clauses without contradiction. ($arrow.l.double$) A $k$-clique has exactly one vertex per clause group; the selected literals are pairwise non-contradictory, defining a satisfying assignment. + + _Solution extraction._ For selected vertex $v$: clause $j = floor(v / 3)$, position $p = v mod 3$. If literal $ell_(j,p) = x_i$ set $x_i = 1$; if $ell_(j,p) = not x_i$ set $x_i = 0$. +] + +#reduction-rule("HamiltonianCircuit", "BiconnectivityAugmentation")[ + Start with the edgeless graph on $n$ vertices. Price original edges at cost 1 and non-edges at cost 2. A budget-$n$ augmentation is achievable iff $G$ has a Hamiltonian circuit (the only way to biconnect with $n$ weight-1 edges is a Hamiltonian cycle). +][ + _Construction._ Given $G = (V, E)$ with $n = |V|$, let $H = (V, emptyset)$. For every pair ${u, v}$: potential edge with weight 1 if ${u,v} in E$, else weight 2. Budget $B = n$. + + _Correctness._ ($arrow.r.double$) A Hamiltonian circuit selects $n$ weight-1 edges forming a 2-connected cycle, cost $= n$. ($arrow.l.double$) Budget $n$ with $>= n$ edges required (degree $>= 2$) forces exactly $n$ weight-1 edges (all from $E$), which must form a Hamiltonian cycle. + + _Solution extraction._ Walk the unique cycle from vertex 0 through selected edges to recover the circuit order. +] + +#reduction-rule("HamiltonianCircuit", "StrongConnectivityAugmentation")[ + Start with the empty digraph on $n$ vertices. Weight-1 candidate arcs correspond to edges of $G$; weight-2 arcs for non-edges. A budget-$n$ augmentation that achieves strong connectivity must select exactly $n$ weight-1 arcs forming a directed Hamiltonian cycle. +][ + _Construction._ Given $G = (V, E)$ with $n = |V|$. Build $D = (V, emptyset)$. 
For every ordered pair $(u, v)$ with $u != v$: candidate arc with weight 1 if ${u,v} in E$, else weight 2. Budget $B = n$. + + _Correctness._ ($arrow.r.double$) A Hamiltonian circuit gives $n$ directed arcs of weight 1 forming a strongly-connected cycle. ($arrow.l.double$) Strong connectivity needs $>= n$ arcs; budget $n$ forces all weight 1, hence all from $E$, forming a single $n$-cycle. + + _Solution extraction._ Follow unique successors from vertex 0 to recover the Hamiltonian permutation. +] + +#reduction-rule("HamiltonianCircuit", "StackerCrane")[ + Each vertex $v_i$ splits into $(v_i^"in", v_i^"out")$ with a mandatory directed arc of length 1. Undirected connector edges of length 1 encode original graph edges. A tour of cost $2n$ exists iff a Hamiltonian circuit exists. +][ + _Construction._ Given $G = (V, E)$ with $n = |V|$. Create $2n$ vertices: $v_i^"in" = 2i$, $v_i^"out" = 2i + 1$. Add $n$ mandatory arcs $(v_i^"in", v_i^"out")$ of length 1. For each ${v_i, v_j} in E$: connector edges $(v_i^"out", v_j^"in")$ and $(v_j^"out", v_i^"in")$ of length 1. + + _Correctness._ ($arrow.r.double$) A Hamiltonian circuit gives a tour serving arcs in circuit order, each inter-arc hop using one connector edge, total cost $2n$. ($arrow.l.double$) Cost $2n$ with $n$ arcs of cost 1 leaves exactly $n$ connector hops of cost 1 each; single-hop connections correspond to edges of $G$. + + _Solution extraction._ The arc service permutation directly encodes the Hamiltonian circuit vertex order. +] + +#reduction-rule("HamiltonianCircuit", "RuralPostman")[ + Each vertex splits into two copies connected by a required edge. Original graph edges become connector edges. A minimum-cost tour of cost $2n$ traversing all required edges exists iff $G$ has a Hamiltonian circuit. +][ + _Construction._ Given $G = (V, E)$ with $n = |V|$. Build $H$ with $2n$ vertices: $v_i^a = 2i$, $v_i^b = 2i+1$. Required edges ${v_i^a, v_i^b}$ of weight 1 ($n$ total). 
For each ${v_i, v_j} in E$: connector edges ${v_i^b, v_j^a}$ and ${v_j^b, v_i^a}$ of weight 1.
+
+ _Correctness._ ($arrow.r.double$) A Hamiltonian circuit $(v_(p_0), dots, v_(p_(n-1)))$ gives a tour traversing required edges and single connector edges, total $2n$. ($arrow.l.double$) Each required edge costs 1 ($n$ total); remaining budget $n$ for connectors forces single-hop connections corresponding to edges of $G$.
+
+ _Solution extraction._ Identify used connector edges and follow the successor map from vertex 0.
+]
+
+#reduction-rule("Partition", "ShortestWeightConstrainedPath")[
+ A layered digraph with include/exclude edges per element: include edge has length $a_i + 1$ and weight 1; exclude has length 1 and weight $a_i + 1$. A path meeting both the weight bound and the length bound witnesses a balanced partition.
+][
+ _Construction._ Build layered graph $v_0, dots, v_n$ with $s = v_0$, $t = v_n$. For each element $a_i$: include edge with length $a_i + 1$, weight 1; exclude edge with length 1, weight $a_i + 1$. Weight bound $W = floor(S/2) + n$; length bound $K = floor(S/2) + n$.
+
+ _Correctness._ ($arrow.r.double$) A balanced partition uses include edges for one subset and exclude edges for the other; the total weight and the total length both equal $n + S/2 = W = K$, so the path is feasible. ($arrow.l.double$) The weight bound forces the included elements to sum to at least $ceil(S/2)$; the length bound forces them to sum to at most $floor(S/2)$. Both can hold only if $S$ is even and the included elements sum to exactly $S/2$, a balanced partition.
+
+ _Solution extraction._ Element $i$ assigned to subset 0 if include edge selected, subset 1 if exclude edge selected.
+]
+
+#reduction-rule("MaximumIndependentSet", "IntegralFlowBundles")[
+ Each vertex $v_i$ maps to a flow unit through an intermediate node; edge-bundle constraints cap combined outflow of adjacent pairs at 1, so feasible flow of value $k$ exists iff an independent set of size $>= k$ exists.
+][
+ _Construction._ Directed graph on $n + 2$ nodes: source $s$, intermediates $w_0, dots, w_(n-1)$, sink $t$. Arcs $a_i^"in" = (s, w_i)$ (index $2i$) and $a_i^"out" = (w_i, t)$ (index $2i+1$).
Edge bundles ${a_i^"out", a_j^"out"}$ with capacity 1 for each ${v_i, v_j} in E$. Vertex bundles ${a_i^"in", a_i^"out"}$ with capacity 2. Flow requirement $R = 1$.
+
+ _Correctness._ ($arrow.r.double$) An independent set $S$ gives flow $f_i = 1$ for $v_i in S$; edge bundles satisfied since $S$ is independent. ($arrow.l.double$) Feasible flow gives $f_i in {0,1}$; edge bundles force ${v_i : f_i = 1}$ to be independent.
+
+ _Solution extraction._ Vertex $v_i$ in independent set iff target solution at arc index $2i+1$ is positive.
+]
+
+#reduction-rule("HamiltonianCircuit", "QuadraticAssignment")[
+ Position-adjacency encoded in cost matrix $C$ (directed cycle on positions), graph-adjacency in distance matrix $D$ (1 for edges, $omega = n+1$ for non-edges). QAP optimum equals $n$ iff a Hamiltonian circuit exists.
+][
+ _Construction._ Let $G = (V, E)$ with $n = |V|$ and $omega = n + 1$. Cost matrix: $c[i][j] = 1$ if $j equiv i+1 space (mod n)$, else 0. Distance matrix: $d[k][l] = 0$ if $k = l$; 1 if ${k,l} in E$; $omega$ otherwise.
+
+ _Correctness._ ($arrow.r.double$) A Hamiltonian circuit $v_0, dots, v_(n-1)$ with $gamma(i) = v_i$ gives cost $n$ (each consecutive pair is an edge). ($arrow.l.double$) Cost $n$ forces $d[gamma(i), gamma((i+1) mod n)] = 1$ for all $i$, meaning all consecutive pairs are edges.
+
+ _Solution extraction._ The QAP permutation $gamma$ is the Hamiltonian circuit visit order directly.
+]
+
+#reduction-rule("HamiltonianPath", "ConsecutiveOnesSubmatrix")[
+ The vertex-edge incidence matrix has the consecutive-ones property on a selected subset of $n-1$ columns iff those columns correspond to a Hamiltonian path. Here a submatrix has the property if its rows can be permuted so that the ones in every column occur consecutively.
+][
+ _Construction._ Given $G = (V, E)$ with $|V| = n$, $|E| = m$, build $n times m$ Boolean matrix $A$ with $A_(i,j) = 1$ iff vertex $i$ is an endpoint of edge $j$. Set bound $K = n - 1$.
+
+ _Correctness._ ($arrow.r.double$) Select the columns of a Hamiltonian path's $n-1$ edges and permute the rows into path order; each selected column then has its two ones in consecutive rows. ($arrow.l.double$) A row permutation making the ones of each of the $n-1$ selected columns consecutive orders the vertices so that every selected edge joins consecutive vertices. The selected edges are distinct, so they occupy the $n-1$ consecutive pairs bijectively, and the ordering is a Hamiltonian path. (Requiring consecutive ones in the rows instead would not suffice: the incidence matrix of the star $K_(1,3)$ has consecutive ones in every row, yet the star has no Hamiltonian path.)
+
+ _Solution extraction._ Selected columns identify edges; walk from a degree-1 vertex to recover the path ordering.
+]
+
+#reduction-rule("Partition", "BinPacking")[
+ Items with sizes $a_i$ packed into 2 bins of capacity $floor(S/2)$. A valid 2-bin packing exists iff a balanced partition exists.
+][
+ _Construction._ Set item sizes $s_i = a_i$, bin capacity $C = floor(S/2)$, number of bins $k = 2$.
+
+ _Correctness._ ($arrow.r.double$) A balanced partition with both subsets summing to $S/2$ gives a valid 2-bin packing. ($arrow.l.double$) Both bins summing to $S$ with capacity $floor(S/2)$ forces $S$ even and each bin holding exactly $S/2$.
+
+ _Solution extraction._ Normalize bin labels: element $i$ in subset 0 if $b_i = b_0$, else subset 1.
+]
+
+#reduction-rule("ExactCoverBy3Sets", "MaximumSetPacking")[
+ The identity map embeds exact cover as set packing: unit-weight 3-element subsets with a $3q$-element universe. A maximum packing of $q$ disjoint sets is an exact cover.
+][
+ _Construction._ Given $(X, cal(C))$ with $|X| = 3q$, the MaximumSetPacking instance is $(X, cal(C))$ with unit weights.
+
+ _Correctness._ ($arrow.r.double$) An exact cover selects $q$ pairwise-disjoint sets, which is a maximum packing (no packing can exceed $q$ sets of size 3 over $3q$ elements). ($arrow.l.double$) A maximum packing of $q$ disjoint size-3 sets covers all $3q$ elements, hence is an exact cover.
+
+ _Solution extraction._ The selection vector is unchanged.
+]
+
+#reduction-rule("NAESatisfiability", "MaxCut")[
+ Each variable $x_i$ becomes a literal pair $(v_i^+, v_i^-)$ joined by a heavy edge of weight $M = m+1$. Clause edges of weight 1 connect literal vertices within each clause.
The optimal cut separates each literal pair and has at least one literal on each side per clause, encoding NAE-satisfiability. +][ + _Construction._ Given $n$ variables and $m$ clauses. For each variable $x_i$: vertices $v_i^+ = 2(i-1)$ and $v_i^- = 2(i-1)+1$ with variable edge of weight $M = m + 1$. For each clause $C_j$: unit-weight edges between all pairs of literal vertices in $C_j$. + + _Correctness._ ($arrow.r.double$) A NAE-satisfying assignment places true literals on one side, false on the other; variable edges cross ($n dot M$ contribution), and each clause has literals on both sides. ($arrow.l.double$) Weight $M > m$ forces every literal pair to be separated; the remaining cut value requires each clause to have literals on both sides. + + _Solution extraction._ Set $x_i = top$ if $v_i^+$ is in side 1, else $x_i = bot$. +] + +#reduction-rule("ThreePartition", "ResourceConstrainedScheduling")[ + Each element becomes a unit-length task requiring $a_i$ units of a shared resource with bound $B$. With 3 processors and deadline $m$, every slot receives exactly 3 tasks summing to $B$. +][ + _Construction._ Given $(S, B)$ with $|S| = 3m$ and $B/4 < a_i < B/2$. Create $3m$ unit-length tasks with resource requirement $r_i = a_i$, $p = 3$ processors, resource bound $B$, deadline $D = m$. + + _Correctness._ ($arrow.r.double$) A valid 3-partition assigns each triple to a time slot; each slot uses exactly $B$ resource units. ($arrow.l.double$) $3m$ tasks in $m$ slots with $p = 3$: every slot has exactly 3 tasks. Resource bound $B$ with total $m B$: each slot sums to exactly $B$. Size constraints prevent fewer or more than 3 elements per slot. + + _Solution extraction._ Task $t_i$ assigned to slot $k$ means element $a_i$ belongs to group $k$. +] + +#reduction-rule("ThreePartition", "SequencingWithReleaseTimesAndDeadlines")[ + $m-1$ filler tasks with tight release windows partition the timeline into $m$ slots of width $B$. 
Element tasks must fill these slots, and size constraints force exactly 3 elements per slot. +][ + _Construction._ Given $(S, B)$ with $|S| = 3m$. Create $3m$ element tasks with $p_i = a_i$, $r_i = 0$, $d_i = H$ where $H = m(B+1) - 1$. Add $m-1$ filler tasks with $p = 1$, $r_j = (j+1)B + j$, $d_j = (j+1)B + j + 1$. + + _Correctness._ ($arrow.r.double$) A valid 3-partition fills each slot of width $B$ with exactly 3 elements; fillers are placed in their tight windows. ($arrow.l.double$) Fillers pinned to unit windows create $m$ slots of width $B$; total element work $m B$ fills all slots exactly; size constraints force 3 elements per slot. + + _Solution extraction._ Decode the Lehmer code, simulate the schedule tracking start times; assign elements to groups by $floor("start" / (B+1))$. +] + +#reduction-rule("ThreePartition", "SequencingToMinimizeWeightedTardiness")[ + High-weight filler tasks with tight deadlines force zero-tardiness schedules to leave exactly $m$ slots of width $B$ for element tasks. Size constraints ensure 3 elements per slot. +][ + _Construction._ Given $(S, B)$ with $|S| = 3m$. Horizon $H = m B + (m-1)$, filler weight $W_f = m B + 1$. Element tasks: $p_i = a_i$, $w_i = 1$, $d_i = H$. Filler tasks ($m-1$): $p = 1$, $w = W_f$, $d_j = (j+1)B + (j+1)$. Tardiness bound $K = 0$. + + _Correctness._ ($arrow.r.double$) A valid 3-partition schedules each triple in a slot between fillers, achieving zero tardiness. ($arrow.l.double$) Zero tardiness forces fillers to their tight deadlines, creating $m$ slots of width $B$; element tasks fill them exactly with 3 per slot. + + _Solution extraction._ Decode Lehmer code; scan left to right incrementing group index at each filler. +] + +#reduction-rule("ThreePartition", "FlowShopScheduling")[ + On 3 machines, $m-1$ separator jobs with large machine-2 tasks force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. 
+][ + _Construction._ Given $(S, B)$ with $|S| = 3m$ and $L = m B + 1$. Element jobs: task lengths $(a_i, a_i, a_i)$ on machines 1, 2, 3. Separator jobs ($m-1$): task lengths $(0, L, 0)$. Deadline $D$ computed from canonical schedule. + + _Correctness._ ($arrow.r.double$) A valid 3-partition interleaves element groups with separators, meeting the deadline. ($arrow.l.double$) Separators on machine 2 create $m$ windows of capacity $B$; element tasks fill exactly; size constraints give 3 per window. + + _Solution extraction._ Decode Lehmer code; walk job sequence incrementing group at each separator. +] + +#reduction-rule("ThreePartition", "JobShopScheduling")[ + On 2 machines, $m-1$ long separator jobs on machine 0 force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. +][ + _Construction._ Given $(S, B)$ with $|S| = 3m$, $L = m B + 1$, $D = m B + (m-1)L$. Element jobs ($3m$): two tasks $(text("machine") 0, a_i)$ then $(text("machine") 1, a_i)$. Separator jobs ($m-1$): single task $(text("machine") 0, L)$. + + _Correctness._ ($arrow.r.double$) A valid 3-partition interleaves element groups with separators on machine 0, achieving makespan $D$. ($arrow.l.double$) Separators of length $L > m B$ create impassable barriers; remaining budget $m B$ split into $m$ windows; size constraints force 3 elements per window. + + _Solution extraction._ Decode machine 0 Lehmer code; walk permutation incrementing group at each separator. +] + +#reduction-rule("MaxCut", "MinimumCutIntoBoundedSets")[ + Invert edge weights relative to $w_"max"$ on a complete graph $K_N$ with $N = 2n'$. A minimum balanced bisection in the inverted graph corresponds to a maximum cut in the original. +][ + _Construction._ Given $G = (V, E, w)$ with $n = |V|$. Set $n' = n + (n mod 2)$, $N = 2n'$, $w_"max" = 1 + max_(e in E) w(e)$. Build $K_N$ with $tilde(w)(i,j) = w_"max" - w(i,j)$ for edges in $E$, else $w_"max"$. Designate $s = n'$, $t = n' + 1$, bound $b = n'$. 
+ + _Correctness._ ($arrow.r.double$) A max-cut extended to a balanced bisection gives a feasible target instance. ($arrow.l.double$) Minimizing $tilde(w)$-cut cost is equivalent to maximizing original weight crossing the cut, since $tilde(w) = w_"max" - w$. + + _Solution extraction._ Return the first $n$ entries of the target assignment. +] + #pagebreak() #bibliography("references.bib", style: "ieee") diff --git a/src/unit_tests/example_db.rs b/src/unit_tests/example_db.rs index 157a9649c..3e7cbb757 100644 --- a/src/unit_tests/example_db.rs +++ b/src/unit_tests/example_db.rs @@ -618,3 +618,664 @@ fn rule_specs_solution_pairs_are_consistent() { } } } + +// ---- Rule lookup tests for issue #974 ---- + +// PR #777 rules + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_hamiltonianpath() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "HamiltonianPath".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "HamiltonianPath"); +} + +#[test] +fn test_find_rule_example_kclique_to_subgraphisomorphism() { + let source = ProblemRef { + name: "KClique".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "SubgraphIsomorphism".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "KClique"); + assert_eq!(example.target.problem, "SubgraphIsomorphism"); +} + +#[test] +fn test_find_rule_example_partition_to_multiprocessorscheduling() { + let source = ProblemRef { + name: "Partition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: 
"MultiprocessorScheduling".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "Partition"); + assert_eq!(example.target.problem, "MultiprocessorScheduling"); +} + +#[test] +fn test_find_rule_example_hamiltonianpath_to_isomorphicspanningtree() { + let source = ProblemRef { + name: "HamiltonianPath".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "IsomorphicSpanningTree".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianPath"); + assert_eq!(example.target.problem, "IsomorphicSpanningTree"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_bottlenecktravelingsalesman() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "BottleneckTravelingSalesman".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "BottleneckTravelingSalesman"); +} + +#[test] +fn test_find_rule_example_kclique_to_conjunctivebooleanquery() { + let source = ProblemRef { + name: "KClique".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "ConjunctiveBooleanQuery".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "KClique"); + assert_eq!(example.target.problem, "ConjunctiveBooleanQuery"); +} + +#[test] +fn test_find_rule_example_exactcoverby3sets_to_staffscheduling() { + let source = ProblemRef { + name: "ExactCoverBy3Sets".to_string(), + 
variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "StaffScheduling".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ExactCoverBy3Sets"); + assert_eq!(example.target.problem, "StaffScheduling"); +} + +#[test] +fn test_find_rule_example_satisfiability_to_naesatisfiability() { + let source = ProblemRef { + name: "Satisfiability".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "NAESatisfiability".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "Satisfiability"); + assert_eq!(example.target.problem, "NAESatisfiability"); +} + +// PR #779 rules + +#[test] +fn test_find_rule_example_ksatisfiability_to_minimumvertexcover() { + let source = ProblemRef { + name: "KSatisfiability".to_string(), + variant: BTreeMap::from([("k".to_string(), "K3".to_string())]), + }; + let target = ProblemRef { + name: "MinimumVertexCover".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "KSatisfiability"); + assert_eq!(example.target.problem, "MinimumVertexCover"); +} + +#[test] +fn test_find_rule_example_partition_to_sequencingwithinintervals() { + let source = ProblemRef { + name: "Partition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "SequencingWithinIntervals".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "Partition"); + assert_eq!(example.target.problem, "SequencingWithinIntervals"); +} + +#[test] +fn test_find_rule_example_minimumvertexcover_to_minimumfeedbackarcset() { + let source = ProblemRef { + name: 
"MinimumVertexCover".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let target = ProblemRef { + name: "MinimumFeedbackArcSet".to_string(), + variant: BTreeMap::from([("weight".to_string(), "i32".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "MinimumVertexCover"); + assert_eq!(example.target.problem, "MinimumFeedbackArcSet"); +} + +#[test] +fn test_find_rule_example_ksatisfiability_to_kclique() { + let source = ProblemRef { + name: "KSatisfiability".to_string(), + variant: BTreeMap::from([("k".to_string(), "K3".to_string())]), + }; + let target = ProblemRef { + name: "KClique".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "KSatisfiability"); + assert_eq!(example.target.problem, "KClique"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_biconnectivityaugmentation() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "BiconnectivityAugmentation".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "BiconnectivityAugmentation"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_strongconnectivityaugmentation() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: 
"StrongConnectivityAugmentation".to_string(), + variant: BTreeMap::from([("weight".to_string(), "i32".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "StrongConnectivityAugmentation"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_stackercrane() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "StackerCrane".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "StackerCrane"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_ruralpostman() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "RuralPostman".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "RuralPostman"); +} + +#[test] +fn test_find_rule_example_partition_to_shortestweightconstrainedpath() { + let source = ProblemRef { + name: "Partition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "ShortestWeightConstrainedPath".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "Partition"); + assert_eq!(example.target.problem, 
"ShortestWeightConstrainedPath"); +} + +#[test] +fn test_find_rule_example_maximumindependentset_to_integralflowbundles() { + let source = ProblemRef { + name: "MaximumIndependentSet".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let target = ProblemRef { + name: "IntegralFlowBundles".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "MaximumIndependentSet"); + assert_eq!(example.target.problem, "IntegralFlowBundles"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_quadraticassignment() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "QuadraticAssignment".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "QuadraticAssignment"); +} + +#[test] +fn test_find_rule_example_hamiltonianpath_to_consecutiveonessubmatrix() { + let source = ProblemRef { + name: "HamiltonianPath".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "ConsecutiveOnesSubmatrix".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianPath"); + assert_eq!(example.target.problem, "ConsecutiveOnesSubmatrix"); +} + +// PR #804 rules + +#[test] +fn test_find_rule_example_minimumvertexcover_to_minimumhittingset() { + let source = ProblemRef { + name: "MinimumVertexCover".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "One".to_string()), + ]), + }; + 
let target = ProblemRef { + name: "MinimumHittingSet".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "MinimumVertexCover"); + assert_eq!(example.target.problem, "MinimumHittingSet"); +} + +#[test] +fn test_find_rule_example_pp2_to_boundedcomponentspanningforest() { + let source = ProblemRef { + name: "PartitionIntoPathsOfLength2".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "BoundedComponentSpanningForest".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "PartitionIntoPathsOfLength2"); + assert_eq!(example.target.problem, "BoundedComponentSpanningForest"); +} + +#[test] +fn test_find_rule_example_hamiltoniancircuit_to_longestcircuit() { + let source = ProblemRef { + name: "HamiltonianCircuit".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "LongestCircuit".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "HamiltonianCircuit"); + assert_eq!(example.target.problem, "LongestCircuit"); +} + +#[test] +fn test_find_rule_example_partition_to_subsetsum() { + let source = ProblemRef { + name: "Partition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "SubsetSum".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "Partition"); + assert_eq!(example.target.problem, "SubsetSum"); +} + +#[test] 
+fn test_find_rule_example_rootedtreearrangement_to_rootedtreestorageassignment() { + let source = ProblemRef { + name: "RootedTreeArrangement".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "RootedTreeStorageAssignment".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "RootedTreeArrangement"); + assert_eq!(example.target.problem, "RootedTreeStorageAssignment"); +} + +#[test] +fn test_find_rule_example_subsetsum_to_capacityassignment() { + let source = ProblemRef { + name: "SubsetSum".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "CapacityAssignment".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "SubsetSum"); + assert_eq!(example.target.problem, "CapacityAssignment"); +} + +#[test] +fn test_find_rule_example_longestcommonsubsequence_to_maximumindependentset() { + let source = ProblemRef { + name: "LongestCommonSubsequence".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "MaximumIndependentSet".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "One".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "LongestCommonSubsequence"); + assert_eq!(example.target.problem, "MaximumIndependentSet"); +} + +#[test] +fn test_find_rule_example_minimumvertexcover_to_ensemblecomputation() { + let source = ProblemRef { + name: "MinimumVertexCover".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "One".to_string()), + ]), + }; + let target = ProblemRef { + name: "EnsembleComputation".to_string(), + variant: BTreeMap::new(), + }; + 
let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "MinimumVertexCover"); + assert_eq!(example.target.problem, "EnsembleComputation"); +} + +#[test] +fn test_find_rule_example_kclique_to_balancedcompletebipartitesubgraph() { + let source = ProblemRef { + name: "KClique".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let target = ProblemRef { + name: "BalancedCompleteBipartiteSubgraph".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "KClique"); + assert_eq!(example.target.problem, "BalancedCompleteBipartiteSubgraph"); +} + +#[test] +fn test_find_rule_example_kcoloring_to_twodimensionalconsecutivesets() { + let source = ProblemRef { + name: "KColoring".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("k".to_string(), "K3".to_string()), + ]), + }; + let target = ProblemRef { + name: "TwoDimensionalConsecutiveSets".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "KColoring"); + assert_eq!(example.target.problem, "TwoDimensionalConsecutiveSets"); +} + +#[test] +fn test_find_rule_example_paintshop_to_qubo() { + let source = ProblemRef { + name: "PaintShop".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "QUBO".to_string(), + variant: BTreeMap::from([("weight".to_string(), "f64".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "PaintShop"); + assert_eq!(example.target.problem, "QUBO"); +} + +// PR #972 rules + +#[test] +fn test_find_rule_example_partition_to_binpacking() { + let source = ProblemRef { + name: "Partition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "BinPacking".to_string(), + variant: 
BTreeMap::from([("weight".to_string(), "i32".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "Partition"); + assert_eq!(example.target.problem, "BinPacking"); +} + +#[test] +fn test_find_rule_example_exactcoverby3sets_to_maximumsetpacking() { + let source = ProblemRef { + name: "ExactCoverBy3Sets".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "MaximumSetPacking".to_string(), + variant: BTreeMap::from([("weight".to_string(), "One".to_string())]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ExactCoverBy3Sets"); + assert_eq!(example.target.problem, "MaximumSetPacking"); +} + +#[test] +fn test_find_rule_example_naesatisfiability_to_maxcut() { + let source = ProblemRef { + name: "NAESatisfiability".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "MaxCut".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "NAESatisfiability"); + assert_eq!(example.target.problem, "MaxCut"); +} + +#[test] +fn test_find_rule_example_threepartition_to_resourceconstrainedscheduling() { + let source = ProblemRef { + name: "ThreePartition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "ResourceConstrainedScheduling".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ThreePartition"); + assert_eq!(example.target.problem, "ResourceConstrainedScheduling"); +} + +#[test] +fn test_find_rule_example_threepartition_to_sequencingwithreleasetimesanddeadlines() { + let source = ProblemRef { + name: "ThreePartition".to_string(), + variant: BTreeMap::new(), + }; + let target = 
ProblemRef { + name: "SequencingWithReleaseTimesAndDeadlines".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ThreePartition"); + assert_eq!( + example.target.problem, + "SequencingWithReleaseTimesAndDeadlines" + ); +} + +#[test] +fn test_find_rule_example_threepartition_to_sequencingtominimizeweightedtardiness() { + let source = ProblemRef { + name: "ThreePartition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "SequencingToMinimizeWeightedTardiness".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ThreePartition"); + assert_eq!( + example.target.problem, + "SequencingToMinimizeWeightedTardiness" + ); +} + +#[test] +fn test_find_rule_example_threepartition_to_flowshopscheduling() { + let source = ProblemRef { + name: "ThreePartition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "FlowShopScheduling".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ThreePartition"); + assert_eq!(example.target.problem, "FlowShopScheduling"); +} + +#[test] +fn test_find_rule_example_threepartition_to_jobshopscheduling() { + let source = ProblemRef { + name: "ThreePartition".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "JobShopScheduling".to_string(), + variant: BTreeMap::new(), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "ThreePartition"); + assert_eq!(example.target.problem, "JobShopScheduling"); +} + +#[test] +fn test_find_rule_example_maxcut_to_minimumcutintoboundedsets() { + let source = ProblemRef { + name: "MaxCut".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + 
("weight".to_string(), "i32".to_string()), + ]), + }; + let target = ProblemRef { + name: "MinimumCutIntoBoundedSets".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let example = find_rule_example(&source, &target).unwrap(); + assert_eq!(example.source.problem, "MaxCut"); + assert_eq!(example.target.problem, "MinimumCutIntoBoundedSets"); +} From 259ad5a279030b71da7be4322812e89a436be476 Mon Sep 17 00:00:00 2001 From: Shiwen An <97461865+zazabap@users.noreply.github.com> Date: Fri, 3 Apr 2026 18:11:43 +0900 Subject: [PATCH 03/23] Delete docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md --- ...6-03-31-proposed-reductions-note-design.md | 125 ------------------ 1 file changed, 125 deletions(-) delete mode 100644 docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md diff --git a/docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md b/docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md deleted file mode 100644 index 9ce7a1a90..000000000 --- a/docs/superpowers/specs/2026-03-31-proposed-reductions-note-design.md +++ /dev/null @@ -1,125 +0,0 @@ -# Design: Proposed Reduction Rules — Typst Verification Note - -**Date:** 2026-03-31 -**Goal:** Create a standalone Typst document with compiled PDF that formalizes 9 reduction rules from issue #770, resolving blockers for 7 incomplete issues and adding 2 high-leverage NP-hardness chain extensions. 
- -## Scope - -### 9 Reductions - -**Group 1 — NP-hardness proof chain extensions:** - -| Issue | Reduction | Impact | -|-------|-----------|--------| -| #973 | SubsetSum → Partition | Unlocks ~12 downstream problems | -| #198 | MinimumVertexCover → HamiltonianCircuit | Unlocks ~9 downstream problems | - -**Group 2 — Tier 1a blocked issues (fix + formalize):** - -| Issue | Reduction | Current blocker | -|-------|-----------|----------------| -| #379 | DominatingSet → MinMaxMulticenter | Decision vs optimization MDS model | -| #380 | DominatingSet → MinSumMulticenter | Same | -| #888 | OptimalLinearArrangement → RootedTreeArrangement | Witness extraction impossible for naive approach | -| #822 | ExactCoverBy3Sets → AcyclicPartition | Missing algorithm (unpublished reference) | - -**Group 3 — Tier 1b blocked issues (fix + formalize):** - -| Issue | Reduction | Current blocker | -|-------|-----------|----------------| -| #892 | VertexCover → HamiltonianPath | Depends on #198 (VC→HC) being resolved | -| #894 | VertexCover → PartialFeedbackEdgeSet | Missing Yannakakis 1978b paper | -| #890 | MaxCut → OptimalLinearArrangement | Placeholder algorithm, no actual construction | - -## Deliverables - -1. **`docs/paper/proposed-reductions.typ`** — standalone Typst document -2. **`docs/paper/proposed-reductions.pdf`** — compiled PDF checked into repo -3. **Updated GitHub issues** — #379, #380, #888, #822, #892, #894, #890 corrected with verified algorithms -4. **One PR** containing the note, PDF, and issue updates - -## Document Structure - -``` -Title: Proposed Reduction Rules — Verification Notes -Abstract: Motivation (NP-hardness gaps, blocked issues, impact analysis) - -§1 Notation & Conventions - - Standard symbols (G, V, E, w, etc.) 
- - Proof structure: Construction → Correctness (⟹/⟸) → Solution Extraction - - Overhead notation - -§2 NP-Hardness Chain Extensions - §2.1 SubsetSum → Partition (#973) - §2.2 MinimumVertexCover → HamiltonianCircuit (#198) - §2.3 VertexCover → HamiltonianPath (#892) - -§3 Graph Reductions - §3.1 MaxCut → OptimalLinearArrangement (#890) - §3.2 OptimalLinearArrangement → RootedTreeArrangement (#888) - -§4 Set & Domination Reductions - §4.1 DominatingSet → MinMaxMulticenter (#379) - §4.2 DominatingSet → MinSumMulticenter (#380) - §4.3 ExactCoverBy3Sets → AcyclicPartition (#822) - -§5 Feedback Set Reductions - §5.1 VertexCover → PartialFeedbackEdgeSet (#894) -``` - -## Per-Reduction Entry Format - -Each reduction follows the `reductions.typ` convention: - -1. **Theorem statement** — 1-3 sentence intuition, citation (e.g., `[GJ79, ND50]`) -2. **Proof** with three mandatory subsections: - - _Construction._ Numbered algorithm steps, all symbols defined before use - - _Correctness._ Bidirectional: (⟹) forward direction, (⟸) backward direction - - _Solution extraction._ How to map target solution back to source -3. **Overhead table** — target size fields as functions of source size fields -4. **Worked example** — concrete small instance, full construction steps, solution verification - -Mathematical notation uses Typst math mode: `$V$`, `$E$`, `$arrow.r.double$`, `$overline(x)$`, etc. - -## Research Plan for Blocked Issues - -For each blocked reduction: - -1. **Search** for the original reference via the citation in Garey & Johnson -2. **Reconstruct** the correct algorithm from the paper or from first principles -3. **Verify** correctness with a hand-worked example in the note -4. 
**Resolve** the blocker: - - #379/#380: Clarify that the reduction operates on the decision variant; note model alignment needed - - #888: Research Gavril 1977a gadget construction for forcing path-tree solutions - - #822: Research the acyclic partition reduction from G&J or construct from first principles - - #892: Chain through #198 (VC→HC→HP); detail the HC→HP modification - - #894: Search for Yannakakis 1978b or reconstruct the gadget - - #890: Research the Garey-Johnson-Stockmeyer 1976 construction - -If a reference is unavailable, construct a novel reduction and clearly mark it as such. - -## Typst Setup - -- Standalone document (not importing from `reductions.typ`) -- Uses: `ctheorems` for theorem/proof environments, `cetz` if diagrams needed -- Page: A4, New Computer Modern 10pt -- Theorem numbering: `Theorem 2.1`, `Theorem 2.2`, etc. -- No dependency on `examples.json` or `reduction_graph.json` (standalone) -- Compile command: `typst compile docs/paper/proposed-reductions.typ docs/paper/proposed-reductions.pdf` - -## Quality Criteria - -Each reduction must satisfy: -1. **Math equations correct** — all formulas verified against source paper or hand-derivation -2. **Provable correctness** — both directions of the proof are rigorous, no hand-waving -3. **Algorithm clear** — detailed enough that a developer can implement `reduce_to()` and `extract_solution()` directly from the proof -4. 
**From math to code verifiable** — overhead expressions match the construction, worked example can be used as a test case - -## PR Structure - -- Branch: `feat/proposed-reductions-note` -- Files: - - `docs/paper/proposed-reductions.typ` (new) - - `docs/paper/proposed-reductions.pdf` (new, compiled) -- No code changes — this is a documentation-only PR -- Issue updates done via `gh issue edit` (not in the PR diff) From 93fd5b6029ba1c55d4e8ca7b55dd812993633b1b Mon Sep 17 00:00:00 2001 From: zazabap Date: Fri, 3 Apr 2026 09:23:41 +0000 Subject: [PATCH 04/23] docs: add Institute of Science Tokyo affiliation to paper Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 1d1c51b78..1229ec751 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -510,9 +510,9 @@ #align(center)[ #text(size: 16pt, weight: "bold")[Problem Reductions: Models and Transformations] #v(0.5em) - #text(size: 11pt)[Jin-Guo Liu#super[1] #h(1em) Xi-Wei Pan#super[1] #h(1em) Shi-Wen An] + #text(size: 11pt)[Jin-Guo Liu#super[1] #h(1em) Xi-Wei Pan#super[1] #h(1em) Shi-Wen An#super[2]] #v(0.3em) - #text(size: 9pt)[#super[1]Hong Kong University of Science and Technology (Guangzhou)] + #text(size: 9pt)[#super[1]Hong Kong University of Science and Technology (Guangzhou) #h(1em) #super[2]Institute of Science Tokyo] #v(0.3em) #text(size: 10pt, style: "italic")[github.com/CodingThrust/problem-reductions] #v(1em) From 4d548143999bf80b8932b3800cf08fe7bd479762 Mon Sep 17 00:00:00 2001 From: zazabap Date: Fri, 3 Apr 2026 09:24:38 +0000 Subject: [PATCH 05/23] chore: remove accidentally committed docs/superpowers directory Co-Authored-By: Claude Opus 4.6 (1M context) --- .../plans/2026-03-24-tier3-ilp-reductions.md | 547 ------------------ .../2026-03-24-tier3-ilp-reductions-design.md | 220 ------- 2 files changed, 767 deletions(-) delete mode 100644 
docs/superpowers/plans/2026-03-24-tier3-ilp-reductions.md delete mode 100644 docs/superpowers/specs/2026-03-24-tier3-ilp-reductions-design.md diff --git a/docs/superpowers/plans/2026-03-24-tier3-ilp-reductions.md b/docs/superpowers/plans/2026-03-24-tier3-ilp-reductions.md deleted file mode 100644 index d0338d75b..000000000 --- a/docs/superpowers/plans/2026-03-24-tier3-ilp-reductions.md +++ /dev/null @@ -1,547 +0,0 @@ -# Tier 3 ILP Reductions Implementation Plan - -> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. - -**Goal:** Connect 39 orphan Tier 3 problems to ILP via direct reductions in one PR, with shared linearization helpers. - -**Architecture:** Each reduction follows the established pattern: a `#[reduction(overhead)]` macro on an `impl ReduceTo<ILP<...>>` block, a `ReductionResult` struct with `extract_solution`, and a closed-loop test. A new `ilp_helpers.rs` module provides shared linearization primitives (McCormick, MTZ, flow conservation, big-M, abs-diff, minimax, one-hot decode). Paper entries are already written in `docs/paper/reductions.typ`. - -**Tech Stack:** Rust, `#[reduction]` proc macro, `ILP<bool>` / `ILP<i32>` target types, `LinearConstraint` API. - -**Spec:** `docs/superpowers/specs/2026-03-24-tier3-ilp-reductions-design.md` -**Paper entries:** `docs/paper/reductions.typ` (search for each `#reduction-rule("ProblemName", "ILP")`) - -**Status:** Paper entries are committed and reviewed. All 39 entries have standardized multiline ILP equation blocks + detailed prose constructions. 9 complex entries have been expanded with full variable indexing, big-M values, and flow schemes. All symbols verified against problem definitions.
- ---- - -## CRITICAL: Paper Is Ground Truth - -**The Typst paper (`docs/paper/reductions.typ`) is the authoritative source for every ILP formulation.** Each reduction-rule entry contains a standardized multiline equation block showing the complete ILP (objective/find + constraints + domain), plus prose explaining variable meanings and solution extraction. These entries have been reviewed and verified against the model files. - -**When implementing each reduction in Rust, you MUST:** -1. **Read the paper entry first** — find the `#reduction-rule("ProblemName", "ILP")` block -2. **Implement exactly the formulation described in the paper** — same variables, same constraints, same extraction logic. Do NOT invent a different formulation. -3. **Cross-check** — if you find the paper's formulation seems wrong or incomplete, STOP and flag it for human review. Do not silently deviate. -4. **The spec file is secondary** — it provides metadata (ILP type, helpers, dims) but the paper has the precise mathematical construction. When they conflict, the paper wins.
- ---- - -## File Structure - -**New files (40 total):** -- `src/rules/ilp_helpers.rs` — shared helper module -- `src/unit_tests/rules/ilp_helpers.rs` — helper tests -- 39 rule files: `src/rules/_ilp.rs` -- 39 test files: `src/unit_tests/rules/_ilp.rs` - -**Modified files:** -- `src/rules/mod.rs` — 39 module declarations + 39 canonical_rule_example_specs calls - ---- - -## Reference Files - -Before implementing ANY task, read these files to understand the patterns: - -- **Rule template:** `src/rules/maximalis_ilp.rs` (complete ILP reduction example) -- **Test template:** `src/unit_tests/rules/knapsack_ilp.rs` (closed-loop test pattern) -- **Test helpers:** `src/rules/test_helpers.rs` (assertion functions) -- **ILP model:** `src/models/algebraic/ilp.rs` (LinearConstraint, ILP struct, ObjectiveSense) -- **Paper formulations:** `docs/paper/reductions.typ` lines 8206-8607 (mathematical reference for each reduction) - ---- - -## Task 0: Helper Module - -**Files:** -- Create: `src/rules/ilp_helpers.rs` -- Create: `src/unit_tests/rules/ilp_helpers.rs` -- Modify: `src/rules/mod.rs` (add module declaration) - -- [ ] **Step 0.1: Add module declaration to mod.rs** - -Add inside the `#[cfg(feature = "ilp-solver")]` block in `src/rules/mod.rs`: -```rust -#[cfg(feature = "ilp-solver")] -pub(crate) mod ilp_helpers; -``` - -- [ ] **Step 0.2: Write helper tests (TDD)** - -Create `src/unit_tests/rules/ilp_helpers.rs` with tests for all 7 helpers: -```rust -// Test mccormick_product: verify 3 constraints y<=x_a, y<=x_b, y>=x_a+x_b-1 -// Test mtz_ordering: verify arc constraints + bound constraints -// Test flow_conservation: verify demand equations at each node -// Test big_m_activation: verify f <= M*y -// Test abs_diff_le: verify two constraints for |a-b| <= z -// Test minimax_constraints: verify z >= expr_i for each expr -// Test one_hot_decode: verify correct index extraction -``` - -- [ ] **Step 0.3: Implement ilp_helpers.rs** - -Create `src/rules/ilp_helpers.rs` with 7 public 
functions matching the spec's Phase 0 signatures. Reference `src/models/algebraic/ilp.rs` for `LinearConstraint` API. - -- [ ] **Step 0.4: Run tests, verify pass** - -```bash -cargo test --features ilp-solver ilp_helpers -- --nocapture -``` - -- [ ] **Step 0.5: Commit** - -```bash -git add src/rules/ilp_helpers.rs src/unit_tests/rules/ilp_helpers.rs src/rules/mod.rs -git commit -m "feat: add shared ILP linearization helpers (McCormick, MTZ, flow, big-M, abs-diff, minimax, one-hot)" -``` - ---- - -## Task 1: Flow-based reductions (9 rules) - -**For each rule below, follow this sub-pattern:** -1. **Read the paper entry FIRST** (`docs/paper/reductions.typ`) — this is the ground truth for the ILP formulation (variables, constraints, objective, extraction). Implement exactly what it says. -2. Read the model file (`src/models//.rs`) — note `dims()`, `Value`, getters for overhead expressions -3. Write the test file (`src/unit_tests/rules/_ilp.rs`) — closed-loop test with small instance -4. Write the rule file (`src/rules/_ilp.rs`) — implement the paper's formulation in Rust, with extract_solution + canonical example -5. Add module + specs registration to `src/rules/mod.rs` -6. Run `cargo test --features ilp-solver _ilp` -7. Run `cargo clippy --features ilp-solver` - -### Task 1.1: IntegralFlowHomologousArcs → ILP - -**Files:** -- Create: `src/rules/integralflowhomologousarcs_ilp.rs` -- Create: `src/unit_tests/rules/integralflowhomologousarcs_ilp.rs` -- Modify: `src/rules/mod.rs` -- Model: `src/models/graph/integral_flow_homologous_arcs.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8209 - -**ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (f_a values). -**Formulation:** Integer f_a per arc. Capacity, conservation, homologous equality, requirement. 
-**Helpers:** `flow_conservation` - -- [ ] **Step 1.1.1:** Write test — construct small network (4-5 nodes, 6-8 arcs, 1-2 homologous pairs), test closed-loop with `assert_satisfaction_round_trip_from_satisfaction_target` -- [ ] **Step 1.1.2:** Write rule — `impl ReduceTo>`, overhead = `{ num_vars = "num_arcs", num_constraints = "num_arcs + num_vertices + num_homologous_pairs" }` -- [ ] **Step 1.1.3:** Register in mod.rs, run tests + clippy - -### Task 1.2: IntegralFlowWithMultipliers → ILP - -**Files:** `src/rules/integralflowwithmultipliers_ilp.rs` + test -- Model: `src/models/graph/integral_flow_with_multipliers.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8219 - -**ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct. -**Formulation:** Integer f_a per arc. Capacity, multiplier-scaled conservation, requirement. -**Helpers:** `flow_conservation` (adapted for multipliers) - -- [ ] **Step 1.2.1-1.2.3:** Test → Rule → Register (same sub-pattern) - -### Task 1.3: PathConstrainedNetworkFlow → ILP - -**Files:** `src/rules/pathconstrainednetworkflow_ilp.rs` + test -- Model: `src/models/graph/path_constrained_network_flow.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8229 - -**ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (f_p per path). -**Formulation:** Integer f_p per allowed path. Arc capacity aggregation, requirement. -**Helpers:** None - -- [ ] **Step 1.3.1-1.3.3:** Test → Rule → Register - -### Task 1.4: DisjointConnectingPaths → ILP - -**Files:** `src/rules/disjointconnectingpaths_ilp.rs` + test -- Model: `src/models/graph/disjoint_connecting_paths.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8239 - -**ILP type:** `bool`. **Value:** `Or`. **Extract:** OR over commodities → binary edge selection. -**Formulation:** Binary f^k_{uv} per commodity per arc. Conservation, vertex-disjointness (Σ_k ≤ 1), order vars for subtour elimination. 
-**Helpers:** `flow_conservation` - -- [ ] **Step 1.4.1-1.4.3:** Test → Rule → Register - -### Task 1.5: LengthBoundedDisjointPaths → ILP - -**Files:** `src/rules/lengthboundeddisjointpaths_ilp.rs` + test -- Model: `src/models/graph/length_bounded_disjoint_paths.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8249 - -**ILP type:** `ILP`. **Value:** `Or`. **Extract:** Flow vars → vertex indicators per path slot. -**Formulation:** Binary flow + integer hop counters per commodity. Conservation, disjointness, hop ≤ L. -**Helpers:** `flow_conservation` - -- [ ] **Step 1.5.1-1.5.3:** Test → Rule → Register - -### Task 1.6: MixedChinesePostman → ILP - -**Files:** `src/rules/mixedchinesepostman_ilp.rs` + test -- Model: `src/models/graph/mixed_chinese_postman.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8259 - -**ILP type:** `ILP`. **Value:** `Or`. **Extract:** Orientation bits d_e. -**Formulation:** Binary orientation + integer augmentation + connectivity flow. Euler balance, length bound. -**Helpers:** `flow_conservation`, `big_m_activation` - -- [ ] **Step 1.6.1-1.6.3:** Test → Rule → Register - -### Task 1.7: RuralPostman → ILP - -**Files:** `src/rules/ruralpostman_ilp.rs` + test -- Model: `src/models/graph/rural_postman.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8269 - -**ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (t_e ternary multiplicity, `dims() = vec![3; num_edges]`). -**Formulation:** Integer t_e ∈ {0,1,2} + binary y_e + flow. Required coverage, even degree, connectivity, length bound. -**Helpers:** `flow_conservation`, `big_m_activation` - -- [ ] **Step 1.7.1-1.7.3:** Test → Rule → Register - -### Task 1.8: StackerCrane → ILP - -**Files:** `src/rules/stackercrane_ilp.rs` + test -- Model: `src/models/misc/stacker_crane.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8279 - -**ILP type:** `ILP`. **Value:** `Or`. 
**Extract:** One-hot decode → arc permutation (`dims() = vec![m; m]`). -**Formulation:** Binary x_{a,p} position-assignment + McCormick z for consecutive pairs. Precomputed shortest-path connector costs. -**Helpers:** `mccormick_product`, `one_hot_decode` - -- [ ] **Step 1.8.1-1.8.3:** Test → Rule → Register - -### Task 1.9: SteinerTreeInGraphs → ILP - -**Files:** `src/rules/steinertreeingraphs_ilp.rs` + test -- Model: `src/models/graph/steiner_tree_in_graphs.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8289 - -**ILP type:** `ILP`. **Value:** `Min` (optimization). **Extract:** Direct (edge selection). -**Formulation:** Binary y_e + multi-commodity flow. Same pattern as existing SteinerTree→ILP. -**Helpers:** `flow_conservation`, `big_m_activation` - -- [ ] **Step 1.9.1-1.9.3:** Test (use `assert_optimization_round_trip_from_optimization_target`) → Rule → Register - -- [ ] **Step 1.10: Run full flow-based test suite + commit** - -```bash -cargo test --features ilp-solver -- integralflow steiner disjoint lengthbounded mixed rural stacker -cargo clippy --features ilp-solver -git add src/rules/*_ilp.rs src/unit_tests/rules/*_ilp.rs src/rules/mod.rs -git commit -m "feat: add 9 flow-based Tier 3 ILP reductions" -``` - ---- - -## Task 2: Scheduling reductions (7 rules) - -**Common note:** FlowShopScheduling, MinimumTardinessSequencing, SequencingToMinimizeWeightedTardiness use Lehmer-code configs. Extract via: sort jobs by ILP completion times → derive permutation → convert to Lehmer code. Use a shared `permutation_to_lehmer()` helper (can be added to `ilp_helpers.rs`). - -### Task 2.1: FlowShopScheduling → ILP -- Model: `src/models/misc/flow_shop_scheduling.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8301 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Completion times → sort → Lehmer code. 
-- [ ] **Step 2.1.1-2.1.3:** Test → Rule → Register - -### Task 2.2: MinimumTardinessSequencing → ILP -- Model: `src/models/misc/minimum_tardiness_sequencing.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8311 -- **ILP type:** `ILP`. **Value:** `Min` (optimization). **Extract:** Position decode → Lehmer code. -- [ ] **Step 2.2.1-2.2.3:** Test (optimization round-trip) → Rule → Register - -### Task 2.3: ResourceConstrainedScheduling → ILP -- Model: `src/models/misc/resource_constrained_scheduling.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8321 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Time-slot decode. -- [ ] **Step 2.3.1-2.3.3:** Test → Rule → Register - -### Task 2.4: SequencingToMinimizeMaximumCumulativeCost → ILP -- Model: `src/models/misc/sequencing_to_minimize_maximum_cumulative_cost.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8331 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Position decode → Lehmer code. -- [ ] **Step 2.4.1-2.4.3:** Test → Rule → Register - -### Task 2.5: SequencingToMinimizeWeightedTardiness → ILP -- Model: `src/models/misc/sequencing_to_minimize_weighted_tardiness.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8341 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Completion times → sort → Lehmer code. -- [ ] **Step 2.5.1-2.5.3:** Test → Rule → Register - -### Task 2.6: SequencingWithReleaseTimesAndDeadlines → ILP -- Model: `src/models/misc/sequencing_with_release_times_and_deadlines.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8351 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Start-time decode → sort → Lehmer code. -- [ ] **Step 2.6.1-2.6.3:** Test → Rule → Register - -### Task 2.7: TimetableDesign → ILP -- Model: `src/models/misc/timetable_design.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8361 -- **ILP type:** `ILP`. **Value:** `Or`. 
**Extract:** Direct (binary tensor). -- [ ] **Step 2.7.1-2.7.3:** Test → Rule → Register - -- [ ] **Step 2.8: Run full scheduling test suite + commit** - -```bash -cargo test --features ilp-solver -- flowshop tardiness resourceconstrained sequencing timetable -cargo clippy --features ilp-solver -git commit -m "feat: add 7 scheduling Tier 3 ILP reductions" -``` - ---- - -## Task 3: Position/Assignment + McCormick reductions (6 rules) - -### Task 3.1: HamiltonianPath → ILP -- Model: `src/models/graph/hamiltonian_path.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8373 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode → vertex permutation (`dims() = vec![n; n]`). -- **Helpers:** `mccormick_product`, `one_hot_decode` -- [ ] **Step 3.1.1-3.1.3:** Test → Rule → Register - -### Task 3.2: BottleneckTravelingSalesman → ILP -- Model: `src/models/graph/bottleneck_traveling_salesman.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8383 -- **ILP type:** `ILP`. **Value:** `Min` (optimization). **Extract:** Position tour → edge selection (`dims() = vec![2; num_edges]`). -- **Helpers:** `mccormick_product`, `minimax_constraints`, `one_hot_decode` -- [ ] **Step 3.2.1-3.2.3:** Test (optimization round-trip) → Rule → Register - -### Task 3.3: LongestCircuit → ILP -- Model: `src/models/graph/longest_circuit.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8393 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (binary edge selection). -- **Formulation:** Degree-2 vertex selection + flow connectivity (NOT position-assignment). No McCormick. -- **Helpers:** `flow_conservation` -- [ ] **Step 3.3.1-3.3.3:** Test → Rule → Register - -### Task 3.4: QuadraticAssignment → ILP -- Model: `src/models/algebraic/quadratic_assignment.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8403 -- **ILP type:** `ILP`. **Value:** `Min` (optimization). 
**Extract:** One-hot decode → injection (`dims() = vec![num_locations; num_facilities]`). -- **Helpers:** `mccormick_product`, `one_hot_decode` -- [ ] **Step 3.4.1-3.4.3:** Test (optimization round-trip) → Rule → Register - -### Task 3.5: OptimalLinearArrangement → ILP -- Model: `src/models/graph/optimal_linear_arrangement.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8413 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode → vertex positions (`dims() = vec![n; n]`). -- **Helpers:** `abs_diff_le`, `one_hot_decode` -- [ ] **Step 3.5.1-3.5.3:** Test → Rule → Register - -### Task 3.6: SubgraphIsomorphism → ILP -- Model: `src/models/graph/subgraph_isomorphism.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8423 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode → injection (`dims() = vec![n_host; n_pattern]`). -- **Formulation:** No McCormick — direct non-edge constraints `x_{v,u} + x_{w,u'} ≤ 1`. -- [ ] **Step 3.6.1-3.6.3:** Test → Rule → Register - -- [ ] **Step 3.7: Run full position/assignment test suite + commit** - -```bash -cargo test --features ilp-solver -- hamiltonianpath bottleneck longestcircuit quadratic optimal subgraph -cargo clippy --features ilp-solver -git commit -m "feat: add 6 position/assignment Tier 3 ILP reductions" -``` - ---- - -## Task 4: Graph structure reductions (7 rules) - -### Task 4.1: AcyclicPartition → ILP -- Model: `src/models/graph/acyclic_partition.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8435 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode x_{v,c} → partition label (`dims() = vec![n; n]`). -- **Formulation:** Binary assignment + McCormick same-class indicators + class ordering for quotient DAG. 
-- **Helpers:** `mccormick_product`, `one_hot_decode` -- [ ] **Step 4.1.1-4.1.3:** Test → Rule → Register - -### Task 4.2: BalancedCompleteBipartiteSubgraph → ILP -- Model: `src/models/graph/balanced_complete_bipartite_subgraph.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8445 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (binary selection). -- **Formulation:** Binary x_l, y_r. Balance + non-edge constraints. No McCormick. -- [ ] **Step 4.2.1-4.2.3:** Test → Rule → Register - -### Task 4.3: BicliqueCover → ILP -- Model: `src/models/graph/biclique_cover.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8455 -- **ILP type:** `ILP`. **Value:** `Min` (optimization). **Extract:** Direct (membership bits). -- **Helpers:** `mccormick_product` -- [ ] **Step 4.3.1-4.3.3:** Test (optimization round-trip) → Rule → Register - -### Task 4.4: BiconnectivityAugmentation → ILP -- Model: `src/models/graph/biconnectivity_augmentation.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8465 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (binary edge selection). -- **Formulation:** Binary y_e + flow for 2-vertex-connectivity (per-vertex-deletion connectivity check). -- **Helpers:** `flow_conservation`, `big_m_activation` -- [ ] **Step 4.4.1-4.4.3:** Test → Rule → Register - -### Task 4.5: BoundedComponentSpanningForest → ILP -- Model: `src/models/graph/bounded_component_spanning_forest.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8475 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Component label decode. -- **Formulation:** Binary x_{v,c} assignment + weight bounds + flow connectivity within components. 
-- **Helpers:** `flow_conservation`, `one_hot_decode` -- [ ] **Step 4.5.1-4.5.3:** Test → Rule → Register - -### Task 4.6: MinimumCutIntoBoundedSets → ILP -- Model: `src/models/graph/minimum_cut_into_bounded_sets.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8485 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (partition bit-vector). -- **Formulation:** Binary x_v + binary y_e. Balance bounds + cut linking. -- [ ] **Step 4.6.1-4.6.3:** Test → Rule → Register - -### Task 4.7: StrongConnectivityAugmentation → ILP -- Model: `src/models/graph/strong_connectivity_augmentation.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8495 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (binary arc selection). -- **Formulation:** Binary y_a + bidirectional multi-commodity flow from root. -- **Helpers:** `flow_conservation`, `big_m_activation` -- [ ] **Step 4.7.1-4.7.3:** Test → Rule → Register - -- [ ] **Step 4.8: Run full graph structure test suite + commit** - -```bash -cargo test --features ilp-solver -- acyclicpartition balanced biclique biconnectivity bounded minimumcut strongconnectivity -cargo clippy --features ilp-solver -git commit -m "feat: add 7 graph structure Tier 3 ILP reductions" -``` - ---- - -## Task 5: Matrix/encoding reductions (5 rules) - -### Task 5.1: BMF → ILP -- Model: `src/models/algebraic/bmf.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8507 -- **ILP type:** `ILP`. **Value:** `Min` (optimization). **Extract:** Direct (factor matrix bits). -- **Formulation:** McCormick for Boolean products, OR-of-ANDs reconstruction, Hamming distance. -- **Helpers:** `mccormick_product` -- [ ] **Step 5.1.1-5.1.3:** Test (optimization round-trip) → Rule → Register - -### Task 5.2: ConsecutiveBlockMinimization → ILP -- Model: `src/models/algebraic/consecutive_block_minimization.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8517 -- **ILP type:** `ILP`. 
**Value:** `Or`. **Extract:** One-hot decode → column permutation (`dims() = vec![num_cols; num_cols]`). -- **Helpers:** `one_hot_decode` -- [ ] **Step 5.2.1-5.2.3:** Test → Rule → Register - -### Task 5.3: ConsecutiveOnesMatrixAugmentation → ILP -- Model: `src/models/algebraic/consecutive_ones_matrix_augmentation.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8527 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode → column permutation. -- [ ] **Step 5.3.1-5.3.3:** Test → Rule → Register - -### Task 5.4: ConsecutiveOnesSubmatrix → ILP -- Model: `src/models/algebraic/consecutive_ones_submatrix.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8537 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Direct (s_j selection bits, `dims() = vec![2; num_cols]`). -- **Formulation:** Binary s_j + auxiliary permutation x_{c,p} + C1P interval constraints. -- [ ] **Step 5.4.1-5.4.3:** Test → Rule → Register - -### Task 5.5: SparseMatrixCompression → ILP -- Model: `src/models/algebraic/sparse_matrix_compression.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8547 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode → shift assignment. -- [ ] **Step 5.5.1-5.5.3:** Test → Rule → Register - -- [ ] **Step 5.6: Run full matrix/encoding test suite + commit** - -```bash -cargo test --features ilp-solver -- bmf consecutiveblock consecutiveones sparse -cargo clippy --features ilp-solver -git commit -m "feat: add 5 matrix/encoding Tier 3 ILP reductions" -``` - ---- - -## Task 6: Sequence/misc reductions (5 rules) - -### Task 6.1: ShortestCommonSupersequence → ILP -- Model: `src/models/misc/shortest_common_supersequence.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8559 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Symbol sequence extraction. 
-- [ ] **Step 6.1.1-6.1.3:** Test → Rule → Register - -### Task 6.2: StringToStringCorrection → ILP -- Model: `src/models/misc/string_to_string_correction.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8569 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** Operation indicator → scalar operation code. -- [ ] **Step 6.2.1-6.2.3:** Test → Rule → Register - -### Task 6.3: PaintShop → ILP -- Model: `src/models/misc/paintshop.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8579 -- **ILP type:** `ILP`. **Value:** `Min` (optimization). **Extract:** Direct (x_i first-occurrence color bits, `dims() = vec![2; num_cars]`). -- [ ] **Step 6.3.1-6.3.3:** Test (optimization round-trip) → Rule → Register - -### Task 6.4: IsomorphicSpanningTree → ILP -- Model: `src/models/graph/isomorphic_spanning_tree.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8589 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot decode → bijection (`dims() = vec![n; n]`). -- **Formulation:** Pure bijection x_{u,v} with non-edge constraints (no flow needed). -- [ ] **Step 6.4.1-6.4.3:** Test → Rule → Register - -### Task 6.5: RootedTreeStorageAssignment → ILP -- Model: `src/models/set/rooted_tree_storage_assignment.rs` -- Paper: search for `#reduction-rule("ProblemName", "ILP")` ~line 8599 -- **ILP type:** `ILP`. **Value:** `Or`. **Extract:** One-hot parent decode → parent array (`dims() = vec![n; n]`). -- **Formulation:** Binary p_{v,u} parent indicators + integer depths + subset path extension costs. 
-- [ ] **Step 6.5.1-6.5.3:** Test → Rule → Register - -- [ ] **Step 6.6: Run full sequence/misc test suite + commit** - -```bash -cargo test --features ilp-solver -- shortestcommon stringtostring paintshop isomorphicspanning rootedtreestorage -cargo clippy --features ilp-solver -git commit -m "feat: add 5 sequence/misc Tier 3 ILP reductions" -``` - ---- - -## Task 7: Final verification and PR - -- [ ] **Step 7.1: Full test suite** - -```bash -make check -cargo test --features ilp-solver -``` - -- [ ] **Step 7.2: Paper completeness check** - -```bash -make paper -``` -Paper entries are already committed. Verify no new completeness warnings after Rust reductions are registered (the `#[reduction]` macro registrations should match the paper's `reduction-rule` entries). - -- [ ] **Step 7.3: Coverage check** - -```bash -make coverage -``` -Verify >95% coverage on new code. - -- [ ] **Step 7.4: Final commit and PR** - -```bash -git add -A -git commit -m "feat: add 39 Tier 3 ILP reductions + shared helpers - -Connects all remaining orphan NP-hard problems to ILP, enabling -DefaultSolver dispatch. Includes shared ilp_helpers module with -McCormick, MTZ, flow conservation, big-M, abs-diff, and minimax -linearization primitives. - -Closes #728, closes #733. -Ref #762." -``` - -Create PR targeting `main`. 
- -- [ ] **Step 7.5: Post-merge cleanup** - -- Update #762 body: move 39 problems from Tier 3 to Tier 1 -- Close #728 (TimetableDesign→ILP) and #733 (IntegralFlowHomologousArcs→ILP) -- File separate issues for deferred: PartialFeedbackEdgeSet→ILP, RootedTreeArrangement→ILP diff --git a/docs/superpowers/specs/2026-03-24-tier3-ilp-reductions-design.md b/docs/superpowers/specs/2026-03-24-tier3-ilp-reductions-design.md deleted file mode 100644 index cbdfc6bc2..000000000 --- a/docs/superpowers/specs/2026-03-24-tier3-ilp-reductions-design.md +++ /dev/null @@ -1,220 +0,0 @@ -# Tier 3 ILP Reductions — Design Spec - -**Date:** 2026-03-24 -**Scope:** One PR adding 39 `→ ILP` reductions for Tier 3 orphan problems, plus a shared helper module. -**Deferred:** PartialFeedbackEdgeSet (no polynomial-size correct ILP for L < n), RootedTreeArrangement (compound `vec![n; 2*n]` config too complex for batch). -**Tracking issue:** #762 (DefaultSolver classification) - ---- - -## Goal - -Connect 39 of 41 isolated Tier 3 problem types to the reduction graph via direct ILP reductions. Two problems (PartialFeedbackEdgeSet, RootedTreeArrangement) are deferred to separate issues due to formulation complexity. - -## Deliverables - -1. `src/rules/ilp_helpers.rs` — shared linearization helpers (with unit tests) -2. 39 new reduction files `src/rules/_ilp.rs` (feature-gated under `#[cfg(feature = "ilp-solver")]`) -3. 39 entries in `src/rules/mod.rs`: module declarations + `canonical_rule_example_specs()` aggregation -4. 39 closed-loop tests in corresponding unit test files -5. 39 `reduction-rule` entries in `docs/paper/reductions.typ` -6. 
Updated #762 body (move Tier 3 → Tier 1) - ---- - -## Problem Classification - -### Value types (optimization vs satisfaction) - -**Optimization** (`Min`/`Max` — use `assert_optimization_round_trip_from_optimization_target`): -- BottleneckTravelingSalesman (`Min`), MinimumTardinessSequencing (`Min`), - QuadraticAssignment (`Min`), BMF (`Min`), PaintShop (`Min`), - SteinerTreeInGraphs (`Min`) - -**Satisfaction** (`Or` — use `assert_satisfaction_round_trip` or satisfaction variant): -- All other 33 problems - -### Config-space encodings requiring non-trivial `extract_solution` - -| Encoding | Problems | Extraction strategy | -|----------|----------|-------------------| -| **Lehmer code** `[n, n-1, ..., 1]` | FlowShopScheduling, MinimumTardinessSequencing, SequencingToMinimizeWeightedTardiness | Sort jobs by ILP completion times → derive permutation → convert to Lehmer code | -| **Vertex permutation** `vec![n; n]` | HamiltonianPath, OptimalLinearArrangement, ConsecutiveBlockMinimization, AcyclicPartition | One-hot decode: for each position/vertex, find the 1 in the assignment row | -| **Arc permutation** `vec![m; m]` | StackerCrane | Position-assignment decode: for each position, find the selected arc | -| **Injection** `vec![m; k]` | SubgraphIsomorphism, QuadraticAssignment | One-hot decode per source element | -| **Parent array** `vec![n; n]` | RootedTreeStorageAssignment | Decode parent-selection one-hot matrix → parent index per node | -| **Bijection** `vec![n; n]` | IsomorphicSpanningTree | One-hot decode tree-vertex → graph-vertex | -| **Compound** `vec![n; 2*n]` | *(RootedTreeArrangement — deferred)* | — | -| **Binary** `vec![2; ...]` | All others | Direct identity or first-k prefix extraction | -| **Ternary** `vec![3; num_edges]` | RuralPostman | Integer flow variable → clamp to {0,1,2} per edge | - ---- - -## Phase 0: Helper Module - -**File:** `src/rules/ilp_helpers.rs` - -Seven helper functions returning `Vec` (or single `LinearConstraint`): - -```rust -/// 
McCormick linearization: y = x_a * x_b (both binary). -/// Returns 3 constraints: y ≤ x_a, y ≤ x_b, y ≥ x_a + x_b - 1. -pub fn mccormick_product(y_idx: usize, x_a: usize, x_b: usize) -> Vec<LinearConstraint> - -/// MTZ topological ordering for directed arcs. -/// For each arc (u→v): o_v - o_u ≥ 1 - M*(x_u + x_v). -/// When both x_u=0, x_v=0 (both kept): enforces o_v > o_u. -/// When either x_u=1 or x_v=1 (removed): constraint is slack. -/// Also emits bound constraints: x_i ≤ 1, 0 ≤ o_i ≤ n-1. -/// Matches the pattern in minimumfeedbackvertexset_ilp.rs. -pub fn mtz_ordering( - arcs: &[(usize, usize)], - n: usize, - x_offset: usize, - o_offset: usize, -) -> Vec<LinearConstraint> - -/// Flow conservation at each node. -/// For each node u: Σ_{(u,v)} f_{uv} - Σ_{(v,u)} f_{vu} = demand[u]. -pub fn flow_conservation( - arcs: &[(usize, usize)], - num_nodes: usize, - flow_idx: &dyn Fn(usize) -> usize, - demand: &[f64], -) -> Vec<LinearConstraint> - -/// Big-M activation: f ≤ M * y. Single constraint. -pub fn big_m_activation(f_idx: usize, y_idx: usize, big_m: f64) -> LinearConstraint - -/// Absolute value linearization: |a - b| ≤ z. -/// Returns 2 constraints: a - b ≤ z, b - a ≤ z. -pub fn abs_diff_le(a_idx: usize, b_idx: usize, z_idx: usize) -> Vec<LinearConstraint> - -/// Minimax: z ≥ expr_i for each expression. -/// Each expr is a list of (var_idx, coeff) terms. -pub fn minimax_constraints(z_idx: usize, expr_terms: &[Vec<(usize, f64)>]) -> Vec<LinearConstraint> - -/// One-hot to index extraction: given n*k binary assignment vars, -/// decode position p → value v where x_{v,p} = 1. -/// Shared by all permutation/assignment-based reductions. -pub fn one_hot_decode(solution: &[usize], num_items: usize, num_slots: usize, var_offset: usize) -> Vec<usize> -``` - -The helper module gets its own unit tests verifying constraint correctness. - -No new types introduced. Existing Tier 1/2 reductions are **not** refactored — helpers are used only by new Tier 3 code. 
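The McCormick identity is cheap to sanity-check exhaustively before any reduction depends on it. The sketch below is standalone illustration, not the crate's `ilp_helpers` API: `LeConstraint` is a hypothetical stand-in for `LinearConstraint`, and the function mirrors only the constraint math. Brute-forcing all eight binary assignments confirms the three inequalities are simultaneously feasible exactly when y = x_a · x_b:

```rust
/// A `sum(coeff * x[idx]) <= rhs` constraint over a dense variable vector.
/// Illustrative stand-in for the crate's `LinearConstraint` type.
struct LeConstraint {
    coeffs: Vec<(usize, f64)>,
    rhs: f64,
}

impl LeConstraint {
    fn holds(&self, x: &[f64]) -> bool {
        let lhs: f64 = self.coeffs.iter().map(|&(i, c)| c * x[i]).sum();
        lhs <= self.rhs + 1e-9
    }
}

/// The three McCormick constraints for y = x_a * x_b (all binary):
/// y <= x_a, y <= x_b, and y >= x_a + x_b - 1 (written x_a + x_b - y <= 1).
fn mccormick_product(y: usize, x_a: usize, x_b: usize) -> Vec<LeConstraint> {
    vec![
        LeConstraint { coeffs: vec![(y, 1.0), (x_a, -1.0)], rhs: 0.0 },
        LeConstraint { coeffs: vec![(y, 1.0), (x_b, -1.0)], rhs: 0.0 },
        LeConstraint { coeffs: vec![(x_a, 1.0), (x_b, 1.0), (y, -1.0)], rhs: 1.0 },
    ]
}

/// Over all 8 binary assignments (x_a, x_b, y), the constraint set is
/// feasible exactly when y equals the product x_a * x_b.
fn mccormick_is_exact() -> bool {
    let cons = mccormick_product(2, 0, 1); // layout: x_a at 0, x_b at 1, y at 2
    (0..8u32).all(|bits| {
        let x = [
            (bits & 1) as f64,        // x_a
            ((bits >> 1) & 1) as f64, // x_b
            ((bits >> 2) & 1) as f64, // y
        ];
        cons.iter().all(|c| c.holds(&x)) == (x[2] == x[0] * x[1])
    })
}
```

The same exhaustive-assignment pattern is the natural shape for the planned unit tests of `abs_diff_le` and `minimax_constraints`.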
- ---- - -## Phase 1: Flow-based (9 reductions) - -| Problem | Value | ILP type | Variables | Key constraints | Helpers | Extract | -|---------|-------|----------|-----------|-----------------|---------|---------| -| IntegralFlowHomologousArcs | `Or` | `i32` | Integer f_a per arc | Capacity, conservation, homologous equality, requirement | `flow_conservation` | Direct (f_a values) | -| IntegralFlowWithMultipliers | `Or` | `i32` | Integer f_a per arc | Capacity, modified conservation (multiplier factors), requirement | `flow_conservation` | Direct | -| PathConstrainedNetworkFlow | `Or` | `i32` | Integer f_p per allowed path | Capacity aggregation per arc, flow requirement | — | Direct | -| DisjointConnectingPaths | `Or` | `bool` | Binary f^k_{uv} per commodity per arc | Conservation per commodity, vertex-disjointness (Σ_k ≤ 1 at non-terminals) | `flow_conservation` | Reconstruct edge selection from flow variables | -| LengthBoundedDisjointPaths | `Or` | `i32` | Binary f^k_{uv} + integer hop h^k_v per commodity | Conservation, disjointness, hop count h^k_v ≤ L per commodity | `flow_conservation` | Reconstruct edge selection from flow variables | -| MixedChinesePostman | `Or` | `i32` | Integer traversal t_a + binary orientation d_e | Euler balance (in = out), required edge/arc coverage ≥ 1 | `flow_conservation` | Direct (traversal counts) | -| RuralPostman | `Or` | `i32` | Integer t_e ∈ {0,1,2} per edge (traversal multiplicity) | Required edge coverage (t_e ≥ 1), Euler balance (even degree at each vertex), connectivity via flow, total cost ≤ bound | `flow_conservation`, `big_m_activation` | Direct (t_e values map to `dims() = vec![3; num_edges]`) | -| StackerCrane | `Or` | `i32` | Binary x_{a,k} (arc a at position k) + shortest-path cost auxiliaries | Position-assignment (each position gets one required arc, each arc used once), inter-arc connection cost via precomputed shortest paths, total ≤ bound | `big_m_activation` | One-hot decode → arc permutation (`dims() = 
vec![m; m]`) | -| SteinerTreeInGraphs | `Min` | `bool` | Binary y_e + multi-commodity flow f^t_{uv} | Conservation, capacity linking (same pattern as SteinerTree→ILP); minimize Σ w_e·y_e | `flow_conservation`, `big_m_activation` | Direct (edge selection) | - ---- - -## Phase 2: Scheduling (7 reductions) - -All scheduling problems with Lehmer-code configs share a common extraction pattern: ILP ordering variables → sort to get permutation → convert permutation to Lehmer code. - -| Problem | Value | ILP type | Variables | Key constraints | Helpers | Extract | -|---------|-------|----------|-----------|-----------------|---------|---------| -| FlowShopScheduling | `Or` | `i32` | Binary y_{ij} (job i before j) + integer C_{jm} (completion on machine m) | Machine precedence: C_{j,m+1} ≥ C_{j,m} + p_{j,m+1}; ordering via big-M; makespan ≤ deadline | `big_m_activation` | Completion times → sort → Lehmer code | -| MinimumTardinessSequencing | `Min` | `i32` | Binary y_{ij} + integer C_j | Ordering via big-M, precedence constraints; objective: minimize Σ tardy_j (binary indicators for C_j > d_j) | `big_m_activation` | Completion times → sort → Lehmer code | -| ResourceConstrainedScheduling | `Or` | `bool` | Binary x_{jt} (job j starts at time t) | One start per job, precedence, resource capacity per period, deadline | — | Time-indexed decode → Lehmer code | -| SequencingToMinimizeMaximumCumulativeCost | `Or` | `i32` | Binary y_{ij} + integer C_j | Ordering via big-M; cumulative cost ≤ bound (feasibility, not minimax) | `big_m_activation` | Completion times → sort → Lehmer code | -| SequencingToMinimizeWeightedTardiness | `Or` | `i32` | Binary y_{ij} + integer C_j | Ordering via big-M; Σ w_j * max(0, C_j - d_j) ≤ bound (feasibility) | `big_m_activation` | Completion times → sort → Lehmer code | -| SequencingWithReleaseTimesAndDeadlines | `Or` | `bool` | Binary x_{jt} (job j at time t) | Release: no start before r_j, deadline: finish by d_j, non-overlap | — | Time-indexed 
decode → Lehmer code | -| TimetableDesign | `Or` | `bool` | Binary x_{c,t,h} (craftsman c, task t, period h) | Craftsman exclusivity, task exclusivity, requirement satisfaction | — | Direct (binary) | - ---- - -## Phase 3: Position/Assignment + McCormick (6 reductions) - -| Problem | Value | ILP type | Variables | Key constraints | Helpers | Extract | -|---------|-------|----------|-----------|-----------------|---------|---------| -| HamiltonianPath | `Or` | `bool` | Binary x_{v,k} (vertex v at position k) | Row/column assignment, adjacency: McCormick for consecutive pairs | `mccormick_product` | One-hot decode → vertex permutation (`dims() = vec![n; n]`) | -| BottleneckTravelingSalesman | `Min` | `i32` | Binary x_{v,k} + integer z (bottleneck) | TSP assignment + z ≥ w(u,v) for each used edge (McCormick); minimize z | `mccormick_product`, `minimax_constraints` | Edge selection from assignment matrix (`dims() = vec![2; num_edges]`) | -| LongestCircuit | `Or` | `bool` | Binary y_e (edge selection) + binary s_v (vertex on circuit) + flow vars | Degree: Σ_{e∋v} y_e = 2·s_v; size: Σ y_e ≥ 3; connectivity via root-flow on selected edges; length: Σ w_e·y_e ≥ B | `flow_conservation` | Direct (y_e binary edge vector, `dims() = vec![2; num_edges]`) | -| QuadraticAssignment | `Min` | `bool` | Binary x_{i,p} (facility i at location p) | Assignment + McCormick for x_{i,p}·x_{j,q}; minimize Σ C_{ij}·D_{f(i),f(j)} | `mccormick_product` | One-hot decode → facility-to-location injection (`dims() = vec![num_locations; num_facilities]`) | -| OptimalLinearArrangement | `Or` | `i32` | Binary x_{v,p} + integer z_{uv} per edge | Assignment + z_{uv} ≥ |π(u)-π(v)| via abs_diff; Σ z_{uv} ≤ bound | `abs_diff_le` | One-hot decode → vertex-to-position (`dims() = vec![n; n]`) | -| SubgraphIsomorphism | `Or` | `bool` | Binary x_{v,u} (pattern v → host u) | Injection (each pattern vertex maps to exactly 1 host vertex, each host vertex used ≤ 1 time) + edge preservation: for each pattern edge 
(v,w) and host non-edge (u,u'), x_{v,u} + x_{w,u'} ≤ 1 (no McCormick needed) | — | One-hot decode → injection (`dims() = vec![n_host; n_pattern]`) | - ---- - -## Phase 4: Graph structure (7 reductions) - -| Problem | Value | ILP type | Variables | Key constraints | Helpers | Extract | -|---------|-------|----------|-----------|-----------------|---------|---------| -| AcyclicPartition | `Or` | `i32` | Binary x_{v,c} (vertex v in class c) + integer o_c (class ordering) + binary s_{uv,c} (same-class indicators per arc per class) | Assignment (Σ_c x_{v,c} = 1); weight bound per class; cost bound on inter-class arcs; same-class: s_{uv,c} via McCormick on x_{u,c}·x_{v,c}; DAG: for each arc (u→v), o_v_class - o_u_class ≥ 1 - M·Σ_c s_{uv,c} | `mccormick_product` | One-hot decode x_{v,c} → partition label (`dims() = vec![n; n]`) | -| BalancedCompleteBipartiteSubgraph | `Or` | `bool` | Binary x_v (side A), y_v (side B) | Balance: Σx = Σy = k; completeness: McCormick for x_u·y_v on non-edges | `mccormick_product` | Direct (binary) | -| BicliqueCover | `Or` | `bool` | Binary z_{v,j} (vertex v in biclique j) | Biclique validity via McCormick, edge coverage | `mccormick_product` | Direct (binary) | -| BiconnectivityAugmentation | `Or` | `i32` | Binary y_e (add edge e) + flow vars for 2-vertex-connectivity | For each vertex v: removing v must leave graph connected. 
Formulated via flow: for each vertex v and each pair (s,t) of v's neighbors, unit flow from s to t avoiding v, through original + selected edges | `flow_conservation`, `big_m_activation` | Direct (binary edge selection, `dims() = vec![2; num_potential_edges]`) | -| BoundedComponentSpanningForest | `Or` | `i32` | Binary y_e (edge in forest) + integer label l_v (component root ID) + flow vars | Forest structure (no cycles via MTZ on directed version); component assignment via labels; per-component total vertex **weight** ≤ B (not size) | `flow_conservation`, `mtz_ordering` | Edge selection → component label decode (`dims() = vec![2; num_edges]` or label-based) | -| MinimumCutIntoBoundedSets | `Or` | `bool` | Binary x_v (partition side) + binary y_e (cut edge) | Balance: L ≤ Σx_v ≤ U; cut linking: y_e ≥ x_u - x_v and y_e ≥ x_v - x_u; Σ w_e·y_e ≤ bound | — | Direct (binary partition) | -| StrongConnectivityAugmentation | `Or` | `i32` | Binary y_a (add arc) + multi-commodity flow | For each ordered pair (s,t): unit flow from s to t through original + selected arcs | `flow_conservation`, `big_m_activation` | Direct (binary arc selection) | - ---- - -## Phase 5: Matrix/encoding (5 reductions) - -| Problem | Value | ILP type | Variables | Key constraints | Helpers | Extract | -|---------|-------|----------|-----------|-----------------|---------|---------| -| BMF | `Min` | `bool` | Binary a_{ik}, b_{kj} + auxiliary p_{ijk} (McCormick for a_{ik}·b_{kj}) + binary w_{ij} (reconstructed entry) | p_{ijk} via McCormick; w_{ij} ≥ p_{ijk} for all k (OR-of-ANDs); w_{ij} ≤ Σ_k p_{ijk}; minimize Σ |A_{ij} - w_{ij}| | `mccormick_product` | Direct (binary factor matrices) | -| ConsecutiveBlockMinimization | `Or` | `bool` | Binary x_{c,p} (column c at position p) + binary b_{r,p} (block start at row r, position p) | Column permutation (one-hot assignment); block detection: b_{r,p} activated when row r transitions 0→1 at position p; Σ blocks ≤ bound | — | One-hot decode → column 
permutation (`dims() = vec![num_cols; num_cols]`) | -| ConsecutiveOnesMatrixAugmentation | `Or` | `bool` | Binary x_{c,p} (column permutation) + binary f_{r,j} (flip entry r,j) | Permutation + consecutive-ones property after flips; minimize/bound total flips | — | One-hot decode → column permutation (`dims() = vec![num_cols; num_cols]`) | -| ConsecutiveOnesSubmatrix | `Or` | `bool` | Binary s_j (select column j) + auxiliary binary x_{c,p} (column permutation of selected columns) | Exactly K columns selected (Σ s_j = K); permutation of selected columns; C1P enforced on every row within selected+permuted columns. s_j at indices 0..num_cols (extracted directly). x_{c,p} are auxiliary. | — | Direct (s_j binary selection, `dims() = vec![2; num_cols]`) | -| SparseMatrixCompression | `Or` | `bool` | Binary x_{i,g} (row i in group g) | Row-to-group assignment (one group per row); compatibility: conflicting rows not in same group; num groups ≤ K | — | One-hot decode → group assignment | - ---- - -## Phase 6: Sequence/misc (5 reductions) - -| Problem | Value | ILP type | Variables | Key constraints | Helpers | Extract | -|---------|-------|----------|-----------|-----------------|---------|---------| -| ShortestCommonSupersequence | `Or` | `bool` | Binary x_{p,a} (position p has symbol a) + match vars m_{s,j,p} | Symbol assignment + monotone matching for each input string; total length ≤ bound | — | Symbol sequence extraction | -| StringToStringCorrection | `Or` | `bool` | Binary d_{i,j,op} (edit operation at alignment point) | Alignment grid + operation exclusivity + cost ≤ bound | — | Direct (binary operation selection) | -| PaintShop | `Min` | `bool` | Binary x_i (color for car i's first occurrence) + binary c_p (color-change indicator at position p) | Pairing: second occurrence gets 1-x_i; c_p ≥ color_p - color_{p-1} and c_p ≥ color_{p-1} - color_p; minimize Σ c_p | — | Direct (x_i binary, `dims() = vec![2; num_cars]`) | -| IsomorphicSpanningTree | `Or` | `bool` | Binary 
x_{u,v} (tree vertex u maps to graph vertex v) | Bijection: one-hot per tree vertex and per graph vertex; edge preservation: for each tree edge {u,w} and graph non-edge {v,z}, x_{u,v} + x_{w,z} ≤ 1 (no McCormick or flow needed — bijection preserving tree edges automatically produces a spanning tree) | — | One-hot decode → bijection (`dims() = vec![n; n]`) | -| RootedTreeStorageAssignment | `Or` | `i32` | Binary p_{v,u} (node v's parent is u) + integer depth d_v | Tree structure: each non-root node has exactly one parent, acyclicity via depth ordering (d_v > d_u if u is parent of v), connectivity; per-subset path cost ≤ bound | — | One-hot parent decode → parent array (`dims() = vec![n; n]`) | - ---- - -## Testing Strategy - -- Each reduction gets one `test__to_ilp_closed_loop` test -- **Optimization problems** (BottleneckTSP, MinTardiness, QAP, BMF, PaintShop): use `assert_optimization_round_trip_from_optimization_target` -- **Satisfaction problems** (all others): use the satisfaction round-trip variant -- Test instances should be small enough for brute-force cross-check (n ≤ 6-8) -- All tests in `src/unit_tests/rules/_ilp.rs` -- Helper module gets standalone unit tests in `src/unit_tests/rules/ilp_helpers.rs` - -## Integration Checklist (per reduction) - -Each new reduction file requires: -1. `#[cfg(feature = "ilp-solver")] pub(crate) mod _ilp;` in `src/rules/mod.rs` -2. `specs.extend(_ilp::canonical_rule_example_specs());` in the `#[cfg(feature = "ilp-solver")]` block of `canonical_rule_example_specs()` in `src/rules/mod.rs` -3. `#[reduction(overhead = { ... })]` with verified overhead expressions referencing source-type getter methods -4. 
Closed-loop test + paper entry - -## Paper - -Each reduction gets a `reduction-rule` entry in `docs/paper/reductions.typ` with: -- Rule statement describing the formulation -- Proof sketch (variable layout, constraint count, correctness argument) -- Example flag set to `true` where pedagogically useful - -## Post-merge - -- Update #762 body: move 39 problems from Tier 3 to Tier 1 -- Close #728 (TimetableDesign→ILP) and #733 (IntegralFlowHomologousArcs→ILP) -- File separate issues for deferred problems: PartialFeedbackEdgeSet→ILP, RootedTreeArrangement→ILP From 92f269b2ad915018e7a74c6dfa3c1e2687889de9 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:34:06 +0000 Subject: [PATCH 06/23] docs: add worked example for HamiltonianCircuit -> RuralPostman reduction Upgrade the prose-only reduction-rule entry to include data-driven example with load-example binding, pred-commands, three construction steps, and multiplicity note. Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 50 +++++++++++++++++++++++++++++++++++++-- 1 file changed, 48 insertions(+), 2 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 1229ec751..1099ee689 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13091,7 +13091,28 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The mapping $c[i]$ (graph vertex assigned to tree vertex $i$) is the Hamiltonian path vertex ordering directly. 
] -#reduction-rule("HamiltonianCircuit", "BottleneckTravelingSalesman")[ +#let hc_btsp = load-example("HamiltonianCircuit", "BottleneckTravelingSalesman") +#let hc_btsp_sol = hc_btsp.solutions.at(0) +#reduction-rule("HamiltonianCircuit", "BottleneckTravelingSalesman", + example: true, + example-caption: [$n = #graph-num-vertices(hc_btsp.source.instance)$ vertices, $|E| = #graph-num-edges(hc_btsp.source.instance)$ edges: HC $arrow.r$ BTSP], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_btsp.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_btsp) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_btsp_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical HC instance is a graph with $n = #graph-num-vertices(hc_btsp.source.instance)$ vertices and $|E| = #graph-num-edges(hc_btsp.source.instance)$ edges. + + *Step 2 -- Construction.* Build the complete graph $K_#graph-num-vertices(hc_btsp.target.instance)$ with #graph-num-edges(hc_btsp.target.instance) edges. Each original edge gets weight 1 and each non-edge gets weight 2. The target edge weights are $(#hc_btsp.target.instance.edge_weights.map(str).join(", "))$, where the #hc_btsp.target.instance.edge_weights.filter(w => w == 1).len() entries equal to 1 correspond to the $|E| = #graph-num-edges(hc_btsp.source.instance)$ source edges and the #hc_btsp.target.instance.edge_weights.filter(w => w == 2).len() entries equal to 2 correspond to the non-edges. + + *Step 3 -- Verify a solution.* The source tour visits vertices in order $(#hc_btsp_sol.source_config.map(str).join(", "))$. In the target, the selected edges are those with indicator 1 in $(#hc_btsp_sol.target_config.map(str).join(", "))$, all of which have weight 1, giving bottleneck cost $= 1$ #sym.checkmark. + + *Multiplicity:* The fixture stores one canonical witness. 
The $#graph-num-vertices(hc_btsp.source.instance)$-cycle has $#graph-num-vertices(hc_btsp.source.instance)$ rotations $times$ 2 reflections $= #(2 * graph-num-vertices(hc_btsp.source.instance))$ Hamiltonian circuits in total. + ], +)[ Construct a complete weighted graph with weight 1 on edges of $G$ and weight 2 on non-edges. A Hamiltonian tour of bottleneck cost 1 exists iff $G$ has a Hamiltonian circuit. ][ _Construction._ Let $G = (V, E)$ with $n = |V|$. Build $K_n$ with $w(u, v) = 1$ if ${u,v} in E$, else $w(u, v) = 2$. @@ -13191,7 +13212,32 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Follow unique successors from vertex 0 to recover the Hamiltonian permutation. ] -#reduction-rule("HamiltonianCircuit", "StackerCrane")[ +#let hc_sc = load-example("HamiltonianCircuit", "StackerCrane") +#let hc_sc_sol = hc_sc.solutions.at(0) +#let hc_sc_n = graph-num-vertices(hc_sc.source.instance) +#let hc_sc_source_edges = hc_sc.source.instance.graph.edges +#let hc_sc_target_arcs = hc_sc.target.instance.arcs +#let hc_sc_target_edges = hc_sc.target.instance.edges +#reduction-rule("HamiltonianCircuit", "StackerCrane", + example: true, + example-caption: [Cycle $C_#hc_sc_n$ ($n = #hc_sc_n$): vertex splitting to Stacker Crane], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_sc.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_sc) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_sc_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical source fixture is the cycle $C_#hc_sc_n$ on vertices ${0, dots, #(hc_sc_n - 1)}$ with #hc_sc_source_edges.len() edges: #hc_sc_source_edges.map(e => $(#e.at(0), #e.at(1))$).join(", "). 
The stored Hamiltonian-circuit witness is the permutation $[#hc_sc_sol.source_config.map(str).join(", ")]$.\ + + *Step 2 -- Construction.* Each vertex $v_i$ splits into $v_i^"in" = 2i$ and $v_i^"out" = 2i + 1$, giving $2 dot #hc_sc_n = #hc_sc.target.instance.num_vertices$ vertices. The reduction creates #hc_sc_target_arcs.len() mandatory arcs: #hc_sc_target_arcs.map(a => $(#a.at(0) arrow #a.at(1))$).join(", "), each of length 1. For each source edge, two undirected connector edges of length 1 are added, giving $2 dot #hc_sc_source_edges.len() = #hc_sc_target_edges.len()$ connector edges: #hc_sc_target_edges.map(e => ${#e.at(0), #e.at(1)}$).join(", ").\ + + *Step 3 -- Verify a solution.* The stored target configuration $[#hc_sc_sol.target_config.map(str).join(", ")]$ is a permutation of arcs. Following this order: arc #hc_sc_sol.target_config.at(0) serves $(#hc_sc_target_arcs.at(hc_sc_sol.target_config.at(0)).at(0) arrow #hc_sc_target_arcs.at(hc_sc_sol.target_config.at(0)).at(1))$, then a connector edge leads to the next arc, and so on. The tour traverses $#hc_sc_target_arcs.len()$ arcs (cost $#hc_sc_target_arcs.len()$) and $#hc_sc_target_arcs.len()$ connector edges (cost $#hc_sc_target_arcs.len()$), for total cost $2 dot #hc_sc_n = #(hc_sc_n * 2)$. Recovering the source witness: arc $i$ corresponds to vertex $i$, so the permutation $[#hc_sc_sol.source_config.map(str).join(", ")]$ is the Hamiltonian circuit #sym.checkmark\ + + *Multiplicity:* The fixture stores one canonical witness. For $C_#hc_sc_n$ there are $#hc_sc_n times 2 = #(hc_sc_n * 2)$ directed Hamiltonian circuits (choice of start vertex and direction), each yielding a distinct arc-service permutation. + ], +)[ Each vertex $v_i$ splits into $(v_i^"in", v_i^"out")$ with a mandatory directed arc of length 1. Undirected connector edges of length 1 encode original graph edges. A tour of cost $2n$ exists iff a Hamiltonian circuit exists. ][ _Construction._ Given $G = (V, E)$ with $n = |V|$. 
Create $2n$ vertices: $v_i^"in" = 2i$, $v_i^"out" = 2i + 1$. Add $n$ mandatory arcs $(v_i^"in", v_i^"out")$ of length 1. For each ${v_i, v_j} in E$: connector edges $(v_i^"out", v_j^"in")$ and $(v_j^"out", v_i^"in")$ of length 1. From 4530ec7bfa076c70a3d9b220cac529759e666a10 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:35:12 +0000 Subject: [PATCH 07/23] docs(paper): add worked example for HamiltonianCircuit -> HamiltonianPath reduction Upgrade the prose-only reduction-rule entry with load-example data binding, pred-commands reproducibility block, and a 3-step worked example showing the vertex-splitting construction on a C4 cycle graph. Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 27 ++++++++++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 1099ee689..8269b7e2a 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13051,7 +13051,32 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ For each source variable $x_i$, compute $x_i = sum_(j=0)^(K_i - 1) w_(i j) y_(i j)$ from the binary solution. 
] -#reduction-rule("HamiltonianCircuit", "HamiltonianPath")[ +#let hc_hp = load-example("HamiltonianCircuit", "HamiltonianPath") +#let hc_hp_sol = hc_hp.solutions.at(0) +#let hc_hp_n = graph-num-vertices(hc_hp.source.instance) +#let hc_hp_source_edges = hc_hp.source.instance.graph.edges +#let hc_hp_target_edges = hc_hp.target.instance.graph.edges +#let hc_hp_target_n = graph-num-vertices(hc_hp.target.instance) +#reduction-rule("HamiltonianCircuit", "HamiltonianPath", + example: true, + example-caption: [Cycle $C_#hc_hp_n$ ($n = #hc_hp_n$): split $v_0$ into two copies with pendants], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_hp.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_hp) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_hp_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical source fixture is the cycle $C_#hc_hp_n$ on vertices ${0, dots, #(hc_hp_n - 1)}$ with #hc_hp_source_edges.len() edges: #hc_hp_source_edges.map(e => $(#e.at(0), #e.at(1))$).join(", "). The stored Hamiltonian-circuit witness is the permutation $[#hc_hp_sol.source_config.map(str).join(", ")]$.\ + + *Step 2 -- Construction.* Fix $v_0 = 0$. Its neighbors in the source are ${1, 3}$ ($deg = 2$). Introduce $v' = #hc_hp_n$, $s = #(hc_hp_n + 1)$, $t = #(hc_hp_n + 2)$. The target graph $G'$ has $#hc_hp_target_n = #hc_hp_n + 3$ vertices and #hc_hp_target_edges.len() edges: #hc_hp_target_edges.map(e => $(#e.at(0), #e.at(1))$).join(", "). The $#(hc_hp_target_edges.len() - hc_hp_source_edges.len())$ new edges are the duplicated adjacencies of $v'$ plus the two pendant edges.\ + + *Step 3 -- Verify a solution.* The stored target Hamiltonian-path permutation is $[#hc_hp_sol.target_config.map(str).join(", ")]$, visiting every vertex of $G'$ exactly once. 
The path starts at pendant $s = #hc_hp_sol.target_config.at(0)$ and ends at pendant $t = #hc_hp_sol.target_config.at(hc_hp_target_n - 1)$. Dropping $s$ at the front and the last two vertices $v', t$ at the back gives $[#hc_hp_sol.target_config.slice(1, hc_hp_target_n - 2).map(str).join(", ")]$, which is the source Hamiltonian circuit $[#hc_hp_sol.source_config.map(str).join(", ")]$.\ + + *Multiplicity:* The fixture stores one canonical witness. For $C_#hc_hp_n$ there are $#hc_hp_n times 2 = #(hc_hp_n * 2)$ directed Hamiltonian circuits (choice of start vertex and direction), each yielding a distinct target path. + ], +)[ To decide whether $G = (V, E)$ contains a Hamiltonian circuit, we split an arbitrary vertex $v_0$ into two copies and attach a private pendant to each copy, forcing any Hamiltonian path in the expanded graph to enter through one pendant, traverse the original circuit, and exit through the other. ][ _Construction._ Let $G = (V, E)$ with $n = |V|$ and $m = |E|$. Fix vertex $v_0 = 0$. Introduce three new vertices: a duplicate $v'$ of $v_0$, a pendant $s$, and a pendant $t$. Form $G' = (V', E')$ where $V' = V union {v', s, t}$, and $E'$ contains (i) every original edge of $E$; (ii) an edge ${v', u}$ for each neighbor $u$ of $v_0$ in $G$; (iii) the pendant edge ${s, v_0}$; and (iv) the pendant edge ${t, v'}$. Thus $|V'| = n + 3$ and $|E'| = m + deg(v_0) + 2$. From a03be86f6a1fe37203be0042cc3bb77ba8e84c44 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:35:52 +0000 Subject: [PATCH 08/23] docs(paper): add worked example for HamiltonianCircuit -> BottleneckTravelingSalesman reduction Add data-driven example with load-example binding, pred-commands block, step-by-step construction walkthrough, and solution verification. 
Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 33 ++++++++++++++++++++++++++++++++- 1 file changed, 32 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 8269b7e2a..343388d27 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13217,7 +13217,38 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ For selected vertex $v$: clause $j = floor(v / 3)$, position $p = v mod 3$. If literal $ell_(j,p) = x_i$ set $x_i = 1$; if $ell_(j,p) = not x_i$ set $x_i = 0$. ] -#reduction-rule("HamiltonianCircuit", "BiconnectivityAugmentation")[ +#let hc_bicon = load-example("HamiltonianCircuit", "BiconnectivityAugmentation") +#let hc_bicon_sol = hc_bicon.solutions.at(0) +#reduction-rule("HamiltonianCircuit", "BiconnectivityAugmentation", + example: true, + example-caption: [4-cycle graph ($n = #graph-num-vertices(hc_bicon.source.instance)$, $|E| = #graph-num-edges(hc_bicon.source.instance)$): HC $arrow.r$ biconnectivity augmentation], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_bicon.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_bicon) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_bicon_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The source graph $G$ has $n = #graph-num-vertices(hc_bicon.source.instance)$ vertices and $|E| = #graph-num-edges(hc_bicon.source.instance)$ edges: $E = {#hc_bicon.source.instance.graph.edges.map(e => "(" + e.map(str).join(", ") + ")").join(", ")}$. This is a 4-cycle, which admits a Hamiltonian circuit. + + *Step 2 -- Construction.* Start with the edgeless graph $H = (V, emptyset)$ on $n = #graph-num-vertices(hc_bicon.target.instance)$ vertices. For each pair ${u, v}$, create a potential edge with weight 1 if ${u,v} in E$ and weight 2 otherwise. 
This yields #hc_bicon.target.instance.potential_weights.len() potential edges: #hc_bicon.target.instance.potential_weights.map(pw => "{" + str(pw.at(0)) + ", " + str(pw.at(1)) + "} (w=" + str(pw.at(2)) + ")").join(", "). The budget is $B = #hc_bicon.target.instance.budget = n$. + + *Step 3 -- Verify a solution.* The canonical Hamiltonian circuit visits vertices in order $(#hc_bicon_sol.source_config.map(str).join(", "))$. The target configuration $bold(x) = (#hc_bicon_sol.target_config.map(str).join(", "))$ selects potential edges #{ + let selected = hc_bicon_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => { + let pw = hc_bicon.target.instance.potential_weights.at(i) + "{" + str(pw.at(0)) + ", " + str(pw.at(1)) + "}" + }) + selected.join(", ") + } — exactly the $n = #graph-num-vertices(hc_bicon.source.instance)$ cycle edges, each of weight 1, for a total cost of #hc_bicon_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => hc_bicon.target.instance.potential_weights.at(i).at(2)).sum() $= n = B$ #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. For the 4-cycle there are $#graph-num-vertices(hc_bicon.source.instance) times 2 = #(2 * graph-num-vertices(hc_bicon.source.instance))$ Hamiltonian-circuit permutations (choice of start vertex and direction), all of which select the same $n$ cycle edges in the target. + ], +)[ Start with the edgeless graph on $n$ vertices. Price original edges at cost 1 and non-edges at cost 2. A budget-$n$ augmentation is achievable iff $G$ has a Hamiltonian circuit (the only way to biconnect with $n$ weight-1 edges is a Hamiltonian cycle). ][ _Construction._ Given $G = (V, E)$ with $n = |V|$, let $H = (V, emptyset)$. For every pair ${u, v}$: potential edge with weight 1 if ${u,v} in E$, else weight 2. Budget $B = n$. 
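The "only a Hamiltonian cycle fits the budget" claim above is small enough to verify exhaustively on the C4 fixture. The following standalone sketch (plain brute force under the stated 1/2 pricing, not the repository's reduction or `pred` code; all names are illustrative) enumerates every potential-edge selection of cost at most n and confirms the four cycle edges are the unique biconnecting choice:

```rust
/// Connectivity of the graph induced on the vertices with `alive[v] == true`.
fn connected(n: usize, edges: &[(usize, usize)], alive: &[bool]) -> bool {
    let start = match (0..n).find(|&v| alive[v]) {
        Some(v) => v,
        None => return true,
    };
    let mut seen = vec![false; n];
    seen[start] = true;
    let mut stack = vec![start];
    while let Some(u) = stack.pop() {
        for &(a, b) in edges {
            for (x, y) in [(a, b), (b, a)] {
                if x == u && alive[y] && !seen[y] {
                    seen[y] = true;
                    stack.push(y);
                }
            }
        }
    }
    (0..n).all(|v| !alive[v] || seen[v])
}

/// Biconnected: connected, and deleting any one vertex keeps it connected.
fn biconnected(n: usize, edges: &[(usize, usize)]) -> bool {
    if !connected(n, edges, &vec![true; n]) {
        return false;
    }
    (0..n).all(|v| {
        let mut alive = vec![true; n];
        alive[v] = false;
        let kept: Vec<(usize, usize)> =
            edges.iter().copied().filter(|&(a, b)| a != v && b != v).collect();
        connected(n, &kept, &alive)
    })
}

/// All potential-edge selections of cost <= n that biconnect the initially
/// edgeless graph, for the C4 source (cycle edges cost 1, diagonals cost 2).
fn budget_n_augmentations() -> Vec<Vec<(usize, usize)>> {
    let n: usize = 4;
    let cycle = [(0, 1), (1, 2), (2, 3), (0, 3)];
    let mut potential: Vec<((usize, usize), u32)> = Vec::new();
    for u in 0..n {
        for v in (u + 1)..n {
            let w = if cycle.contains(&(u, v)) { 1 } else { 2 };
            potential.push(((u, v), w));
        }
    }
    let mut feasible = Vec::new();
    for mask in 0..(1u32 << potential.len()) {
        let sel: Vec<((usize, usize), u32)> = potential
            .iter()
            .enumerate()
            .filter(|&(i, _)| (mask >> i) & 1 == 1)
            .map(|(_, p)| *p)
            .collect();
        let cost: u32 = sel.iter().map(|&(_, w)| w).sum();
        let edges: Vec<(usize, usize)> = sel.iter().map(|&(e, _)| e).collect();
        if cost <= n as u32 && biconnected(n, &edges) {
            feasible.push(edges);
        }
    }
    feasible
}
```

Minimum degree 2 forces at least n edges, and cost ≤ n then forces all of them to have weight 1, which is exactly the argument the brute force reproduces.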
From 313103502329e857d295c8289c334b68467c57b7 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:37:16 +0000 Subject: [PATCH 09/23] docs(paper): add worked example for HamiltonianCircuit -> StrongConnectivityAugmentation reduction Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 343388d27..f785518d3 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13258,7 +13258,33 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Walk the unique cycle from vertex 0 through selected edges to recover the circuit order. ] -#reduction-rule("HamiltonianCircuit", "StrongConnectivityAugmentation")[ +#let hc_sca = load-example("HamiltonianCircuit", "StrongConnectivityAugmentation") +#let hc_sca_sol = hc_sca.solutions.at(0) +#let hc_sca_n = graph-num-vertices(hc_sca.source.instance) +#let hc_sca_candidate_arcs = hc_sca.target.instance.candidate_arcs +#let hc_sca_selected = hc_sca_candidate_arcs.enumerate().filter(((i, _)) => hc_sca_sol.target_config.at(i) == 1).map(((i, a)) => a) +#let hc_sca_w1 = hc_sca_candidate_arcs.filter(a => a.at(2) == 1) +#let hc_sca_w2 = hc_sca_candidate_arcs.filter(a => a.at(2) == 2) +#reduction-rule("HamiltonianCircuit", "StrongConnectivityAugmentation", + example: true, + example-caption: [4-cycle ($n = #hc_sca_n$): HC to budget-#hc_sca_n SCA], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_sca.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_sca) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_sca_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The source graph is the cycle on $#hc_sca_n$ vertices with edges #hc_sca.source.instance.graph.edges.map(e => $(#e.at(0), 
#e.at(1))$).join(", "). The canonical Hamiltonian-circuit witness is the vertex permutation $[#hc_sca_sol.source_config.map(str).join(", ")]$. + + *Step 2 -- Construction.* Start with the empty digraph $D = (V, emptyset)$ on $#hc_sca_n$ vertices. Generate all $#hc_sca_candidate_arcs.len()$ ordered pairs as candidate arcs: #hc_sca_w1.len() weight-1 arcs #hc_sca_w1.map(a => $(#a.at(0) arrow #a.at(1))$).join(", ") corresponding to source edges (both orientations), and #hc_sca_w2.len() weight-2 arcs #hc_sca_w2.map(a => $(#a.at(0) arrow #a.at(1))$).join(", ") for non-edges. Budget $B = #hc_sca.target.instance.bound$. + + *Step 3 -- Verify a solution.* The target configuration $[#hc_sca_sol.target_config.map(str).join(", ")]$ selects arcs #hc_sca_selected.map(a => $(#a.at(0) arrow #a.at(1))$).join(", "), all weight 1. Total cost $= #hc_sca_selected.len() times 1 = #hc_sca_n = B$ #sym.checkmark. These $#hc_sca_n$ arcs form a single directed cycle visiting every vertex, so the augmented digraph is strongly connected. Extracting the circuit: follow successors from vertex 0 to recover $[#hc_sca_sol.source_config.map(str).join(", ")]$ #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. For the 4-cycle there are $#hc_sca_n times 2 = #{hc_sca_n * 2}$ Hamiltonian-circuit permutations (choice of start vertex and direction); rotations reuse the same arcs, so these collapse to just two distinct directed arc sets, one per traversal direction. + ], +)[ Start with the empty digraph on $n$ vertices. Weight-1 candidate arcs correspond to edges of $G$; weight-2 arcs for non-edges. A budget-$n$ augmentation that achieves strong connectivity must select exactly $n$ weight-1 arcs forming a directed Hamiltonian cycle. ][ _Construction._ Given $G = (V, E)$ with $n = |V|$. Build $D = (V, emptyset)$. For every ordered pair $(u, v)$ with $u != v$: candidate arc with weight 1 if ${u,v} in E$, else weight 2. Budget $B = n$.
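The directed construction admits the same kind of external check. A minimal Python sketch, again with illustrative shapes (`candidate_arcs`, `bound`) rather than the actual fixture schema:

```python
# Sketch of HamiltonianCircuit -> StrongConnectivityAugmentation (illustrative shapes).
from itertools import permutations

def hc_to_sca(num_vertices, edges):
    """Candidate arc for every ordered pair: weight 1 over edges, 2 over non-edges."""
    edge_set = {frozenset(e) for e in edges}
    arcs = [
        (u, v, 1 if frozenset((u, v)) in edge_set else 2)
        for u, v in permutations(range(num_vertices), 2)
    ]
    return {"candidate_arcs": arcs, "bound": num_vertices}

def is_strongly_connected(num_vertices, arcs):
    """Every vertex reaches every other via the selected arcs (DFS from each)."""
    adj = {v: [] for v in range(num_vertices)}
    for u, v in arcs:
        adj[u].append(v)
    for s in range(num_vertices):
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) < num_vertices:
            return False
    return True

target = hc_to_sca(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
cycle_arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]  # one directed Hamiltonian cycle
cost = sum(w for u, v, w in target["candidate_arcs"] if (u, v) in cycle_arcs)
```

The four cycle arcs each cost 1 and form a single directed cycle, so the budget-$n$ augmentation is both affordable and strongly connecting.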
From e588b3851929d2cff18a535393fe8e49a60d45ff Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:38:13 +0000 Subject: [PATCH 10/23] docs: add worked example for HamiltonianCircuit -> QuadraticAssignment paper entry Upgrade the prose-only reduction-rule entry to include a data-driven worked example with load-example binding, pred-commands block, and step-by-step construction/verification using the C4 cycle graph fixture. Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 1229ec751..a53dec2d8 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13231,7 +13231,28 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Vertex $v_i$ in independent set iff target solution at arc index $2i+1$ is positive. ] -#reduction-rule("HamiltonianCircuit", "QuadraticAssignment")[ +#let hc_qa = load-example("HamiltonianCircuit", "QuadraticAssignment") +#let hc_qa_sol = hc_qa.solutions.at(0) +#reduction-rule("HamiltonianCircuit", "QuadraticAssignment", + example: true, + example-caption: [Cycle graph $C_#hc_qa.source.instance.graph.num_vertices$ ($n = #hc_qa.source.instance.graph.num_vertices$, $|E| = #hc_qa.source.instance.graph.edges.len()$)], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_qa.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_qa) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_qa_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The graph $G$ has $n = #hc_qa.source.instance.graph.num_vertices$ vertices and edges ${#hc_qa.source.instance.graph.edges.map(e => "(" + str(e.at(0)) + "," + str(e.at(1)) + ")").join(", ")}$, forming a cycle $C_#hc_qa.source.instance.graph.num_vertices$. 
The penalty weight is $omega = n + 1 = #(hc_qa.source.instance.graph.num_vertices + 1)$. + + *Step 2 -- Construction.* The cost matrix $C$ encodes a directed cycle on positions: $c[i][(i+1) mod #hc_qa.source.instance.graph.num_vertices] = 1$, all other entries 0. The distance matrix $D$ encodes graph adjacency: $d[k][l] = 1$ if ${k,l} in E$, $d[k][l] = #(hc_qa.source.instance.graph.num_vertices + 1)$ for non-edges, $d[k][k] = 0$. Both matrices are $#hc_qa.source.instance.graph.num_vertices times #hc_qa.source.instance.graph.num_vertices$, so the QAP has $n = #hc_qa.target.instance.cost_matrix.len()$ facilities and $n = #hc_qa.target.instance.distance_matrix.len()$ locations. + + *Step 3 -- Verify a solution.* The canonical Hamiltonian circuit visits vertices in order $gamma = (#hc_qa_sol.source_config.map(str).join(", "))$. The QAP permutation is the same: $(#hc_qa_sol.target_config.map(str).join(", "))$. The QAP cost is $sum_(i=0)^(n-1) c[i][(i+1) mod n] dot d[gamma(i)][gamma((i+1) mod n)]$. Since $gamma$ maps each position $i$ to vertex $i$, each consecutive pair $(gamma(i), gamma(i+1 mod n))$ is an edge in $G$, contributing $1 dot 1 = 1$. Total cost $= #hc_qa.source.instance.graph.num_vertices = n$ #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. The cycle $C_#hc_qa.source.instance.graph.num_vertices$ has $#hc_qa.source.instance.graph.num_vertices$ rotations and 2 reflections, giving $2n = #(2 * hc_qa.source.instance.graph.num_vertices)$ distinct Hamiltonian circuits; the canonical one is the identity permutation. + ], +)[ Position-adjacency encoded in cost matrix $C$ (directed cycle on positions), graph-adjacency in distance matrix $D$ (1 for edges, $omega = n+1$ for non-edges). QAP optimum equals $n$ iff a Hamiltonian circuit exists. ][ _Construction._ Let $G = (V, E)$ with $n = |V|$ and $omega = n + 1$. Cost matrix: $c[i][j] = 1$ if $j equiv i+1 space (mod n)$, else 0. 
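The matrix encoding and the cost of a candidate permutation can be sketched in a few lines of Python. This is an illustrative check of the construction described above, not the pred library's implementation:

```python
# Sketch of HamiltonianCircuit -> QuadraticAssignment: positions form a directed
# cycle in the cost matrix; graph adjacency (penalty omega = n + 1 on non-edges)
# forms the distance matrix.
def hc_to_qap(num_vertices, edges):
    n, omega = num_vertices, num_vertices + 1
    edge_set = {frozenset(e) for e in edges}
    C = [[1 if j == (i + 1) % n else 0 for j in range(n)] for i in range(n)]
    D = [[0 if k == l else (1 if frozenset((k, l)) in edge_set else omega)
          for l in range(n)] for k in range(n)]
    return C, D

def qap_cost(C, D, perm):
    n = len(C)
    return sum(C[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

C, D = hc_to_qap(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
# The identity permutation traverses the 4-cycle: every consecutive position
# pair maps to an edge, so each of the n unit cost entries contributes 1.
```

A permutation that uses a non-edge (e.g. visiting 0 then 2) picks up the $omega = 5$ penalty, pushing its cost strictly above $n$.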
Distance matrix: $d[k][l] = 0$ if $k = l$; 1 if ${k,l} in E$; $omega$ otherwise. From a205853e2e7b32fa53af2be2f6bd8115cab0db32 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:46:18 +0000 Subject: [PATCH 11/23] docs(paper): add worked example for Partition -> BinPacking reduction Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 136 ++++++++++++++++++++++++++++++++++++-- 1 file changed, 131 insertions(+), 5 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index ca9e60ee2..395715fce 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13096,7 +13096,37 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The subgraph-isomorphism solution $c in {0, dots, n-1}^k$ gives the host vertex assigned to each pattern vertex. The $k$-clique indicator $x in {0,1}^n$ sets $x[c[i]] := 1$ for each $i$. ] -#reduction-rule("Partition", "MultiprocessorScheduling")[ +#let part_mps = load-example("Partition", "MultiprocessorScheduling") +#let part_mps_sol = part_mps.solutions.at(0) +#let part_mps_sizes = part_mps.source.instance.sizes +#let part_mps_n = part_mps_sizes.len() +#let part_mps_total = part_mps_sizes.fold(0, (a, b) => a + b) +#let part_mps_deadline = part_mps.target.instance.deadline +#let part_mps_nproc = part_mps.target.instance.num_processors +#let part_mps_proc0 = part_mps_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => i) +#let part_mps_proc1 = part_mps_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) +#let part_mps_load0 = part_mps_proc0.map(i => part_mps_sizes.at(i)).fold(0, (a, b) => a + b) +#let part_mps_load1 = part_mps_proc1.map(i => part_mps_sizes.at(i)).fold(0, (a, b) => a + b) +#reduction-rule("Partition", "MultiprocessorScheduling", + example: true, + example-caption: [#part_mps_n elements, total sum $S = #part_mps_total$, deadline $D = #part_mps_deadline$], + 
extra: [ + #pred-commands( + "pred create --example " + problem-spec(part_mps.source) + " -o partition.json", + "pred reduce partition.json --to " + target-spec(part_mps) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate partition.json --config " + part_mps_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical Partition instance has sizes $(#part_mps_sizes.map(str).join(", "))$ with total sum $S = #part_mps_total$. A balanced partition requires each subset to sum to $S / 2 = #part_mps_deadline$. + + *Step 2 -- Construction.* The reduction creates #part_mps_n tasks with lengths $(#part_mps.target.instance.lengths.map(str).join(", "))$, sets the number of processors to $m = #part_mps_nproc$, and the deadline to $D = floor(S / 2) = #part_mps_deadline$. No auxiliary variables are introduced: the target has the same #part_mps_n binary coordinates as the source. + + *Step 3 -- Verify a solution.* The canonical witness is $bold(x) = (#part_mps_sol.source_config.map(str).join(", "))$, which is the same binary vector on both sides. Processor 0 receives tasks at indices $\{#part_mps_proc0.map(str).join(", ")\}$ with sizes $(#part_mps_proc0.map(i => str(part_mps_sizes.at(i))).join(", "))$, giving load $#part_mps_load0 <= #part_mps_deadline = D$. Processor 1 receives tasks at indices $\{#part_mps_proc1.map(str).join(", ")\}$ with sizes $(#part_mps_proc1.map(i => str(part_mps_sizes.at(i))).join(", "))$, giving load $#part_mps_load1 <= #part_mps_deadline = D$. Total: $#part_mps_load0 + #part_mps_load1 = #part_mps_total = S$ #sym.checkmark + + *Multiplicity:* The example DB stores one canonical witness. This instance admits other balanced partitions (e.g., swapping the two subsets), but one witness suffices to demonstrate the reduction. + ], +)[ Each element $a_i$ becomes a task of length $a_i$ on $m = 2$ processors with deadline $D = floor(S / 2)$. A balanced partition exists iff a feasible schedule exists. 
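Since the target reuses the source's binary coordinates, the whole mapping fits in a few lines. A minimal Python sketch with assumed sizes (the fixture's actual values are not shown in this hunk):

```python
# Sketch of Partition -> MultiprocessorScheduling (illustrative shapes and data).
def partition_to_mps(sizes):
    S = sum(sizes)
    return {"lengths": list(sizes), "num_processors": 2, "deadline": S // 2}

def loads(sizes, assignment):
    """Per-processor load for a 0/1 assignment vector."""
    load = [0, 0]
    for size, proc in zip(sizes, assignment):
        load[proc] += size
    return load

sizes = [3, 1, 1, 2, 2, 1]                   # S = 10, so deadline D = 5
target = partition_to_mps(sizes)
balanced = loads(sizes, [0, 0, 1, 1, 1, 0])  # both processors meet the deadline
```

A schedule is feasible exactly when both loads stay at or below the deadline, which for total $S$ and $D = S/2$ forces the two loads to be equal.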
][ _Construction._ Let $A = (a_1, dots, a_n)$ with total sum $S = sum_(i=1)^n a_i$. Set task lengths $ell_i = a_i$, number of processors $m = 2$, and deadline $D = floor(S / 2)$. @@ -13187,7 +13217,29 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ For variable $x_i$, set $x_i = 1$ if the cover indicator at position $2i$ is 1. ] -#reduction-rule("Partition", "SequencingWithinIntervals")[ +#let part_swi = load-example("Partition", "SequencingWithinIntervals") +#let part_swi_sol = part_swi.solutions.at(0) +#reduction-rule("Partition", "SequencingWithinIntervals", + example: true, + example-caption: [$n = #part_swi.source.instance.sizes.len()$ elements, $S = #part_swi.source.instance.sizes.sum()$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(part_swi.source) + " -o partition.json", + "pred reduce partition.json --to " + target-spec(part_swi) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate partition.json --config " + part_swi_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The Partition instance has $n = #part_swi.source.instance.sizes.len()$ elements with sizes $(#part_swi.source.instance.sizes.map(str).join(", "))$ and total sum $S = #part_swi.source.instance.sizes.sum()$. The half-sum is $h = floor(S\/2) = #calc.floor(part_swi.source.instance.sizes.sum() / 2)$. + + *Step 2 -- Construct tasks.* The reduction creates $n + 1 = #part_swi.target.instance.lengths.len()$ tasks. Each element $a_i$ becomes a task with release time $r_i = 0$, deadline $d_i = S + 1 = #part_swi.target.instance.deadlines.at(0)$, and length $p_i = a_i$. An enforcer task is pinned at the midpoint: $r = #part_swi.target.instance.release_times.at(part_swi.source.instance.sizes.len())$, $d = #part_swi.target.instance.deadlines.at(part_swi.source.instance.sizes.len())$, $p = 1$. 
This enforcer occupies $[h, h+1)$, splitting the timeline into two blocks of size $h = #calc.floor(part_swi.source.instance.sizes.sum() / 2)$ each. + + *Step 3 -- Verify a solution.* The canonical partition assigns elements to subsets via $(#part_swi_sol.source_config.map(str).join(", "))$: subset 0 = $\{#part_swi_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => str(part_swi.source.instance.sizes.at(i))).join(", ")\}$ (sum #part_swi_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => part_swi.source.instance.sizes.at(i)).sum()), subset 1 = $\{#part_swi_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(part_swi.source.instance.sizes.at(i))).join(", ")\}$ (sum #part_swi_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => part_swi.source.instance.sizes.at(i)).sum()). Both subsets sum to $S\/2 = #calc.floor(part_swi.source.instance.sizes.sum() / 2)$ #sym.checkmark \ + The target schedule has start-time offsets $(#part_swi_sol.target_config.map(str).join(", "))$: subset-0 tasks fill $[0, h)$ and subset-1 tasks fill $[h+1, S+1)$, with the enforcer at $[h, h+1)$ #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. Other valid partitions (e.g.\ swapping the two subsets) exist but are symmetric. + ], +)[ A unit-length enforcer task pinned at $[floor(S/2), floor(S/2)+1)$ splits the timeline into two blocks. A valid schedule exists iff the elements partition into two equal-sum subsets. ][ _Construction._ Let $A = {a_1, dots, a_n}$ with $S = sum a_i$ and $h = floor(S/2)$. Create $n+1$ tasks: element tasks with $r_i = 0$, $d_i = S+1$, $p_i = a_i$; enforcer task with $r = h$, $d = h+1$, $p = 1$. @@ -13329,7 +13381,29 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The arc service permutation directly encodes the Hamiltonian circuit vertex order. 
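The enforcer-task construction for SequencingWithinIntervals can be sketched concretely. The task-record shape below is illustrative, chosen only to show that the enforcer's window has zero slack:

```python
# Sketch of Partition -> SequencingWithinIntervals: element tasks are free
# within [0, S + 1); a unit enforcer is pinned at the midpoint.
def partition_to_swi(sizes):
    S, h = sum(sizes), sum(sizes) // 2
    tasks = [{"release": 0, "deadline": S + 1, "length": a} for a in sizes]
    tasks.append({"release": h, "deadline": h + 1, "length": 1})  # enforcer
    return tasks

sizes = [3, 1, 1, 2, 2, 1]          # S = 10, so h = 5
tasks = partition_to_swi(sizes)
enforcer = tasks[-1]
# The enforcer's window [5, 6) has exactly its own length, pinning it there
# and leaving two blocks of width h = 5 for the element tasks.
slack = (enforcer["deadline"] - enforcer["release"]) - enforcer["length"]
```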
] -#reduction-rule("HamiltonianCircuit", "RuralPostman")[ +#let hc_rp = load-example("HamiltonianCircuit", "RuralPostman") +#let hc_rp_sol = hc_rp.solutions.at(0) +#let hc_rp_n = graph-num-vertices(hc_rp.source.instance) +#reduction-rule("HamiltonianCircuit", "RuralPostman", + example: true, + example-caption: [Cycle $C_#hc_rp_n$ ($n = #hc_rp_n$): vertex splitting to Rural Postman], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hc_rp.source) + " -o hc.json", + "pred reduce hc.json --to " + target-spec(hc_rp) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hc.json --config " + hc_rp_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical HC instance is a cycle $C_#hc_rp_n$ with $n = #hc_rp_n$ vertices and $|E| = #graph-num-edges(hc_rp.source.instance)$ edges. The stored witness is the permutation $(#hc_rp_sol.source_config.map(str).join(", "))$. + + *Step 2 -- Construction.* Each vertex splits into $(v_i^a, v_i^b)$, producing $2n = #graph-num-vertices(hc_rp.target.instance)$ vertices. The target graph has #graph-num-edges(hc_rp.target.instance) edges: #hc_rp.target.instance.required_edges.len() required edges (one per source vertex) and #(graph-num-edges(hc_rp.target.instance) - hc_rp.target.instance.required_edges.len()) connector edges (two per source edge). All edge lengths are 1. + + *Step 3 -- Verify a solution.* The target solution assigns edge multiplicities $(#hc_rp_sol.target_config.map(str).join(", "))$. The tour traverses all #hc_rp.target.instance.required_edges.len() required edges plus #hc_rp_n connector edges, for total cost $= #(2 * hc_rp_n) = 2n$ #sym.checkmark. + + *Multiplicity:* The fixture stores one canonical witness. The $#hc_rp_n$-cycle has $#hc_rp_n$ rotations $times$ 2 reflections $= #(2 * hc_rp_n)$ directed Hamiltonian circuits. + ], +)[ Each vertex splits into two copies connected by a required edge. Original graph edges become connector edges. 
A minimum-cost tour of cost $2n$ traversing all required edges exists iff $G$ has a Hamiltonian circuit. ][ _Construction._ Given $G = (V, E)$ with $n = |V|$. Build $H$ with $2n$ vertices: $v_i^a = 2i$, $v_i^b = 2i+1$. Required edges ${v_i^a, v_i^b}$ of weight 1 ($n$ total). For each ${v_i, v_j} in E$: connector edges ${v_i^b, v_j^a}$ and ${v_j^b, v_i^a}$ of weight 1. @@ -13400,7 +13474,38 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Selected columns identify edges; walk from a degree-1 vertex to recover the path ordering. ] -#reduction-rule("Partition", "BinPacking")[ +#let part_bp = load-example("Partition", "BinPacking") +#let part_bp_sol = part_bp.solutions.at(0) +#let part_bp_sizes = part_bp.source.instance.sizes +#let part_bp_n = part_bp_sizes.len() +#let part_bp_total = part_bp_sizes.fold(0, (a, b) => a + b) +#let part_bp_capacity = part_bp.target.instance.capacity +#let part_bp_bin0 = part_bp_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => i) +#let part_bp_bin1 = part_bp_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) +#let part_bp_bin0_sizes = part_bp_bin0.map(i => part_bp_sizes.at(i)) +#let part_bp_bin1_sizes = part_bp_bin1.map(i => part_bp_sizes.at(i)) +#let part_bp_bin0_sum = part_bp_bin0_sizes.fold(0, (a, b) => a + b) +#let part_bp_bin1_sum = part_bp_bin1_sizes.fold(0, (a, b) => a + b) +#reduction-rule("Partition", "BinPacking", + example: true, + example-caption: [#part_bp_n elements, total sum $S = #part_bp_total$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(part_bp.source) + " -o partition.json", + "pred reduce partition.json --to " + target-spec(part_bp) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate partition.json --config " + part_bp_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical Partition instance has sizes $(#part_bp_sizes.map(str).join(", "))$ 
with total sum $S = #part_bp_total$, so a balanced partition requires each half to sum to $S / 2 = #part_bp_capacity$. + + *Step 2 -- Build the bin-packing instance.* The reduction copies each size into the item-size list and sets the bin capacity to $C = floor(S / 2) = #part_bp_capacity$ with $k = 2$ bins. The target instance has sizes $(#part_bp.target.instance.sizes.map(str).join(", "))$ and capacity $#part_bp_capacity$. No auxiliary variables are introduced, so the target has the same $#part_bp_n$ assignment coordinates as the source. + + *Step 3 -- Verify the canonical witness.* The witness assigns each element to bin 0 or bin 1 via the binary vector $bold(b) = (#part_bp_sol.source_config.map(str).join(", "))$, which equals the target config $(#part_bp_sol.target_config.map(str).join(", "))$. Bin 0 receives elements $\{#part_bp_bin0.map(str).join(", ")\}$ with sizes $(#part_bp_bin0_sizes.map(str).join(", "))$ summing to $#part_bp_bin0_sum <= #part_bp_capacity$ #sym.checkmark. Bin 1 receives elements $\{#part_bp_bin1.map(str).join(", ")\}$ with sizes $(#part_bp_bin1_sizes.map(str).join(", "))$ summing to $#part_bp_bin1_sum <= #part_bp_capacity$ #sym.checkmark. Both bins fit within the capacity, and the total $#part_bp_bin0_sum + #part_bp_bin1_sum = #part_bp_total$ accounts for all items. + + *Multiplicity:* The fixture stores one canonical witness. This instance may admit other balanced partitions, but one witness suffices to demonstrate the reduction. + ], +)[ Items with sizes $a_i$ packed into 2 bins of capacity $floor(S/2)$. A valid 2-bin packing exists iff a balanced partition exists. ][ _Construction._ Set item sizes $s_i = a_i$, bin capacity $C = floor(S/2)$, number of bins $k = 2$. @@ -13470,7 +13575,28 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Decode Lehmer code; walk job sequence incrementing group at each separator. 
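As with the scheduling variant, the bin-packing mapping copies the sizes and fixes the capacity. An illustrative Python sketch with assumed example data:

```python
# Sketch of Partition -> BinPacking: item sizes copied verbatim,
# capacity floor(S / 2), two bins allowed.
def partition_to_binpacking(sizes):
    S = sum(sizes)
    return {"sizes": list(sizes), "capacity": S // 2, "num_bins": 2}

def bin_sums(sizes, assignment):
    """Sum of item sizes landing in each of the two bins."""
    sums = [0, 0]
    for size, b in zip(sizes, assignment):
        sums[b] += size
    return sums

sizes = [3, 1, 1, 2, 2, 1]
target = partition_to_binpacking(sizes)     # capacity 5, two bins
sums = bin_sums(sizes, [0, 0, 1, 1, 1, 0])  # a balanced packing
```

Both bins fit iff the assignment is a balanced partition: with total $S$ and capacity $S/2$, neither bin can absorb any excess.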
] -#reduction-rule("ThreePartition", "JobShopScheduling")[ +#let tp_jss = load-example("ThreePartition", "JobShopScheduling") +#let tp_jss_sol = tp_jss.solutions.at(0) +#reduction-rule("ThreePartition", "JobShopScheduling", + example: true, + example-caption: [$3m = #tp_jss.source.instance.sizes.len()$ elements, $B = #tp_jss.source.instance.bound$, #tp_jss.target.instance.num_processors machines], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(tp_jss.source) + " -o threepartition.json", + "pred reduce threepartition.json --to " + target-spec(tp_jss) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate threepartition.json --config " + tp_jss_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical 3-Partition instance has $3m = #tp_jss.source.instance.sizes.len()$ elements with sizes $S = (#tp_jss.source.instance.sizes.map(str).join(", "))$ and bound $B = #tp_jss.source.instance.bound$. Since $m = #{ let n = tp_jss.source.instance.sizes.len(); str(calc.div-euclid(n, 3)) }$, there are $m - 1 = #{ let n = tp_jss.source.instance.sizes.len(); str(calc.div-euclid(n, 3) - 1) }$ separators, separator length $L = m B + 1 = #{ let n = tp_jss.source.instance.sizes.len(); let m = calc.div-euclid(n, 3); str(m * tp_jss.source.instance.bound + 1) }$, and deadline $D = m B + (m-1) L = #{ let n = tp_jss.source.instance.sizes.len(); let m = calc.div-euclid(n, 3); let B = tp_jss.source.instance.bound; let L = m * B + 1; str(m * B + (m - 1) * L) }$. + + *Step 2 -- Construct jobs.* Each element $a_i$ becomes an element job with two tasks: $(text("machine") 0, a_i)$ then $(text("machine") 1, a_i)$. This gives #tp_jss.target.instance.jobs.len() jobs on #tp_jss.target.instance.num_processors processors. 
The target JSS instance has jobs: #{ let jobs = tp_jss.target.instance.jobs; let descs = jobs.map(j => { let tasks = j.map(t => "(" + str(t.at(0)) + "," + str(t.at(1)) + ")"); "[" + tasks.join(", ") + "]" }); descs.join("; ") }. + + *Step 3 -- Verify a solution.* The source config $(#tp_jss_sol.source_config.map(str).join(", "))$ assigns all #tp_jss.source.instance.sizes.len() elements to group 0. With $m = 1$ and no separators, any ordering of the #tp_jss.source.instance.sizes.len() element tasks on each machine is valid. The target Lehmer code $(#tp_jss_sol.target_config.map(str).join(", "))$ encodes identity orderings on both machines. The resulting makespan is $sum a_i = #{ tp_jss.source.instance.sizes.sum() } = B = #tp_jss.source.instance.bound <= D$ #sym.checkmark, and all #tp_jss.source.instance.sizes.len() elements land in a single processor slot containing exactly 3 elements #sym.checkmark. + + *Multiplicity:* The fixture stores one canonical witness. With $m = 1$ there is only one valid group assignment (all elements in group 0); the $3! times 3!$ task orderings on the two machines yield multiple target configs, but only one source partition. + ], +)[ On 2 machines, $m-1$ long separator jobs on machine 0 force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. ][ _Construction._ Given $(S, B)$ with $|S| = 3m$, $L = m B + 1$, $D = m B + (m-1)L$. Element jobs ($3m$): two tasks $(text("machine") 0, a_i)$ then $(text("machine") 1, a_i)$. Separator jobs ($m-1$): single task $(text("machine") 0, L)$. From 0de33c4e586bc30421d964338c55b30f1171c06a Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:47:47 +0000 Subject: [PATCH 12/23] docs(paper): add worked example for ThreePartition -> JobShopScheduling reduction Add data-driven example with load-example binding, pred-commands block, and step-by-step walkthrough showing the m=1 canonical instance with 3 elements, sizes (4,5,6), bound B=15 on 2 machines. 
Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 85 ++++++++++++++++++++++++++++++++++++++- 1 file changed, 84 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 395715fce..dd1d8adbf 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -9743,6 +9743,55 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Return the same binary selection vector: element $i$ is in the partition subset if and only if it is selected in the Subset Sum witness. ] +#let part_swcp = load-example("Partition", "ShortestWeightConstrainedPath") +#let part_swcp_sol = part_swcp.solutions.at(0) +#let part_swcp_sizes = part_swcp.source.instance.sizes +#let part_swcp_n = part_swcp_sizes.len() +#let part_swcp_total = part_swcp_sizes.fold(0, (a, b) => a + b) +#let part_swcp_half = calc.floor(part_swcp_total / 2) +#let part_swcp_wb = part_swcp.target.instance.weight_bound +#let part_swcp_nv = part_swcp.target.instance.graph.num_vertices +#let part_swcp_ne = part_swcp.target.instance.graph.edges.len() +#let part_swcp_subset0 = part_swcp_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => i) +#let part_swcp_subset1 = part_swcp_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) +#let part_swcp_sum0 = part_swcp_subset0.map(i => part_swcp_sizes.at(i)).fold(0, (a, b) => a + b) +#let part_swcp_sum1 = part_swcp_subset1.map(i => part_swcp_sizes.at(i)).fold(0, (a, b) => a + b) +#let part_swcp_path_length = part_swcp_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => part_swcp.target.instance.edge_lengths.at(i)).fold(0, (a, b) => a + b) +#let part_swcp_path_weight = part_swcp_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => part_swcp.target.instance.edge_weights.at(i)).fold(0, (a, b) => a + b) +#reduction-rule("Partition", "ShortestWeightConstrainedPath", + example: 
true, + example-caption: [#part_swcp_n elements, total sum $S = #part_swcp_total$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(part_swcp.source) + " -o partition.json", + "pred reduce partition.json --to " + target-spec(part_swcp) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate partition.json --config " + part_swcp_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical Partition instance has sizes $(#part_swcp_sizes.map(str).join(", "))$ with total sum $S = #part_swcp_total$, so a balanced witness must hit exactly $floor(S\/2) = #part_swcp_half$. + + *Step 2 -- Build the layered digraph.* The reduction creates a chain of $n + 1 = #part_swcp_nv$ vertices $v_0, dots, v_#part_swcp_n$ with $s = v_0$ and $t = v_#part_swcp_n$. For each element $a_i$, two parallel edges connect $v_i$ to $v_(i+1)$: an _include_ edge (index $2i$) with length $a_i + 1$ and weight $1$, and an _exclude_ edge (index $2i + 1$) with length $1$ and weight $a_i + 1$. This produces $#part_swcp_ne$ edges total. The weight bound is $W = floor(S\/2) + n = #part_swcp_half + #part_swcp_n = #part_swcp_wb$. + + Edge lengths: $(#part_swcp.target.instance.edge_lengths.map(str).join(", "))$. \ + Edge weights: $(#part_swcp.target.instance.edge_weights.map(str).join(", "))$. + + *Step 3 -- Verify the canonical witness.* The source config $bold(x) = (#part_swcp_sol.source_config.map(str).join(", "))$ assigns elements at indices $\{#part_swcp_subset1.map(str).join(", ")\}$ to subset 1 (exclude edges) and $\{#part_swcp_subset0.map(str).join(", ")\}$ to subset 0 (include edges). Subset 0 sums to $#part_swcp_sum0$ and subset 1 sums to $#part_swcp_sum1$, confirming a balanced partition. + + The target config is $(#part_swcp_sol.target_config.map(str).join(", "))$, selecting one edge per layer. 
The path length is $#part_swcp_path_length$ and path weight is $#part_swcp_path_weight <= #part_swcp_wb = W$ #sym.checkmark, so the weight constraint is satisfied. + + *Multiplicity:* The fixture stores one canonical witness. This instance has multiple balanced partitions, but one witness suffices to demonstrate the reduction. + ], +)[ + A layered digraph with include/exclude edges per element: include edge has length $a_i + 1$ and weight 1; exclude has length 1 and weight $a_i + 1$. A feasible path witnesses a balanced partition. +][ + _Construction._ Build layered graph $v_0, dots, v_n$ with $s = v_0$, $t = v_n$. For each element $a_i$: include edge with length $a_i + 1$, weight 1; exclude edge with length 1, weight $a_i + 1$. Weight bound $W = floor(S/2) + n$. + + _Correctness._ ($arrow.r.double$) A balanced partition uses include edges for one subset and exclude for the other, achieving feasible weight and minimum length. ($arrow.l.double$) The weight constraint forces included elements to sum to at least $ceil(S/2)$; minimizing length forces equality at $ceil(S/2)$. + + _Solution extraction._ Element $i$ assigned to subset 0 if include edge selected, subset 1 if exclude edge selected. +] + #let ks_qubo = load-example("Knapsack", "QUBO") #let ks_qubo_sol = ks_qubo.solutions.at(0) #let ks_qubo_num_items = ks_qubo.source.instance.weights.len() @@ -13555,7 +13604,41 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Decode the Lehmer code, simulate the schedule tracking start times; assign elements to groups by $floor("start" / (B+1))$.
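The include/exclude edge accounting is easy to verify numerically. A minimal Python sketch with assumed example sizes, using the length/weight scheme described above (the flat `edge_lengths`/`edge_weights` layout is illustrative):

```python
# Sketch of Partition -> ShortestWeightConstrainedPath: per element, an include
# edge (length a_i + 1, weight 1) at index 2i and an exclude edge (length 1,
# weight a_i + 1) at index 2i + 1 join consecutive layer vertices.
def partition_to_swcp(sizes):
    n, S = len(sizes), sum(sizes)
    lengths, weights = [], []
    for a in sizes:
        lengths += [a + 1, 1]
        weights += [1, a + 1]
    return {"edge_lengths": lengths, "edge_weights": weights,
            "weight_bound": S // 2 + n}

def path_totals(target, assignment):
    """Pick the include edge for subset-0 elements, the exclude edge for subset 1."""
    length = weight = 0
    for i, side in enumerate(assignment):
        idx = 2 * i + side
        length += target["edge_lengths"][idx]
        weight += target["edge_weights"][idx]
    return length, weight

sizes = [3, 1, 1, 2, 2, 1]                        # S = 10, n = 6, so W = 11
target = partition_to_swcp(sizes)
length, weight = path_totals(target, [0, 0, 1, 1, 1, 0])
```

For a balanced split, length $=$ sum of subset 0 $+ n$ and weight $= n +$ sum of subset 1, so both land exactly at $S/2 + n$.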
] -#reduction-rule("ThreePartition", "SequencingToMinimizeWeightedTardiness")[ +#let tp_smwt = load-example("ThreePartition", "SequencingToMinimizeWeightedTardiness") +#let tp_smwt_sol = tp_smwt.solutions.at(0) +#reduction-rule("ThreePartition", "SequencingToMinimizeWeightedTardiness", + example: true, + example-caption: [$m = #(tp_smwt.source.instance.sizes.len() / 3)$ groups, $B = #tp_smwt.source.instance.bound$, $3m = #tp_smwt.source.instance.sizes.len()$ elements], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(tp_smwt.source) + " -o threepartition.json", + "pred reduce threepartition.json --to " + target-spec(tp_smwt) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate threepartition.json --config " + tp_smwt_sol.source_config.map(str).join(","), + ) + + #{ + let sizes = tp_smwt.source.instance.sizes + let B = tp_smwt.source.instance.bound + let m = sizes.len() / 3 + let n = sizes.len() + let H = m * B + (m - 1) + let Wf = m * B + 1 + let tgt = tp_smwt.target.instance + + [*Step 1 -- Source instance.* The canonical ThreePartition instance has $3m = #n$ elements with sizes $(#sizes.map(str).join(", "))$ and bound $B = #B$. The total sum is $#sizes.fold(0, (a, b) => a + b) = m B = #m times #B$. Each element satisfies $B\/4 < a_i < B\/2$, i.e.\ $#(calc.floor(B / 4) + 1) <= a_i <= #(calc.ceil(B / 2) - 1)$.] + + [*Step 2 -- Construct target tasks.* The horizon is $H = m B + (m - 1) = #H$, and the filler weight is $W_f = m B + 1 = #Wf$. The reduction creates #tgt.lengths.len() tasks: + - *#n element tasks* with lengths $(#tgt.lengths.slice(0, n).map(str).join(", "))$, weights $(#tgt.weights.slice(0, n).map(str).join(", "))$, and deadlines $(#tgt.deadlines.slice(0, n).map(str).join(", "))$ (all equal to $H$). + - *#(m - 1) filler task#if m - 1 != 1 [s]* with length $#tgt.lengths.at(n)$, weight $W_f = #tgt.weights.at(n)$, and deadline $#tgt.deadlines.at(n)$ (tight: $(1) dot B + 1 = #tgt.deadlines.at(n)$). 
+ The tardiness bound is $K = #tgt.bound$.] + + [*Step 3 -- Verify a solution.* The source assignment $(#tp_smwt_sol.source_config.map(str).join(", "))$ places elements ${0, 1, 2}$ (sizes $#sizes.at(0), #sizes.at(1), #sizes.at(2)$, sum $= #(sizes.at(0) + sizes.at(1) + sizes.at(2))$) in group 0 and elements ${3, 4, 5}$ (sizes $#sizes.at(3), #sizes.at(4), #sizes.at(5)$, sum $= #(sizes.at(3) + sizes.at(4) + sizes.at(5))$) in group 1. Both groups sum to $B = #B$ #sym.checkmark. The target Lehmer code is $(#tp_smwt_sol.target_config.map(str).join(", "))$, which decodes to the schedule: tasks $0, 1, 2$ (slot 0, total length $#B$), then filler (length 1, completes at $#(B + 1) <= #tgt.deadlines.at(n)$ #sym.checkmark), then tasks $3, 4, 5$ (slot 1, completes at $#H$). All element deadlines are $#H$ #sym.checkmark, and the filler meets its tight deadline. Zero tardiness achieved #sym.checkmark.] + + [*Multiplicity:* The fixture stores one canonical witness. This instance admits other valid orderings within each slot (e.g.\ permuting elements 0, 1, 2 within slot 0), but the group assignment is unique up to slot relabeling.] + } + ], +)[ High-weight filler tasks with tight deadlines force zero-tardiness schedules to leave exactly $m$ slots of width $B$ for element tasks. Size constraints ensure 3 elements per slot. ][ _Construction._ Given $(S, B)$ with $|S| = 3m$. Horizon $H = m B + (m-1)$, filler weight $W_f = m B + 1$. Element tasks: $p_i = a_i$, $w_i = 1$, $d_i = H$. Filler tasks ($m-1$): $p = 1$, $w = W_f$, $d_j = (j+1)B + (j+1)$. Tardiness bound $K = 0$. 
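The task-list construction above can be sketched directly. The following Python is an illustrative rendering of the stated formulas on a hypothetical $m = 2$, $B = 10$ instance, not the fixture data:

```python
# Sketch of ThreePartition -> SequencingToMinimizeWeightedTardiness:
# unit-weight element tasks with deadline H, plus m - 1 heavy unit fillers
# with tight staggered deadlines d_j = (j + 1)B + (j + 1).
def three_partition_to_smwt(sizes, bound):
    m = len(sizes) // 3
    H = m * bound + (m - 1)          # horizon
    Wf = m * bound + 1               # filler weight, dominating any element set
    lengths = list(sizes) + [1] * (m - 1)
    weights = [1] * len(sizes) + [Wf] * (m - 1)
    deadlines = [H] * len(sizes) + [(j + 1) * bound + (j + 1) for j in range(m - 1)]
    return {"lengths": lengths, "weights": weights,
            "deadlines": deadlines, "bound": 0}

# m = 2 groups, B = 10: each element satisfies B/4 < a_i < B/2,
# and the two triples (3, 3, 4) each sum to B.
tgt = three_partition_to_smwt([3, 3, 4, 3, 3, 4], 10)
```

Here the single filler must finish by $B + 1 = 11$, leaving exactly two width-$B$ slots before the horizon $H = 21$.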
From ecf3d74a853021c15d7cef25409ee1105966406b Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:48:58 +0000 Subject: [PATCH 13/23] =?UTF-8?q?docs:=20add=20worked=20example=20for=20Pa?= =?UTF-8?q?rtition=20=E2=86=92=20MultiprocessorScheduling=20reduction?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Upgrade the prose-only reduction-rule entry to include a data-driven worked example with load-example binding, pred-commands block, and step-by-step verification of the canonical witness. Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 32 +++++++++++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 1229ec751..4f8aa25fc 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13071,7 +13071,37 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The subgraph-isomorphism solution $c in {0, dots, n-1}^k$ gives the host vertex assigned to each pattern vertex. The $k$-clique indicator $x in {0,1}^n$ sets $x[c[i]] := 1$ for each $i$. 
] -#reduction-rule("Partition", "MultiprocessorScheduling")[ +#let part_mps = load-example("Partition", "MultiprocessorScheduling") +#let part_mps_sol = part_mps.solutions.at(0) +#let part_mps_sizes = part_mps.source.instance.sizes +#let part_mps_n = part_mps_sizes.len() +#let part_mps_total = part_mps_sizes.fold(0, (a, b) => a + b) +#let part_mps_deadline = part_mps.target.instance.deadline +#let part_mps_nproc = part_mps.target.instance.num_processors +#let part_mps_proc0 = part_mps_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => i) +#let part_mps_proc1 = part_mps_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) +#let part_mps_load0 = part_mps_proc0.map(i => part_mps_sizes.at(i)).fold(0, (a, b) => a + b) +#let part_mps_load1 = part_mps_proc1.map(i => part_mps_sizes.at(i)).fold(0, (a, b) => a + b) +#reduction-rule("Partition", "MultiprocessorScheduling", + example: true, + example-caption: [#part_mps_n elements, total sum $S = #part_mps_total$, deadline $D = #part_mps_deadline$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(part_mps.source) + " -o partition.json", + "pred reduce partition.json --to " + target-spec(part_mps) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate partition.json --config " + part_mps_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical Partition instance has sizes $(#part_mps_sizes.map(str).join(", "))$ with total sum $S = #part_mps_total$. A balanced partition requires each subset to sum to $S / 2 = #part_mps_deadline$. + + *Step 2 -- Construction.* The reduction creates #part_mps_n tasks with lengths $(#part_mps.target.instance.lengths.map(str).join(", "))$, sets the number of processors to $m = #part_mps_nproc$, and the deadline to $D = floor(S / 2) = #part_mps_deadline$. No auxiliary variables are introduced: the target has the same #part_mps_n binary coordinates as the source. 
+ + *Step 3 -- Verify a solution.* The canonical witness is $bold(x) = (#part_mps_sol.source_config.map(str).join(", "))$, which is the same binary vector on both sides. Processor 0 receives tasks at indices $\{#part_mps_proc0.map(str).join(", ")\}$ with sizes $(#part_mps_proc0.map(i => str(part_mps_sizes.at(i))).join(", "))$, giving load $#part_mps_load0 <= #part_mps_deadline = D$. Processor 1 receives tasks at indices $\{#part_mps_proc1.map(str).join(", ")\}$ with sizes $(#part_mps_proc1.map(i => str(part_mps_sizes.at(i))).join(", "))$, giving load $#part_mps_load1 <= #part_mps_deadline = D$. Total: $#part_mps_load0 + #part_mps_load1 = #part_mps_total = S$ #sym.checkmark + + *Multiplicity:* The example DB stores one canonical witness. This instance admits other balanced partitions (e.g., swapping the two subsets), but one witness suffices to demonstrate the reduction. + ], +)[ Each element $a_i$ becomes a task of length $a_i$ on $m = 2$ processors with deadline $D = floor(S / 2)$. A balanced partition exists iff a feasible schedule exists. ][ _Construction._ Let $A = (a_1, dots, a_n)$ with total sum $S = sum_(i=1)^n a_i$. Set task lengths $ell_i = a_i$, number of processors $m = 2$, and deadline $D = floor(S / 2)$. From c88dd1806d11238575bbdbae3977c381c1c7533a Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:50:22 +0000 Subject: [PATCH 14/23] docs(paper): add worked example for ThreePartition -> ResourceConstrainedScheduling reduction Also remove duplicate Partition -> ShortestWeightConstrainedPath entry that caused a label collision. 
Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 37 ++++++++++++++++++++++++++----------- 1 file changed, 26 insertions(+), 11 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index dd1d8adbf..d3318b31f 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13462,16 +13462,6 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Identify used connector edges and follow the successor map from vertex 0. ] -#reduction-rule("Partition", "ShortestWeightConstrainedPath")[ - A layered digraph with include/exclude edges per element: include edge has length $a_i + 1$ and weight 1; exclude has length 1 and weight $a_i + 1$. A feasible path witnesses a balanced partition. -][ - _Construction._ Build layered graph $v_0, dots, v_n$ with $s = v_0$, $t = v_n$. For each element $a_i$: include edge with length $a_i + 1$, weight 1; exclude edge with length 1, weight $a_i + 1$. Weight bound $W = floor(S/2) + n$. - - _Correctness._ ($arrow.r.double$) A balanced partition uses include edges for one subset and exclude for the other, achieving feasible weight and minimum length. ($arrow.l.double$) The weight constraint forces included elements to sum to at least $ceil(S/2)$; minimizing length forces equality at $floor(S/2)$. - - _Solution extraction._ Element $i$ assigned to subset 0 if include edge selected, subset 1 if exclude edge selected. -] - #reduction-rule("MaximumIndependentSet", "IntegralFlowBundles")[ Each vertex $v_i$ maps to a flow unit through an intermediate node; edge-bundle constraints cap combined outflow of adjacent pairs at 1, so feasible flow of value $k$ exists iff an independent set of size $>= k$ exists. ][ @@ -13584,7 +13574,32 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Set $x_i = top$ if $v_i^+$ is in side 1, else $x_i = bot$. 
] -#reduction-rule("ThreePartition", "ResourceConstrainedScheduling")[ +#let tp_rcs = load-example("ThreePartition", "ResourceConstrainedScheduling") +#let tp_rcs_sol = tp_rcs.solutions.at(0) +#reduction-rule("ThreePartition", "ResourceConstrainedScheduling", + example: true, + example-caption: [#tp_rcs.source.instance.sizes.len() elements, $B = #tp_rcs.source.instance.bound$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(tp_rcs.source) + " -o threepartition.json", + "pred reduce threepartition.json --to " + target-spec(tp_rcs) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate threepartition.json --config " + tp_rcs_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical Three-Partition instance has $3m = #tp_rcs.source.instance.sizes.len()$ elements with sizes $(#tp_rcs.source.instance.sizes.map(str).join(", "))$ and bound $B = #tp_rcs.source.instance.bound$, so $m = #(tp_rcs.source.instance.sizes.len() / 3)$ groups are required. + + *Step 2 -- Build the Resource-Constrained Scheduling instance.* Each element $a_i$ becomes a unit-length task with resource requirement $r_i = a_i$. The reduction sets $p = #tp_rcs.target.instance.num_processors$ processors, a single resource with bound $#tp_rcs.target.instance.resource_bounds.at(0)$, and deadline $D = #tp_rcs.target.instance.deadline$. The target has #tp_rcs.target.instance.resource_requirements.len() tasks with resource requirements $(#tp_rcs.target.instance.resource_requirements.map(r => str(r.at(0))).join(", "))$. 
+ + *Step 3 -- Verify the canonical witness.* The source config $(#tp_rcs_sol.source_config.map(str).join(", "))$ assigns elements to groups: + #for g in range(tp_rcs.target.instance.deadline) [ + - Slot #g: elements ${#tp_rcs_sol.source_config.enumerate().filter(((i, x)) => x == g).map(((i, x)) => str(i)).join(", ")}$ with sizes $#tp_rcs_sol.source_config.enumerate().filter(((i, x)) => x == g).map(((i, x)) => str(tp_rcs.source.instance.sizes.at(i))).join(" + ") = #tp_rcs.source.instance.bound = B$ #sym.checkmark + ] + Each slot has exactly 3 tasks and each slot's resource usage sums to $B$. The target config is $(#tp_rcs_sol.target_config.map(str).join(", "))$, matching the source config since task $t_i$ is assigned to the same slot as element $a_i$. + + *Multiplicity:* The fixture stores one canonical witness. Other valid 3-partitions (if any) would yield equally valid schedules. + ], +)[ Each element becomes a unit-length task requiring $a_i$ units of a shared resource with bound $B$. With 3 processors and deadline $m$, every slot receives exactly 3 tasks summing to $B$. ][ _Construction._ Given $(S, B)$ with $|S| = 3m$ and $B/4 < a_i < B/2$. Create $3m$ unit-length tasks with resource requirement $r_i = a_i$, $p = 3$ processors, resource bound $B$, deadline $D = m$. 
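Reviewer note: the feasibility conditions of this Resource-Constrained Scheduling construction (at most 3 unit tasks per slot, slot resource usage at most $B$, all slots before deadline $m$) can be sketched in a few lines of Python. The sizes and helper name below are illustrative, not the fixture.

```python
# Feasibility check for ThreePartition -> ResourceConstrainedScheduling:
# unit tasks with requirement r_i = a_i, p = 3 processors, resource
# bound B, deadline D = m. Sizes are illustrative, not the fixture.
def rcs_feasible(sizes, B, slots):
    m = len(sizes) // 3
    for g in range(m):
        tasks = [i for i, s in enumerate(slots) if s == g]
        if len(tasks) > 3:                    # only 3 processors per unit slot
            return False
        if sum(sizes[i] for i in tasks) > B:  # shared-resource bound
            return False
    return all(0 <= s < m for s in slots)     # every task meets deadline D = m

sizes = [3, 3, 4, 3, 3, 4]
print(rcs_feasible(sizes, 10, [0, 0, 0, 1, 1, 1]))  # True
print(rcs_feasible(sizes, 10, [0, 0, 0, 0, 1, 1]))  # False: 4 tasks in slot 0
```

Because the total requirement is $m B$, the per-slot bound `<= B` forces equality in every slot of a feasible schedule, which is exactly the 3-partition condition.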
From 13d6dc8c53d2e5f9e49c8c7a7dd46ce00eed2a54 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:51:55 +0000 Subject: [PATCH 15/23] docs(paper): add worked example for ThreePartition -> SequencingWithReleaseTimesAndDeadlines reduction Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index dd1d8adbf..435842442 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13594,7 +13594,30 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Task $t_i$ assigned to slot $k$ means element $a_i$ belongs to group $k$. ] -#reduction-rule("ThreePartition", "SequencingWithReleaseTimesAndDeadlines")[ +#let tp_srd = load-example("ThreePartition", "SequencingWithReleaseTimesAndDeadlines") +#let tp_srd_sol = tp_srd.solutions.at(0) +#reduction-rule("ThreePartition", "SequencingWithReleaseTimesAndDeadlines", + example: true, + example-caption: [3-Partition with $3m = #tp_srd.source.instance.sizes.len()$ elements and $B = #tp_srd.source.instance.bound$ mapped to #tp_srd.target.instance.lengths.len() sequencing tasks], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(tp_srd.source) + " -o tp.json", + "pred reduce tp.json --to " + target-spec(tp_srd) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate tp.json --config " + tp_srd_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical 3-Partition instance has $3m = #tp_srd.source.instance.sizes.len()$ elements with sizes $(#tp_srd.source.instance.sizes.map(str).join(", "))$ and bound $B = #tp_srd.source.instance.bound$. Since $m = #(tp_srd.source.instance.sizes.len() / 3)$, we must partition the elements into $#(tp_srd.source.instance.sizes.len() / 3)$ groups, each summing to $B$. 
+ + *Step 2 -- Construct element tasks.* Each element $a_i$ becomes a task with processing time $p_i = a_i$, release time $r_i = 0$, and deadline $d_i = H$ where $H = m(B+1) - 1 = #(tp_srd.source.instance.sizes.len() / 3) dot (#tp_srd.source.instance.bound + 1) - 1 = #(tp_srd.source.instance.sizes.len() / 3 * (tp_srd.source.instance.bound + 1) - 1)$. This gives #tp_srd.source.instance.sizes.len() element tasks with lengths $(#tp_srd.target.instance.lengths.slice(0, tp_srd.source.instance.sizes.len()).map(str).join(", "))$. + + *Step 3 -- Construct filler tasks.* Add $m - 1 = #(tp_srd.source.instance.sizes.len() / 3 - 1)$ filler task(s). Filler $j$ has length $1$, release time $r_j = (j+1)B + j = #tp_srd.target.instance.release_times.at(tp_srd.source.instance.sizes.len())$, and deadline $d_j = r_j + 1 = #tp_srd.target.instance.deadlines.at(tp_srd.source.instance.sizes.len())$. This tight window pins each filler to a single time unit, splitting the timeline into $m$ slots of width $B = #tp_srd.source.instance.bound$. + + *Step 4 -- Verify a solution.* The source witness assigns elements to groups: $[#tp_srd_sol.source_config.map(str).join(", ")]$. Group 0 contains elements with sizes $(#{ let s = tp_srd.source.instance.sizes; tp_srd_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => str(s.at(i))).join(", ")})$, summing to $#{ let s = tp_srd.source.instance.sizes; tp_srd_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => s.at(i)).sum() } = B$ #sym.checkmark. Group 1 contains sizes $(#{ let s = tp_srd.source.instance.sizes; tp_srd_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(s.at(i))).join(", ")})$, summing to $#{ let s = tp_srd.source.instance.sizes; tp_srd_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => s.at(i)).sum() } = B$ #sym.checkmark. 
The target Lehmer code is $[#tp_srd_sol.target_config.map(str).join(", ")]$: element tasks fill slot $[0, B)$, the filler occupies its tight window $[B, B+1)$, and remaining elements fill slot $[B+1, 2B+1)$. + + *Multiplicity:* The fixture stores one canonical witness. A second valid partition (swapping groups) exists, but both map to distinct Lehmer codes. + ], +)[ $m-1$ filler tasks with tight release windows partition the timeline into $m$ slots of width $B$. Element tasks must fill these slots, and size constraints force exactly 3 elements per slot. ][ _Construction._ Given $(S, B)$ with $|S| = 3m$. Create $3m$ element tasks with $p_i = a_i$, $r_i = 0$, $d_i = H$ where $H = m(B+1) - 1$. Add $m-1$ filler tasks with $p = 1$, $r_j = (j+1)B + j$, $d_j = (j+1)B + j + 1$. From d52bd2dec3727c2796df54fa95492c9b89e0084c Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 11:52:33 +0000 Subject: [PATCH 16/23] docs: add worked example for ThreePartition -> FlowShopScheduling reduction Add data-driven example with load-example binding, pred-commands block, and step-by-step walkthrough showing source instance, job construction, separator window mechanism, and end-to-end solution verification. 
Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 3ff1ba9d8..bd3b4ca3c 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -11657,6 +11657,39 @@ The following reductions to Integer Linear Programming are straightforward formu // Scheduling +#let tp_fss = load-example("ThreePartition", "FlowShopScheduling") +#let tp_fss_sol = tp_fss.solutions.at(0) +#reduction-rule("ThreePartition", "FlowShopScheduling", + example: true, + example-caption: [$3m = #tp_fss.source.instance.sizes.len()$ elements, $B = #tp_fss.source.instance.bound$, $m = #(tp_fss.source.instance.sizes.len() / 3)$ groups, deadline $D = #tp_fss.target.instance.deadline$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(tp_fss.source) + " -o threepartition.json", + "pred reduce threepartition.json --to " + target-spec(tp_fss) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate threepartition.json --config " + tp_fss_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical 3-Partition instance has $3m = #tp_fss.source.instance.sizes.len()$ elements with sizes $(#tp_fss.source.instance.sizes.map(str).join(", "))$ and bound $B = #tp_fss.source.instance.bound$. A valid partition splits them into $m = #(tp_fss.source.instance.sizes.len() / 3)$ groups of 3 elements, each summing to $B$. + + *Step 2 -- Construct element and separator jobs.* The reduction creates #tp_fss.source.instance.sizes.len() element jobs, one per element $a_i$, each with task lengths $(a_i, a_i, a_i)$ on the #tp_fss.target.instance.num_processors machines. 
It also creates $m - 1 = #(tp_fss.source.instance.sizes.len() / 3 - 1)$ separator jobs with task lengths $(0, L, 0)$ where $L = m B + 1 = #(tp_fss.source.instance.sizes.len() / 3) dot #tp_fss.source.instance.bound + 1 = #(tp_fss.source.instance.sizes.len() / 3 * tp_fss.source.instance.bound + 1)$. The target has $3m + (m-1) = #tp_fss.target.instance.task_lengths.len()$ jobs in total with deadline $D = #tp_fss.target.instance.deadline$. + + *Step 3 -- Separator jobs create $m$ windows of width $B$.* The large machine-2 task of length $L = #(tp_fss.source.instance.sizes.len() / 3 * tp_fss.source.instance.bound + 1)$ on each separator job forces any feasible schedule to place separators between groups of element jobs on machine 2. This partitions the machine-2 timeline into $m = #(tp_fss.source.instance.sizes.len() / 3)$ windows, each of width $B = #tp_fss.source.instance.bound$. Since each element job has equal task lengths on all 3 machines, the same grouping structure is imposed across all machines. + + *Step 4 -- Verify a solution.* The canonical witness assigns elements to groups $(#tp_fss_sol.source_config.map(str).join(", "))$, i.e., elements $\{0, 1, 2\}$ to group 0 and $\{3, 4, 5\}$ to group 1. Group 0 sums: $#tp_fss.source.instance.sizes.at(0) + #tp_fss.source.instance.sizes.at(1) + #tp_fss.source.instance.sizes.at(2) = #(tp_fss.source.instance.sizes.at(0) + tp_fss.source.instance.sizes.at(1) + tp_fss.source.instance.sizes.at(2)) = B$ #sym.checkmark. Group 1 sums: $#tp_fss.source.instance.sizes.at(3) + #tp_fss.source.instance.sizes.at(4) + #tp_fss.source.instance.sizes.at(5) = #(tp_fss.source.instance.sizes.at(3) + tp_fss.source.instance.sizes.at(4) + tp_fss.source.instance.sizes.at(5)) = B$ #sym.checkmark. The target Lehmer code is $(#tp_fss_sol.target_config.map(str).join(", "))$, encoding the job order where elements of group 0 come first, then the separator, then elements of group 1. 
The resulting makespan $<= D = #tp_fss.target.instance.deadline$ #sym.checkmark. + + *Multiplicity:* The fixture stores one canonical witness. Other valid partitions may exist (e.g., swapping elements of equal size between groups), but one witness suffices to demonstrate the reduction. + ], +)[ + On 3 machines, $m-1$ separator jobs with large machine-2 tasks force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. +][ + _Construction._ Given $(S, B)$ with $|S| = 3m$ and $L = m B + 1$. Element jobs: task lengths $(a_i, a_i, a_i)$ on machines 1, 2, 3. Separator jobs ($m-1$): task lengths $(0, L, 0)$. Deadline $D$ computed from canonical schedule. + + _Correctness._ ($arrow.r.double$) A valid 3-partition interleaves element groups with separators, meeting the deadline. ($arrow.l.double$) Separators on machine 2 create $m$ windows of capacity $B$; element tasks fill exactly; size constraints give 3 per window. + + _Solution extraction._ Decode Lehmer code; walk job sequence incrementing group at each separator. +] + #reduction-rule("FlowShopScheduling", "ILP")[ Order the jobs with pairwise precedence bits and completion-time variables on every machine; the deadline becomes a makespan bound. ][ From fdfaae6e4837ec2dcefdf7d8e7d7582df7f18dea Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 12:02:30 +0000 Subject: [PATCH 17/23] docs(paper): add worked example for KSatisfiability -> KClique reduction Add data-driven worked example with load-example binding, pred-commands reproducibility block, step-by-step construction walkthrough, and end-to-end solution verification for the 3-SAT to k-clique reduction. 
Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 246 +++++++++++++++++++++++++++++++++++--- 1 file changed, 229 insertions(+), 17 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index e827e1447..941ab2d33 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13168,7 +13168,52 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Orient the path so $s$ is the start and $t$ the end. Drop $s$ and the last two elements $v', t$; the remaining sequence is the Hamiltonian circuit witness. ] -#reduction-rule("KClique", "SubgraphIsomorphism")[ +#let kc_si = load-example("KClique", "SubgraphIsomorphism") +#let kc_si_sol = kc_si.solutions.at(0) +#reduction-rule("KClique", "SubgraphIsomorphism", + example: true, + example-caption: [5-vertex graph with $k = #kc_si.source.instance.k$: clique detection via subgraph isomorphism], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(kc_si.source) + " -o kclique.json", + "pred reduce kclique.json --to " + target-spec(kc_si) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate kclique.json --config " + kc_si_sol.source_config.map(str).join(","), + ) + + #{ + let n = kc_si.source.instance.graph.num_vertices + let k = kc_si.source.instance.k + let clique-verts = kc_si_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) + let fills = kc_si_sol.source_config.map(c => if c == 1 { graph-colors.at(0) } else { white }) + // Simple circular layout for the source graph + let angle-step = 360deg / n + let radius = 2.0 + let positions = range(n).map(i => { + let a = 90deg - i * angle-step + (calc.cos(a) * radius, calc.sin(a) * radius) + }) + align(center, canvas(length: 0.8cm, { + for (u, v) in kc_si.source.instance.graph.edges { + g-edge(positions.at(u), positions.at(v)) + } + for (i, pos) in positions.enumerate() { + g-node(pos, name: str(i), fill: 
fills.at(i), label: str(i)) + } + })) + } + + *Step 1 -- Source instance.* The canonical KClique instance has $n = #kc_si.source.instance.graph.num_vertices$ vertices, $|E| = #kc_si.source.instance.graph.edges.len()$ edges, and clique size $k = #kc_si.source.instance.k$. The task is to decide whether $G$ contains $k$ pairwise-adjacent vertices. + + *Step 2 -- Construct the pattern graph.* The reduction sets the host graph $H := G$ (unchanged) and builds the pattern graph $P := K_k$, the complete graph on $k = #kc_si.source.instance.k$ vertices with $binom(k, 2) = #kc_si.target.instance.pattern_graph.edges.len()$ edges. The target instance has $#kc_si.target.instance.host_graph.num_vertices$ host vertices and $#kc_si.target.instance.pattern_graph.num_vertices$ pattern vertices. + + *Step 3 -- Variable semantics.* The source uses $n = #kc_si.source.instance.graph.num_vertices$ binary indicator variables ($x_v = 1$ iff vertex $v$ is in the clique). The target uses $k = #kc_si.source.instance.k$ variables, each ranging over ${0, dots, n - 1}$, specifying which host vertex each pattern vertex maps to. + + *Step 4 -- Verify a solution.* The canonical witness has source config $bold(x) = (#kc_si_sol.source_config.map(str).join(", "))$, selecting vertices ${#kc_si_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$. The target config is $(#kc_si_sol.target_config.map(str).join(", "))$, mapping pattern vertex $i$ to host vertex $c_i$. The image vertices ${#kc_si_sol.target_config.map(str).join(", ")}$ are pairwise adjacent in $G$ (they form a triangle), confirming the isomorphism #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. The $k!$ permutations of the clique vertices each give a valid subgraph isomorphism, so the target side has $#kc_si.source.instance.k ! = #calc.fact(kc_si.source.instance.k)$ witnesses for this single source clique. 
+ ], +)[ A $k$-clique in $G$ is precisely a subgraph of $G$ isomorphic to $K_k$. Constructing $K_k$ as the pattern and passing $G$ as the host reduces $k$-clique detection to a single subgraph-isomorphism query. ][ _Construction._ Given a $k$-clique instance $(G, k)$ with $G = (V, E)$, $|V| = n$, $|E| = m$, build the subgraph-isomorphism instance $(H, P)$ where $H := G$ (host) and $P := K_k$ (pattern, with $k$ vertices and $binom(k, 2)$ edges). @@ -13218,7 +13263,28 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The processor assignment $p_i in {0, 1}$ is the partition assignment directly. ] -#reduction-rule("HamiltonianPath", "IsomorphicSpanningTree")[ +#let hp_ist = load-example("HamiltonianPath", "IsomorphicSpanningTree") +#let hp_ist_sol = hp_ist.solutions.at(0) +#reduction-rule("HamiltonianPath", "IsomorphicSpanningTree", + example: true, + example-caption: [$n = #graph-num-vertices(hp_ist.source.instance)$ vertices, $|E| = #graph-num-edges(hp_ist.source.instance)$ edges: HP $arrow.r$ IST], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hp_ist.source) + " -o hp.json", + "pred reduce hp.json --to " + target-spec(hp_ist) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hp.json --config " + hp_ist_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical Hamiltonian Path instance has $n = #graph-num-vertices(hp_ist.source.instance)$ vertices and $|E| = #graph-num-edges(hp_ist.source.instance)$ edges. The source configuration is a permutation giving the vertex visit order. + + *Step 2 -- Construction.* The reduction keeps the host graph $G$ unchanged and constructs the path graph $P_n$ with $n = #hp_ist.target.instance.tree.num_vertices$ vertices and $#hp_ist.target.instance.tree.edges.len()$ edges ($0 - 1 - dots - #(hp_ist.target.instance.tree.num_vertices - 1)$). The target IST instance is the pair $(G, P_n)$. 
No auxiliary variables are introduced: the target has the same $n = #graph-num-vertices(hp_ist.source.instance)$ configuration coordinates as the source. + + *Step 3 -- Verify a solution.* The canonical source witness is the path ordering $(#hp_ist_sol.source_config.map(str).join(", "))$, meaning the path visits vertices in that order. The target witness is the tree-vertex mapping $(#hp_ist_sol.target_config.map(str).join(", "))$, where entry $i$ gives the graph vertex assigned to tree vertex $i$. Since $P_n$ has edges $\{i, i+1\}$, the mapping must send each such edge to a graph edge $\{c[i], c[i+1]\}$, and the image must span all $n$ vertices. Here the mapping is the identity, confirming a valid spanning tree isomorphic to $P_n$ #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. The path graph $P_n$ admits two orientations (forward and reverse), so the source problem generically has at least two witnesses per Hamiltonian path; one suffices to demonstrate the reduction. + ], +)[ A Hamiltonian path in $G$ is a spanning tree isomorphic to the path graph $P_n$. The reduction asks whether $G$ admits a spanning tree isomorphic to $P_n$. ][ _Construction._ Given $G = (V, E)$ with $|V| = n$, the target instance is $(G, P_n)$ where $P_n$ is the path $0 - 1 - dots - (n-1)$ with $n - 1$ edges. @@ -13259,7 +13325,46 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Recover the vertex cycle order from the tour edge set. 
] -#reduction-rule("KClique", "ConjunctiveBooleanQuery")[ +#let kc_cbq = load-example("KClique", "ConjunctiveBooleanQuery") +#let kc_cbq_sol = kc_cbq.solutions.at(0) +#reduction-rule("KClique", "ConjunctiveBooleanQuery", + example: true, + example-caption: [5-vertex graph ($n = #kc_cbq.source.instance.graph.num_vertices$, $|E| = #kc_cbq.source.instance.graph.edges.len()$, $k = #kc_cbq.source.instance.k$)], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(kc_cbq.source) + " -o kclique.json", + "pred reduce kclique.json --to " + target-spec(kc_cbq) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate kclique.json --config " + kc_cbq_sol.source_config.map(str).join(","), + ) + + #{ + let verts = ((0, 1.5), (1.5, 2.5), (3, 1.5), (1.5, 0), (4.5, 0)) + let edges = kc_cbq.source.instance.graph.edges + let clique-verts = kc_cbq_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) + align(center, canvas(length: 0.8cm, { + for (u, v) in edges { + let in-clique = clique-verts.contains(u) and clique-verts.contains(v) + g-edge(verts.at(u), verts.at(v), stroke: if in-clique { 2pt + blue } else { 1pt + gray }) + } + for (k, pos) in verts.enumerate() { + let in-clique = clique-verts.contains(k) + g-node(pos, name: str(k), fill: if in-clique { graph-colors.at(1) } else { white }, label: str(k)) + } + })) + } + + *Step 1 -- Source instance.* The graph $G$ has $n = #kc_cbq.source.instance.graph.num_vertices$ vertices and $|E| = #kc_cbq.source.instance.graph.edges.len()$ edges, with $k = #kc_cbq.source.instance.k$. Vertices ${0, 1, 2}$ form a triangle (the unique 3-clique). + + *Step 2 -- Build the edge relation.* Define the domain $D = {0, dots, #(kc_cbq.source.instance.graph.num_vertices - 1)}$ with $|D| = #kc_cbq.target.instance.domain_size$. 
The single binary relation $R$ contains both orientations of every edge: $|R| = 2 |E| = #kc_cbq.target.instance.relations.at(0).tuples.len()$ tuples of arity #kc_cbq.target.instance.relations.at(0).arity. + + *Step 3 -- Form the conjunctive query.* Introduce $k = #kc_cbq.target.instance.num_variables$ existential variables $y_0, y_1, y_2$ over $D$. The conjunction has $binom(k, 2) = #kc_cbq.target.instance.conjuncts.len()$ conjuncts: $R(y_0, y_1) and R(y_0, y_2) and R(y_1, y_2)$. + + *Step 4 -- Verify a solution.* The satisfying assignment $(y_0, y_1, y_2) = (#kc_cbq_sol.target_config.map(str).join(", "))$ maps to vertices ${#kc_cbq_sol.target_config.map(str).join(", ")}$. Check each conjunct: $(#kc_cbq_sol.target_config.at(0), #kc_cbq_sol.target_config.at(1)) in R$ #sym.checkmark, $(#kc_cbq_sol.target_config.at(0), #kc_cbq_sol.target_config.at(2)) in R$ #sym.checkmark, $(#kc_cbq_sol.target_config.at(1), #kc_cbq_sol.target_config.at(2)) in R$ #sym.checkmark --- all pairs are adjacent, confirming the 3-clique. The source indicator is $(#kc_cbq_sol.source_config.map(str).join(", "))$, marking exactly vertices ${0, 1, 2}$. + + *Multiplicity:* The fixture stores one canonical witness. The triangle ${0, 1, 2}$ is the unique 3-clique in this graph, but the CBQ query has $3! = 6$ satisfying tuples (one per permutation of the three vertices); the canonical witness selects the sorted order $(0, 1, 2)$. + ], +)[ Introduce $k$ existential variables over the vertex set and require the edge relation $R(y_i, y_j)$ for every pair $i < j$. The query is satisfiable iff $G$ contains a $k$-clique. ][ _Construction._ Given $G = (V, E)$ with $|V| = n$ and parameter $k$, set domain $D = V$, define binary relation $R = {(u,v), (v,u) : {u,v} in E}$, and form the conjunction $phi = and.big_(0 <= i < j < k) R(y_i, y_j)$ with $binom(k, 2)$ conjuncts. 
@@ -13269,7 +13374,35 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The satisfying tuple $(d_0, dots, d_(k-1))$ gives vertex indices; set $x[d_i] = 1$ for the clique indicator. ] -#reduction-rule("ExactCoverBy3Sets", "StaffScheduling")[ +#let x3c_ss = load-example("ExactCoverBy3Sets", "StaffScheduling") +#let x3c_ss_sol = x3c_ss.solutions.at(0) +#reduction-rule("ExactCoverBy3Sets", "StaffScheduling", + example: true, + example-caption: [$|X| = #x3c_ss.source.instance.universe_size$, $|cal(C)| = #x3c_ss.source.instance.subsets.len()$ subsets], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(x3c_ss.source) + " -o x3c.json", + "pred reduce x3c.json --to " + target-spec(x3c_ss) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate x3c.json --config " + x3c_ss_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The X3C instance has universe $X = {0, dots, #(x3c_ss.source.instance.universe_size - 1)}$ ($|X| = #x3c_ss.source.instance.universe_size = 3q$, so $q = #(x3c_ss.source.instance.universe_size / 3)$) and $|cal(C)| = #x3c_ss.source.instance.subsets.len()$ subsets: #x3c_ss.source.instance.subsets.enumerate().map(((j, s)) => $S_#j = {#s.map(str).join(", ")}$).join(", "). + + *Step 2 -- Construct StaffScheduling instance.* The reduction creates $#x3c_ss.target.instance.requirements.len()$ periods (one per universe element), each with requirement $r[i] = 1$. Each subset $S_j$ becomes a schedule $sigma_j$ with $sigma_j [i] = 1$ iff $i in S_j$, giving shifts_per_schedule $= #x3c_ss.target.instance.shifts_per_schedule$. The worker budget is $W = q = #x3c_ss.target.instance.num_workers$. 
+ + *Step 3 -- Verify a solution.* The canonical cover selects subsets via $(#x3c_ss_sol.source_config.map(str).join(", "))$: #{ + let selected = x3c_ss_sol.source_config.enumerate().filter(((j, x)) => x == 1).map(((j, x)) => { + let s = x3c_ss.source.instance.subsets.at(j) + $S_#j = {#s.map(str).join(", ")}$ + }) + selected.join(", ") + }. These $#x3c_ss_sol.source_config.filter(x => x == 1).len()$ subsets are pairwise disjoint and cover all $#x3c_ss.source.instance.universe_size$ elements #sym.checkmark \ + The target config $(#x3c_ss_sol.target_config.map(str).join(", "))$ assigns $w[j]$ workers to schedule $j$: total workers $= #x3c_ss_sol.target_config.sum() = q$ #sym.checkmark. Each period is covered by exactly one worker #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. The instance admits a second exact cover ${S_2, S_3}$, but the fixture records only the first found. + ], +)[ Each universe element becomes a time period requiring exactly one worker, each 3-element subset becomes a schedule active on those three periods, and the worker budget is $q = |X|/3$. A feasible assignment selects $q$ schedules covering every period exactly once. ][ _Construction._ Let $(X, cal(C))$ with $|X| = 3q$. Set $m_p = 3q$ periods (one per element), requirement $r[i] = 1$ for all $i$, worker budget $W = q$. Each subset $S_j = {a, b, c}$ defines schedule $sigma_j$ with $sigma_j[i] = 1$ iff $i in S_j$. @@ -13341,7 +13474,36 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Internal arcs at positions $0, dots, n-1$; the cover is $c[0 : n]$. 
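The StaffScheduling construction is equally easy to sketch. The Python fragment below uses an illustrative layout (not the `pred` JSON schema) to build the periods, schedules, and worker budget, then checks the exact-cover property directly.

```python
def x3c_to_staff_scheduling(universe_size, subsets):
    """One period per element with requirement 1; one 0/1 schedule
    per subset; worker budget W = q = |X| / 3."""
    assert universe_size % 3 == 0
    q = universe_size // 3
    requirements = [1] * universe_size
    schedules = [[1 if i in s else 0 for i in range(universe_size)]
                 for s in subsets]
    return requirements, schedules, q

def is_exact_cover(universe_size, subsets, selected):
    covered = sorted(e for j in selected for e in subsets[j])
    return covered == list(range(universe_size))

subsets = [[0, 1, 2], [2, 3, 4], [3, 4, 5], [0, 1, 5]]
req, scheds, W = x3c_to_staff_scheduling(6, subsets)
# Selecting schedules {0, 2} staffs every period exactly once with W = 2 workers.
selected = [0, 2]
assert is_exact_cover(6, subsets, selected)
period_cover = [sum(scheds[j][i] for j in selected) for i in range(6)]
assert period_cover == req
```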
] -#reduction-rule("KSatisfiability", "KClique")[ +#let ksat_kc = load-example("KSatisfiability", "KClique") +#let ksat_kc_sol = ksat_kc.solutions.at(0) +#reduction-rule("KSatisfiability", "KClique", + example: true, + example-caption: [3-SAT with $m = #ksat_kc.source.instance.clauses.len()$ clauses, $n = #ksat_kc.source.instance.num_vars$ variables $arrow.r$ $k$-clique on $#ksat_kc.target.instance.graph.num_vertices$ vertices], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(ksat_kc.source) + " -o ksat.json", + "pred reduce ksat.json --to " + target-spec(ksat_kc) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate ksat.json --config " + ksat_kc_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The 3-CNF formula $phi$ has $m = #ksat_kc.source.instance.clauses.len()$ clauses over $n = #ksat_kc.source.instance.num_vars$ variables: + $ phi = (#ksat_kc.source.instance.clauses.enumerate().map(((j, c)) => { + let lits = c.literals.map(l => { + if l > 0 { $x_#l$ } else { $overline(x_#(-l))$ } + }) + lits.join($or$) + }).join($) and ($)) $ + + *Step 2 -- Construct the conflict graph.* Create one vertex per literal position: vertex $3j + p$ represents position $p$ ($0$-indexed) in clause $j$, giving $|V| = 3 dot #ksat_kc.source.instance.clauses.len() = #ksat_kc.target.instance.graph.num_vertices$ vertices. Connect $(j_1, p_1)$ and $(j_2, p_2)$ whenever $j_1 != j_2$ and the two literals are not contradictory. The resulting graph has $|E| = #ksat_kc.target.instance.graph.edges.len()$ edges: $E = {#ksat_kc.target.instance.graph.edges.map(e => "(" + e.map(str).join(", ") + ")").join(", ")}$. Set $k = m = #ksat_kc.target.instance.k$. + + *Step 3 -- Verify a solution.* The satisfying assignment $(x_1, x_2, x_3) = (#ksat_kc_sol.source_config.map(str).join(", "))$ makes literal $x_3$ true in clause 0 (position 2, vertex 2) and literal $overline(x_1)$ true in clause 1 (position 0, vertex 3). 
The target configuration $bold(x) = (#ksat_kc_sol.target_config.map(str).join(", "))$ selects vertices #{ + ksat_kc_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(" and ") + }, which form a clique (edge $(2, 3) in E$ #sym.checkmark) of size $k = #ksat_kc.target.instance.k$ spanning both clause groups #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. The formula has multiple satisfying assignments; each induces at least one $k$-clique by choosing one true literal per clause. + ], +)[ Assign one vertex per literal position $(j, p)$; connect vertices from different clauses whose literals are not contradictory. A $k$-clique selects one consistent true literal per clause. ][ _Construction._ Given 3-CNF $phi = C_1 and dots.c and C_m$ over $n$ variables, construct $G = (V, E)$ with $V = {(j, p) : 1 <= j <= m, 1 <= p <= 3}$, $|V| = 3m$. Edge between $(j_1, p_1)$ and $(j_2, p_2)$ iff $j_1 != j_2$ and $ell_(j_1, p_1) != not ell_(j_2, p_2)$. Set $k = m$. @@ -13536,7 +13698,31 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The QAP permutation $gamma$ is the Hamiltonian circuit visit order directly. 
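A minimal Python sketch of the literal-conflict graph follows; it uses signed-integer literals and hypothetical helper names (the real instances come from the `pred` fixtures).

```python
from itertools import combinations

def threesat_to_kclique(clauses):
    """Vertex 3*j + p = position p of clause j; connect literal
    occurrences from different clauses unless they are complementary.
    The clique size target is k = number of clauses."""
    m = len(clauses)
    positions = [(j, p) for j in range(m) for p in range(3)]
    edges = []
    for (j1, p1), (j2, p2) in combinations(positions, 2):
        if j1 != j2 and clauses[j1][p1] != -clauses[j2][p2]:
            edges.append((3 * j1 + p1, 3 * j2 + p2))
    return edges, m

# (x1 or x2 or x3) and (~x1 or ~x2 or x3), literals as signed ints.
clauses = [(1, 2, 3), (-1, -2, 3)]
edges, k = threesat_to_kclique(clauses)
assert (0, 3) not in edges  # x1 vs ~x1: contradictory, no edge
assert (2, 5) in edges      # x3 vs x3: compatible across clauses
```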
] -#reduction-rule("HamiltonianPath", "ConsecutiveOnesSubmatrix")[ +#let hp_cos = load-example("HamiltonianPath", "ConsecutiveOnesSubmatrix") +#let hp_cos_sol = hp_cos.solutions.at(0) +#reduction-rule("HamiltonianPath", "ConsecutiveOnesSubmatrix", + example: true, + example-caption: [Path graph $P_#hp_cos.source.instance.graph.num_vertices$ ($n = #hp_cos.source.instance.graph.num_vertices$, $|E| = #hp_cos.source.instance.graph.edges.len()$)], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(hp_cos.source) + " -o hp.json", + "pred reduce hp.json --to " + target-spec(hp_cos) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate hp.json --config " + hp_cos_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The graph $G$ has $n = #hp_cos.source.instance.graph.num_vertices$ vertices and $m = #hp_cos.source.instance.graph.edges.len()$ edges: ${#hp_cos.source.instance.graph.edges.map(e => "(" + str(e.at(0)) + "," + str(e.at(1)) + ")").join(", ")}$. + + *Step 2 -- Build the incidence matrix.* Construct the $n times m = #hp_cos.target.instance.matrix.len() times #hp_cos.target.instance.matrix.at(0).len()$ vertex-edge incidence matrix $A$ where $A_(i,j) = 1$ iff vertex $i$ is an endpoint of edge $j$. Set bound $K = n - 1 = #hp_cos.target.instance.bound$. The matrix is: + $ A = mat( + #hp_cos.target.instance.matrix.map(row => row.map(v => if v { "1" } else { "0" }).join(", ")).join(";\n ") + ) $ + + *Step 3 -- Verify a solution.* The canonical Hamiltonian path visits vertices in order $(#hp_cos_sol.source_config.map(str).join(", "))$. The target configuration $(#hp_cos_sol.target_config.map(str).join(", "))$ selects #hp_cos_sol.target_config.filter(x => x == 1).len() columns (all $K = #hp_cos.target.instance.bound$ edges). Ordering the selected columns by path position, each row has at most two consecutive ones #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. 
The path $P_#hp_cos.source.instance.graph.num_vertices$ has exactly 2 Hamiltonian paths (forward and reverse); the canonical one is $(#hp_cos_sol.source_config.map(str).join(", "))$. + ], +)[ The vertex-edge incidence matrix has the consecutive-ones property on a selected subset of $n-1$ columns iff those columns correspond to a Hamiltonian path. ][ _Construction._ Given $G = (V, E)$ with $|V| = n$, $|E| = m$, build $n times m$ Boolean matrix $A$ with $A_(i,j) = 1$ iff vertex $i$ is an endpoint of edge $j$. Set bound $K = n - 1$. @@ -13597,7 +13783,43 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ The selection vector is unchanged. ] -#reduction-rule("NAESatisfiability", "MaxCut")[ +#let nae_mc = load-example("NAESatisfiability", "MaxCut") +#let nae_mc_sol = nae_mc.solutions.at(0) +#reduction-rule("NAESatisfiability", "MaxCut", + example: true, + example-caption: [$n = #nae_mc.source.instance.num_vars$ variables, $m = #nae_mc.source.instance.clauses.len()$ clauses], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(nae_mc.source) + " -o naesat.json", + "pred reduce naesat.json --to " + target-spec(nae_mc) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate naesat.json --config " + nae_mc_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The canonical NAE-SAT instance has $n = #nae_mc.source.instance.num_vars$ variables and $m = #nae_mc.source.instance.clauses.len()$ clauses: + #for (j, clause) in nae_mc.source.instance.clauses.enumerate() [ + - $C_#(j+1) = (#clause.literals.map(l => { if l > 0 { $x_#l$ } else { $overline(x)_#(-l)$ } }).join($, $))$ + ] + + *Step 2 -- Construct the MaxCut graph.* Each variable $x_i$ yields a literal pair $(v_i^+, v_i^-)$ connected by a heavy variable edge of weight $M = m + 1 = #(nae_mc.source.instance.clauses.len() + 1)$. 
This gives $2n = #nae_mc.target.instance.graph.num_vertices$ vertices and $n = #nae_mc.source.instance.num_vars$ variable edges. For each clause, unit-weight edges connect all pairs of literal vertices: $binom(3, 2) = 3$ edges per clause, yielding $#(nae_mc.source.instance.clauses.len()) dot 3 = #(nae_mc.source.instance.clauses.len() * 3)$ clause edges. Total: $#nae_mc.target.instance.graph.edges.len()$ edges with weights $(#nae_mc.target.instance.edge_weights.map(str).join(", "))$. + + *Step 3 -- Verify the canonical witness.* Source assignment $(#nae_mc_sol.source_config.map(str).join(", "))$: $x_1 = top, x_2 = bot, x_3 = top$. Target partition $(#nae_mc_sol.target_config.map(str).join(", "))$: vertices $v_1^+ = 1, v_1^- = 0, v_2^+ = 0, v_2^- = 1, v_3^+ = 1, v_3^- = 0$. + + All $n = #nae_mc.source.instance.num_vars$ variable edges are cut (literal pairs separated), contributing $#nae_mc.source.instance.num_vars dot #(nae_mc.source.instance.clauses.len() + 1) = #(nae_mc.source.instance.num_vars * (nae_mc.source.instance.clauses.len() + 1))$. + #let clause_cut_counts = nae_mc.source.instance.clauses.map(clause => { + let verts = clause.literals.map(l => { if l > 0 { 2 * (l - 1) } else { 2 * (-l - 1) + 1 } }) + let sides = verts.map(v => nae_mc_sol.target_config.at(v)) + let cuts = if sides.at(0) != sides.at(1) { 1 } else { 0 } + if sides.at(0) != sides.at(2) { 1 } else { 0 } + if sides.at(1) != sides.at(2) { 1 } else { 0 } + cuts + }) + #for (j, clause) in nae_mc.source.instance.clauses.enumerate() [ + - $C_#(j+1)$: #clause_cut_counts.at(j) of 3 clause edges cut (literals on both sides) #sym.checkmark + ] + Total cut value: $#(nae_mc.source.instance.num_vars * (nae_mc.source.instance.clauses.len() + 1)) + #clause_cut_counts.sum() = #(nae_mc.source.instance.num_vars * (nae_mc.source.instance.clauses.len() + 1) + clause_cut_counts.sum())$ #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. 
The complementary assignment (flipping all variables) also NAE-satisfies the formula and yields an equally valid cut. + ], +)[ Each variable $x_i$ becomes a literal pair $(v_i^+, v_i^-)$ joined by a heavy edge of weight $M = m+1$. Clause edges of weight 1 connect literal vertices within each clause. The optimal cut separates each literal pair and has at least one literal on each side per clause, encoding NAE-satisfiability. ][ _Construction._ Given $n$ variables and $m$ clauses. For each variable $x_i$: vertices $v_i^+ = 2(i-1)$ and $v_i^- = 2(i-1)+1$ with variable edge of weight $M = m + 1$. For each clause $C_j$: unit-weight edges between all pairs of literal vertices in $C_j$. @@ -13719,16 +13941,6 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Decode Lehmer code; scan left to right incrementing group index at each filler. ] -#reduction-rule("ThreePartition", "FlowShopScheduling")[ - On 3 machines, $m-1$ separator jobs with large machine-2 tasks force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. -][ - _Construction._ Given $(S, B)$ with $|S| = 3m$ and $L = m B + 1$. Element jobs: task lengths $(a_i, a_i, a_i)$ on machines 1, 2, 3. Separator jobs ($m-1$): task lengths $(0, L, 0)$. Deadline $D$ computed from canonical schedule. - - _Correctness._ ($arrow.r.double$) A valid 3-partition interleaves element groups with separators, meeting the deadline. ($arrow.l.double$) Separators on machine 2 create $m$ windows of capacity $B$; element tasks fill exactly; size constraints give 3 per window. - - _Solution extraction._ Decode Lehmer code; walk job sequence incrementing group at each separator. 
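The NAE-SAT-to-MaxCut gadget can be replayed in a few lines. The sketch below uses signed-integer literals and a plain weighted edge list, both illustrative rather than the fixture schema.

```python
def naesat_to_maxcut(num_vars, clauses):
    """Vertices 2i and 2i+1 form the literal pair for variable i+1,
    joined by a heavy edge of weight M = m + 1; unit edges connect
    all literal pairs inside each clause."""
    m = len(clauses)
    M = m + 1
    edges = [((2 * i, 2 * i + 1), M) for i in range(num_vars)]
    def vertex(lit):
        i = abs(lit) - 1
        return 2 * i if lit > 0 else 2 * i + 1
    for c in clauses:
        vs = [vertex(l) for l in c]
        edges += [((vs[a], vs[b]), 1)
                  for a in range(3) for b in range(a + 1, 3)]
    return edges

def cut_value(edges, side):
    return sum(w for (u, v), w in edges if side[u] != side[v])

clauses = [(1, 2, 3), (-1, -2, 3)]
edges = naesat_to_maxcut(3, clauses)
# NAE assignment x1=1, x2=0, x3=1: each true literal's vertex on side 1.
side = [1, 0, 0, 1, 1, 0]
# All 3 heavy edges cut (3 * M = 9) plus 2 unit edges per clause = 13.
assert cut_value(edges, side) == 13
```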
-] - #let tp_jss = load-example("ThreePartition", "JobShopScheduling") #let tp_jss_sol = tp_jss.solutions.at(0) #reduction-rule("ThreePartition", "JobShopScheduling", From 770e29d635bef07129f997c16c3a215b8469170a Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 12:03:30 +0000 Subject: [PATCH 18/23] docs(paper): add worked example for MinimumVertexCover -> MinimumFeedbackArcSet reduction Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 48 ++++++++++++++++++++++++++++++--------- 1 file changed, 37 insertions(+), 11 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index e827e1447..7b4d48cb6 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13331,7 +13331,43 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Task $t_i$ starting at time $s_i$: assign to subset 0 if $s_i <= h$, else subset 1. ] -#reduction-rule("MinimumVertexCover", "MinimumFeedbackArcSet")[ +#let mvc_mfas = load-example("MinimumVertexCover", "MinimumFeedbackArcSet") +#let mvc_mfas_sol = mvc_mfas.solutions.at(0) +#let mvc_mfas_cover = mvc_mfas_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) +#let mvc_mfas_fas = mvc_mfas_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i) +#reduction-rule("MinimumVertexCover", "MinimumFeedbackArcSet", + example: true, + example-caption: [Triangle graph ($n = #graph-num-vertices(mvc_mfas.source.instance)$, $|E| = #graph-num-edges(mvc_mfas.source.instance)$): VC $arrow.r$ FAS via vertex splitting], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(mvc_mfas.source) + " -o mvc.json", + "pred reduce mvc.json --to " + target-spec(mvc_mfas) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate mvc.json --config " + mvc_mfas_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The source graph $G$ has $n = 
#graph-num-vertices(mvc_mfas.source.instance)$ vertices and $|E| = #graph-num-edges(mvc_mfas.source.instance)$ edges: $E = {#mvc_mfas.source.instance.graph.edges.map(e => "(" + e.map(str).join(", ") + ")").join(", ")}$ with weights $bold(w) = (#mvc_mfas.source.instance.weights.map(str).join(", "))$. + + *Step 2 -- Construction.* Each vertex $v$ splits into $v^"in"$ and $v^"out"$, yielding $2n = #mvc_mfas.target.instance.graph.num_vertices$ nodes in the target digraph $H$. Internal arcs $(v^"in", v^"out")$ carry weight $w(v)$: #{ + let arcs = mvc_mfas.target.instance.graph.arcs + let ws = mvc_mfas.target.instance.weights + let n = mvc_mfas.source.instance.graph.num_vertices + arcs.enumerate().filter(((i, a)) => i < n).map(((i, a)) => "(" + str(a.at(0)) + ", " + str(a.at(1)) + ") w=" + str(ws.at(i))).join(", ") + }. Each undirected edge produces two crossing arcs with weight $M = 1 + sum w(v) = #mvc_mfas.target.instance.weights.slice(mvc_mfas.source.instance.graph.num_vertices).at(0)$: #{ + let arcs = mvc_mfas.target.instance.graph.arcs + let ws = mvc_mfas.target.instance.weights + let n = mvc_mfas.source.instance.graph.num_vertices + arcs.enumerate().filter(((i, a)) => i >= n).map(((i, a)) => "(" + str(a.at(0)) + ", " + str(a.at(1)) + ") w=" + str(ws.at(i))).join(", ") + }. Total: #mvc_mfas.target.instance.graph.arcs.len() arcs ($#mvc_mfas.source.instance.graph.num_vertices$ internal + $#(mvc_mfas.target.instance.graph.arcs.len() - mvc_mfas.source.instance.graph.num_vertices)$ crossing). + + *Step 3 -- Verify a solution.* The source cover is $C = {#mvc_mfas_cover.map(str).join(", ")}$ (size #mvc_mfas_cover.len()). The target FAS selects arcs at indices ${#mvc_mfas_fas.map(str).join(", ")}$, which are the internal arcs #{ + let arcs = mvc_mfas.target.instance.graph.arcs + mvc_mfas_fas.map(i => "(" + str(arcs.at(i).at(0)) + ", " + str(arcs.at(i).at(1)) + ")").join(", ") + } -- exactly the internal arcs of cover vertices $v^"in" arrow.r v^"out"$ for $v in C$. 
No crossing arc (weight $M = #mvc_mfas.target.instance.weights.slice(mvc_mfas.source.instance.graph.num_vertices).at(0)$) is selected, confirming the optimal FAS uses only internal arcs #sym.checkmark + + *Multiplicity:* The fixture stores one canonical witness. By symmetry of the triangle, any two-vertex cover is optimal. + ], +)[ Each vertex $v$ splits into $v^"in"$ and $v^"out"$ joined by an internal arc weighted $w(v)$. Each edge becomes two crossing arcs weighted $M = 1 + sum_v w(v)$. The optimal FAS never includes crossing arcs; selecting internal arcs for cover vertices breaks every cycle. ][ _Construction._ Given $(G, w)$ with $G = (V, E)$, $n = |V|$. Build directed graph $H$ on $2n$ nodes. Internal arcs $(v^"in", v^"out")$ with weight $w(v)$. For each ${u,v} in E$: crossing arcs $(u^"out", v^"in")$ and $(v^"out", u^"in")$ with weight $M = 1 + sum_(v in V) w(v)$. @@ -13719,16 +13755,6 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Decode Lehmer code; scan left to right incrementing group index at each filler. ] -#reduction-rule("ThreePartition", "FlowShopScheduling")[ - On 3 machines, $m-1$ separator jobs with large machine-2 tasks force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. -][ - _Construction._ Given $(S, B)$ with $|S| = 3m$ and $L = m B + 1$. Element jobs: task lengths $(a_i, a_i, a_i)$ on machines 1, 2, 3. Separator jobs ($m-1$): task lengths $(0, L, 0)$. Deadline $D$ computed from canonical schedule. - - _Correctness._ ($arrow.r.double$) A valid 3-partition interleaves element groups with separators, meeting the deadline. ($arrow.l.double$) Separators on machine 2 create $m$ windows of capacity $B$; element tasks fill exactly; size constraints give 3 per window. - - _Solution extraction._ Decode Lehmer code; walk job sequence incrementing group at each separator. 
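The vertex-splitting construction for VertexCover-to-FeedbackArcSet is easy to check mechanically. This sketch (hypothetical layout; arc indices follow the convention in the steps above) removes the internal arcs of a cover and verifies acyclicity with Kahn's algorithm.

```python
def vc_to_fas(n, edges, weights):
    """Split vertex v into v_in = v and v_out = n + v. Internal arc
    (v, n + v) carries w(v); each undirected edge {u, v} yields crossing
    arcs (u_out, v_in) and (v_out, u_in) of prohibitive weight M."""
    M = 1 + sum(weights)
    arcs = [((v, n + v), weights[v]) for v in range(n)]
    for u, v in edges:
        arcs += [((n + u, v), M), ((n + v, u), M)]
    return arcs

def is_acyclic(num_nodes, arcs, removed):
    """Kahn's algorithm on the digraph minus the removed arc indices."""
    adj = [[] for _ in range(num_nodes)]
    indeg = [0] * num_nodes
    for i, ((u, v), _) in enumerate(arcs):
        if i not in removed:
            adj[u].append(v)
            indeg[v] += 1
    queue = [u for u in range(num_nodes) if indeg[u] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == num_nodes

# Triangle with unit weights: cover {0, 1} -> remove internal arcs 0 and 1.
arcs = vc_to_fas(3, [(0, 1), (0, 2), (1, 2)], [1, 1, 1])
fas = {0, 1}
assert is_acyclic(6, arcs, fas)            # all cycles broken
assert sum(arcs[i][1] for i in fas) == 2   # cost 2, far below M = 4
```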
-] - #let tp_jss = load-example("ThreePartition", "JobShopScheduling") #let tp_jss_sol = tp_jss.solutions.at(0) #reduction-rule("ThreePartition", "JobShopScheduling", From 8c5ccef9e9bc04851a7168b20ca33db5d734942b Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 12:05:37 +0000 Subject: [PATCH 19/23] docs(paper): add worked example for Satisfiability -> NAESatisfiability reduction Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 3ff1ba9d8..15a16bb55 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -10084,6 +10084,45 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Return the values of the circuit input variables $x_1, dots, x_n$. ] +#let sat_nae = load-example("Satisfiability", "NAESatisfiability") +#let sat_nae_sol = sat_nae.solutions.at(0) +#reduction-rule("Satisfiability", "NAESatisfiability", + example: true, + example-caption: [$n = #sat_nae.source.instance.num_vars$ variables, $m = #sat_nae.source.instance.clauses.len()$ clauses], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(sat_nae.source) + " -o sat.json", + "pred reduce sat.json --to " + target-spec(sat_nae) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate sat.json --config " + sat_nae_sol.source_config.map(str).join(","), + ) + + #{ + let fmt-lit(l) = if l > 0 { $x_#l$ } else { $not x_#(-l)$ } + let fmt-clause(c) = $paren.l #c.literals.map(fmt-lit).join($, $) paren.r$ + let src = sat_nae.source.instance + let tgt = sat_nae.target.instance + [ + *Step 1 -- Source instance.* The SAT formula has $n = #src.num_vars$ variables and $m = #src.clauses.len()$ clauses: $phi = #src.clauses.map(fmt-clause).join($and.big$)$. 
+
+    *Step 2 -- Construct target NAE-SAT.* Introduce sentinel variable $s = x_#(src.num_vars + 1)$ (index #(src.num_vars + 1)). Append $s$ to every clause, yielding #tgt.num_vars variables and #tgt.clauses.len() NAE clauses: $phi' = #tgt.clauses.map(fmt-clause).join($and.big$)$. The overhead is $+1$ variable and $+#src.clauses.len()$ total literals (one sentinel literal per clause).
+
+    *Step 3 -- Verify a solution.* The canonical source assignment is $(#sat_nae_sol.source_config.map(str).join(", "))$, i.e. $x_1 = #sat_nae_sol.source_config.at(0)$, $x_2 = #sat_nae_sol.source_config.at(1)$, $x_3 = #sat_nae_sol.source_config.at(2)$. In the target, the sentinel is set to $s = #sat_nae_sol.target_config.at(src.num_vars)$, giving the full target config $(#sat_nae_sol.target_config.map(str).join(", "))$. Each NAE clause contains at least one true and one false literal: the sentinel $s = 0$ provides the false literal, and the original satisfying assignment provides the true literal #sym.checkmark
+
+    *Multiplicity:* The fixture stores one canonical witness. By NAE symmetry, the complement $(#sat_nae_sol.target_config.map(v => str(1 - v)).join(", "))$ (with $s = 1$) is also NAE-satisfying.
+    ]
+  }
+ ],
+)[
+ Introduce a fresh sentinel variable $s$ and append it to every clause. The sentinel forces NAE-SAT to simulate ordinary disjunction.
+][
+ _Construction._ Given SAT instance $phi$ with variables $x_1, dots, x_n$ and clauses $C_1, dots, C_m$, introduce $s$ (index $n+1$). For each clause $C_j = (ell_1 or dots or ell_k)$, form NAE clause $C'_j = (ell_1, dots, ell_k, s)$.
+
+ _Correctness._ ($arrow.r.double$) Set $s = 0$. A satisfying assignment makes at least one $ell_i = 1$ per clause; with $s = 0$, the clause is neither all-true nor all-false. ($arrow.l.double$) If $beta$ NAE-satisfies $phi'$, let $v = beta(s)$. If $v = 0$, at least one literal per clause is true. If $v = 1$, flip all variables; by NAE symmetry the complement also works.
+ + _Solution extraction._ If $beta(s) = 0$, return $(beta(x_1), dots, beta(x_n))$. If $beta(s) = 1$, return $(1 - beta(x_1), dots, 1 - beta(x_n))$. +] + #let cs_sg = load-example("CircuitSAT", "SpinGlass") #let cs_sg_sol = cs_sg.solutions.at(0) #reduction-rule("CircuitSAT", "SpinGlass", From 107868f3b028dc4d1e1bd0f520484020df5e79a9 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 12:05:56 +0000 Subject: [PATCH 20/23] docs(paper): add worked example for ExactCoverBy3Sets -> MaximumSetPacking reduction Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 32 +++++++++++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 941ab2d33..2423f4402 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13773,7 +13773,37 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Normalize bin labels: element $i$ in subset 0 if $b_i = b_0$, else subset 1. 
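The sentinel construction and the flip-on-s=1 extraction can be exercised with a small sketch, using signed-integer literals and 1-indexed assignments (both illustrative).

```python
def sat_to_naesat(num_vars, clauses):
    """Append sentinel variable s = num_vars + 1 to every clause."""
    s = num_vars + 1
    return num_vars + 1, [tuple(c) + (s,) for c in clauses]

def nae_satisfied(clauses, assignment):
    """NAE: each clause has at least one true and one false literal."""
    def val(lit):
        v = assignment[abs(lit)]
        return v if lit > 0 else 1 - v
    return all(0 < sum(val(l) for l in c) < len(c) for c in clauses)

clauses = [(1, -2), (2, 3)]
n2, nae = sat_to_naesat(3, clauses)
beta = {1: 1, 2: 0, 3: 1, 4: 0}   # satisfying SAT assignment, sentinel s = 0
assert nae_satisfied(nae, beta)
flipped = {i: 1 - v for i, v in beta.items()}
assert nae_satisfied(nae, flipped)  # NAE symmetry: the complement also works
```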
] -#reduction-rule("ExactCoverBy3Sets", "MaximumSetPacking")[ +#let x3c_msp = load-example("ExactCoverBy3Sets", "MaximumSetPacking") +#let x3c_msp_sol = x3c_msp.solutions.at(0) +#reduction-rule("ExactCoverBy3Sets", "MaximumSetPacking", + example: true, + example-caption: [#x3c_msp.source.instance.subsets.len() subsets over $3q = #x3c_msp.source.instance.universe_size$ elements], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(x3c_msp.source) + " -o x3c.json", + "pred reduce x3c.json --to " + target-spec(x3c_msp) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate x3c.json --config " + x3c_msp_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The X3C instance has universe $X = {0, dots, #(x3c_msp.source.instance.universe_size - 1)}$ with $q = #(x3c_msp.source.instance.universe_size / 3)$ and $#x3c_msp.source.instance.subsets.len()$ candidate triples: + #for (i, s) in x3c_msp.source.instance.subsets.enumerate() [ + $S_#i = {#s.map(str).join(", ")}$#if i < x3c_msp.source.instance.subsets.len() - 1 [, ] else [.] + ] + + *Step 2 -- Construct the target.* The identity map copies each triple as a unit-weight set: $#x3c_msp.target.instance.sets.len()$ sets with weights $(#x3c_msp.target.instance.weights.map(str).join(", "))$. The target asks for a maximum packing of pairwise-disjoint sets. 
+
+    *Step 3 -- Verify the canonical witness.* Source config $(#x3c_msp_sol.source_config.map(str).join(", "))$ selects subsets ${#x3c_msp_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$:
+    #let selected = x3c_msp_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i)
+    #for idx in selected [
+      - $S_#idx = {#x3c_msp.source.instance.subsets.at(idx).map(str).join(", ")}$
+    ]
+    These $#selected.len()$ triples are pairwise disjoint and cover all $#x3c_msp.source.instance.universe_size = 3 dot #(x3c_msp.source.instance.universe_size / 3)$ elements #sym.checkmark \
+    Target config is identical: $(#x3c_msp_sol.target_config.map(str).join(", "))$ — packing value $= #selected.len() = q$ #sym.checkmark
+
+    *Multiplicity:* The fixture stores one canonical witness. For this instance the exact cover is unique: no other pair of the candidate triples is both disjoint and covers all 6 elements.
+ ],
+)[
 The identity map embeds exact cover as set packing: unit-weight 3-element subsets with a $3q$-element universe. A maximum packing of $q$ disjoint sets is an exact cover.
][
 _Construction._ Given $(X, cal(C))$ with $|X| = 3q$, the MaximumSetPacking instance is $(X, cal(C))$ with unit weights.

From 62185249da86ae2b435cb574090ea1c470007897 Mon Sep 17 00:00:00 2001
From: zazabap
Date: Sat, 4 Apr 2026 12:06:47 +0000
Subject: [PATCH 21/23] docs(paper): add worked example for KSatisfiability ->
 MinimumVertexCover reduction

Also remove duplicate ThreePartition -> FlowShopScheduling entry that
caused a duplicate label compilation error.
Co-Authored-By: Claude Opus 4.6 (1M context)
---
 docs/paper/reductions.typ | 77 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 75 insertions(+), 2 deletions(-)

diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ
index 2423f4402..5b492ad18 100644
--- a/docs/paper/reductions.typ
+++ b/docs/paper/reductions.typ
@@ -13422,7 +13422,38 @@ The following table shows concrete variable overhead for example instances, take
 _Solution extraction._ If $beta(s) = 0$, return $(beta(x_1), dots, beta(x_n))$. If $beta(s) = 1$, return $(1 - beta(x_1), dots, 1 - beta(x_n))$.
 ]
 
-#reduction-rule("KSatisfiability", "MinimumVertexCover")[
+#let ksat_mvc = load-example("KSatisfiability", "MinimumVertexCover")
+#let ksat_mvc_sol = ksat_mvc.solutions.at(0)
+#reduction-rule("KSatisfiability", "MinimumVertexCover",
+  example: true,
+  example-caption: [3-SAT with $n = #ksat_mvc.source.instance.num_vars$ variables, $m = #sat-num-clauses(ksat_mvc.source.instance)$ clauses],
+  extra: [
+    #pred-commands(
+      "pred create --example " + problem-spec(ksat_mvc.source) + " -o ksat.json",
+      "pred reduce ksat.json --to " + target-spec(ksat_mvc) + " -o bundle.json",
+      "pred solve bundle.json",
+      "pred evaluate ksat.json --config " + ksat_mvc_sol.source_config.map(str).join(","),
+    )
+
+    *Step 1 -- Source instance.* The 3-SAT formula has $n = #ksat_mvc.source.instance.num_vars$ variables and $m = #sat-num-clauses(ksat_mvc.source.instance)$ clauses: #{ksat_mvc.source.instance.clauses.enumerate().map(((j, c)) => {
+      let lits = c.literals.map(l => if l > 0 { $x_#l$ } else { $overline(x)_#calc.abs(l)$ })
+      [$c_#j = (#lits.join($or$))$]
+    }).join(", ")}. A satisfying assignment is $(#ksat_mvc_sol.source_config.map(str).join(", "))$, i.e. #{range(ksat_mvc.source.instance.num_vars).map(i => {
+      let v = ksat_mvc_sol.source_config.at(i)
+      if v == 1 { $x_#(i+1) = 1$ } else { $x_#(i+1) = 0$ }
+    }).join(", ")}.
+ + *Step 2 -- Truth-setting edges.* For each variable $x_i$, create vertices $u_i$ (index $2(i-1)$) and $overline(u)_i$ (index $2(i-1)+1$) connected by a truth-setting edge. This gives $2n = #(2 * ksat_mvc.source.instance.num_vars)$ literal vertices and $n = #ksat_mvc.source.instance.num_vars$ edges. + + *Step 3 -- Clause triangles and communication edges.* For each clause $c_j$, create a triangle of 3 vertices at indices $2n + 3j, 2n + 3j + 1, 2n + 3j + 2$, connected by 3 internal edges. Each triangle vertex $t^j_k$ is also connected to its literal vertex by a communication edge (3 per clause). Total: $3m = #(3 * sat-num-clauses(ksat_mvc.source.instance))$ clause vertices, $3m = #(3 * sat-num-clauses(ksat_mvc.source.instance))$ triangle edges, $3m = #(3 * sat-num-clauses(ksat_mvc.source.instance))$ communication edges. + + *Step 4 -- Target graph dimensions.* The resulting graph has $|V| = 2n + 3m = #ksat_mvc.target.instance.graph.num_vertices$ vertices and $|E| = n + 6m = #ksat_mvc.target.instance.graph.edges.len()$ edges, with unit weights. + + *Step 5 -- Verify a solution.* The satisfying assignment $(#ksat_mvc_sol.source_config.map(str).join(", "))$ maps to a vertex cover of size $n + 2m = #(ksat_mvc.source.instance.num_vars + 2 * sat-num-clauses(ksat_mvc.source.instance))$. The target configuration is $(#ksat_mvc_sol.target_config.map(str).join(", "))$: the cover selects #ksat_mvc_sol.target_config.filter(x => x == 1).len() vertices. For each truth-setting edge, exactly one endpoint is in the cover #sym.checkmark. For each clause triangle, exactly two of three vertices are covered #sym.checkmark. Each communication edge has at least one endpoint in the cover #sym.checkmark. + + *Multiplicity:* The fixture stores one canonical witness. Other valid covers correspond to different satisfying assignments of the formula. + ], +)[ Each variable contributes a truth-setting edge; each clause contributes a satisfaction-testing triangle. 
The formula is satisfiable iff the graph has a vertex cover of size $n + 2m$. ][ _Construction._ Given 3-CNF $phi$ with $n$ variables and $m$ clauses, construct $G = (V, E)$ with $|V| = 2n + 3m$. For each variable $x_i$: vertices $u_i$ (index $2i$) and $overline(u)_i$ (index $2i+1$) with edge $(u_i, overline(u)_i)$. For each clause $c_j$: triangle vertices $t^j_0, t^j_1, t^j_2$ at indices $2n + 3j, 2n+3j+1, 2n+3j+2$. Communication edges connect each $t^j_k$ to the literal vertex of its $k$-th literal. @@ -13657,7 +13688,49 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Identify used connector edges and follow the successor map from vertex 0. ] -#reduction-rule("MaximumIndependentSet", "IntegralFlowBundles")[ +#let mis_ifb = load-example("MaximumIndependentSet", "IntegralFlowBundles") +#let mis_ifb_sol = mis_ifb.solutions.at(0) +#reduction-rule("MaximumIndependentSet", "IntegralFlowBundles", + example: true, + example-caption: [Path graph $P_#mis_ifb.source.instance.graph.num_vertices$ ($n = #mis_ifb.source.instance.graph.num_vertices$, $|E| = #mis_ifb.source.instance.graph.edges.len()$)], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(mis_ifb.source) + " -o mis.json", + "pred reduce mis.json --to " + target-spec(mis_ifb) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate mis.json --config " + mis_ifb_sol.source_config.map(str).join(","), + ) + + #{ + let graph = mis_ifb.source.instance.graph + let n = graph.num_vertices + let verts = range(n).map(k => (k * 1.5, 0)) + let is-in-set = mis_ifb_sol.source_config.map(c => c > 0) + let blue = graph-colors.at(0) + align(center, canvas(length: 0.8cm, { + import draw: * + for (u, v) in graph.edges { + g-edge(verts.at(u), verts.at(v)) + } + for (k, pos) in verts.enumerate() { + g-node(pos, name: str(k), + fill: if is-in-set.at(k) { blue } else { white }, + label: [$v_#k$]) + } + })) + } + + *Step 1 -- Source instance.* The path 
graph $P_#mis_ifb.source.instance.graph.num_vertices$ has $n = #mis_ifb.source.instance.graph.num_vertices$ vertices and edges ${#mis_ifb.source.instance.graph.edges.map(e => "(" + str(e.at(0)) + "," + str(e.at(1)) + ")").join(", ")}$, with unit weights $(#mis_ifb.source.instance.weights.map(str).join(", "))$. + + *Step 2 -- Build the flow network.* The reduction creates a directed graph with $#mis_ifb.target.instance.graph.num_vertices$ nodes: source $s = #mis_ifb.target.instance.source$, intermediates $w_0, dots, w_(#(mis_ifb.source.instance.graph.num_vertices - 1))$, and sink $t = #mis_ifb.target.instance.sink$. There are $#mis_ifb.target.instance.graph.arcs.len()$ arcs ($2n = #(2 * mis_ifb.source.instance.graph.num_vertices)$): for each vertex $v_i$, arc $a_i^"in" = (s, w_i)$ at index $2i$ and $a_i^"out" = (w_i, t)$ at index $2i+1$. + + *Step 3 -- Create bundles.* There are $#mis_ifb.target.instance.bundles.len()$ bundles total. For each original edge ${v_i, v_j} in E$, an edge bundle ${a_i^"out", a_j^"out"}$ with capacity 1 enforces that at most one endpoint is selected (#mis_ifb.source.instance.graph.edges.len() edge bundles). For each vertex $v_i$, a vertex bundle ${a_i^"in", a_i^"out"}$ with capacity 2 links the in/out arcs (#mis_ifb.source.instance.graph.num_vertices vertex bundles). Bundle capacities: $(#mis_ifb.target.instance.bundle_capacities.map(str).join(", "))$. Flow requirement $R = #mis_ifb.target.instance.requirement$. + + *Step 4 -- Verify a solution.* The canonical IS selects vertices ${#mis_ifb_sol.source_config.enumerate().filter(((i, x)) => x > 0).map(((i, x)) => $v_#i$).join(", ")}$ (config $(#mis_ifb_sol.source_config.map(str).join(", "))$). Each selected vertex $v_i$ contributes flow 1 on arcs $a_i^"in"$ and $a_i^"out"$, giving target config $(#mis_ifb_sol.target_config.map(str).join(", "))$. The total flow equals the IS size (#mis_ifb_sol.source_config.fold(0, (a, b) => a + b)). 
Every edge bundle is satisfied because no two adjacent vertices are both selected, and vertex bundles are satisfied with capacity 2 $>=$ individual flow of 1. + + *Multiplicity:* The fixture stores one canonical witness. The path $P_#mis_ifb.source.instance.graph.num_vertices$ admits larger independent sets (e.g., ${v_0, v_2}$ or ${v_0, v_3}$), but the canonical witness suffices to demonstrate the reduction. + ], +)[ Each vertex $v_i$ maps to a flow unit through an intermediate node; edge-bundle constraints cap combined outflow of adjacent pairs at 1, so feasible flow of value $k$ exists iff an independent set of size $>= k$ exists. ][ _Construction._ Directed graph on $n + 2$ nodes: source $s$, intermediates $w_0, dots, w_(n-1)$, sink $t$. Arcs $a_i^"in" = (s, w_i)$ (index $2i$) and $a_i^"out" = (w_i, t)$ (index $2i+1$). Edge bundles ${a_i^"out", a_j^"out"}$ with capacity 1 for each ${v_i, v_j} in E$. Vertex bundles ${a_i^"in", a_i^"out"}$ with capacity 2. Flow requirement $R = 1$. From eef8a899543e979c044d72d0f8ffbfe10e604994 Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 12:07:06 +0000 Subject: [PATCH 22/23] docs(paper): add worked example for MaxCut -> MinimumCutIntoBoundedSets reduction Also remove duplicate ThreePartition -> FlowShopScheduling prose entry that was left over from a merge conflict. Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 35 ++++++++++++++++++++++++----------- 1 file changed, 24 insertions(+), 11 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index e827e1447..0074f34ef 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13719,16 +13719,6 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Decode Lehmer code; scan left to right incrementing group index at each filler. 
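
> A minimal Python sketch of the MaximumIndependentSet → IntegralFlowBundles construction described in the rule body above (source `s`, intermediates `w_i`, sink `t`, arc indices `2i`/`2i+1`, edge bundles of capacity 1, vertex bundles of capacity 2, requirement `R = 1`). The dict layout is illustrative only — it is not the `pred` JSON schema — and the `P_4` usage example is a hypothetical instance, not the fixture loaded by `load-example`.

```python
# Sketch of the MIS -> IntegralFlowBundles construction from the rule body.
# The output dict is an illustrative representation, not the pred schema.

def mis_to_flow_bundles(n, edges):
    """n vertices, edges as (i, j) pairs; returns a bundle-flow instance."""
    source, sink = n, n + 1            # node ids for s and t
    arcs = []
    for i in range(n):
        arcs.append((source, i))       # a_i^in  at index 2i
        arcs.append((i, sink))         # a_i^out at index 2i+1
    bundles, caps = [], []
    for (i, j) in edges:               # edge bundle: at most one endpoint
        bundles.append([2 * i + 1, 2 * j + 1])
        caps.append(1)
    for i in range(n):                 # vertex bundle: link in/out arcs
        bundles.append([2 * i, 2 * i + 1])
        caps.append(2)
    return {"arcs": arcs, "bundles": bundles,
            "bundle_capacities": caps, "requirement": 1}

# Hypothetical path P_4: 2n = 8 arcs, |E| + n = 7 bundles, as in the text.
inst = mis_to_flow_bundles(4, [(0, 1), (1, 2), (2, 3)])
```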
] -#reduction-rule("ThreePartition", "FlowShopScheduling")[ - On 3 machines, $m-1$ separator jobs with large machine-2 tasks force element jobs into $m$ windows of width $B$. Size constraints ensure 3 elements per window. -][ - _Construction._ Given $(S, B)$ with $|S| = 3m$ and $L = m B + 1$. Element jobs: task lengths $(a_i, a_i, a_i)$ on machines 1, 2, 3. Separator jobs ($m-1$): task lengths $(0, L, 0)$. Deadline $D$ computed from canonical schedule. - - _Correctness._ ($arrow.r.double$) A valid 3-partition interleaves element groups with separators, meeting the deadline. ($arrow.l.double$) Separators on machine 2 create $m$ windows of capacity $B$; element tasks fill exactly; size constraints give 3 per window. - - _Solution extraction._ Decode Lehmer code; walk job sequence incrementing group at each separator. -] - #let tp_jss = load-example("ThreePartition", "JobShopScheduling") #let tp_jss_sol = tp_jss.solutions.at(0) #reduction-rule("ThreePartition", "JobShopScheduling", @@ -13760,7 +13750,30 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Decode machine 0 Lehmer code; walk permutation incrementing group at each separator. 
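
> The solution-extraction step above ("walk permutation incrementing group at each separator") can be sketched as a short Python routine. The job-id convention is an assumption for illustration: element jobs are numbered `0..3m-1` and separator jobs come after them, which is consistent with, but not stated by, the entry above.

```python
# Sketch of the separator-walk extraction: scan the decoded job sequence
# and bump the group index whenever a separator job is seen.
# Assumed ids (illustrative): element jobs 0..num_elements-1, separators above.

def extract_groups(job_sequence, num_elements):
    """Map each element job id to its 3-partition group index."""
    groups = {}
    group = 0
    for job in job_sequence:
        if job >= num_elements:        # separator job: next window starts
            group += 1
        else:
            groups[job] = group
    return groups

# Hypothetical m = 2 instance: 6 elements, one separator job (id 6).
groups = extract_groups([0, 3, 5, 6, 1, 2, 4], 6)
# -> {0: 0, 3: 0, 5: 0, 1: 1, 2: 1, 4: 1}
```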
] -#reduction-rule("MaxCut", "MinimumCutIntoBoundedSets")[ +#let mc_mcbs = load-example("MaxCut", "MinimumCutIntoBoundedSets") +#let mc_mcbs_sol = mc_mcbs.solutions.at(0) +#reduction-rule("MaxCut", "MinimumCutIntoBoundedSets", + example: true, + example-caption: [Triangle graph ($n = #mc_mcbs.source.instance.graph.num_vertices$, $|E| = #mc_mcbs.source.instance.graph.edges.len()$, unit weights) mapped to $K_#mc_mcbs.target.instance.graph.num_vertices$], + extra: [ + #pred-commands( + "pred create --example " + problem-spec(mc_mcbs.source) + " -o maxcut.json", + "pred reduce maxcut.json --to " + target-spec(mc_mcbs) + " -o bundle.json", + "pred solve bundle.json", + "pred evaluate maxcut.json --config " + mc_mcbs_sol.source_config.map(str).join(","), + ) + + *Step 1 -- Source instance.* The source MaxCut instance is a triangle $G = (V, E)$ with $n = #mc_mcbs.source.instance.graph.num_vertices$ vertices, $|E| = #mc_mcbs.source.instance.graph.edges.len()$ edges, and unit weights $w = (#mc_mcbs.source.instance.edge_weights.map(str).join(", "))$. A maximum cut partitions vertices into two sides to maximize crossing-edge weight; here the optimum is $2$ (any single vertex versus the other two). + + *Step 2 -- Pad to even vertex count.* Since $n = 3$ is odd, set $n' = n + 1 = 4$. The target complete graph has $N = 2 n' = #mc_mcbs.target.instance.graph.num_vertices$ vertices, giving $#mc_mcbs.target.instance.graph.num_vertices dot (#mc_mcbs.target.instance.graph.num_vertices - 1) slash 2 = #mc_mcbs.target.instance.graph.edges.len()$ edges. + + *Step 3 -- Invert weights on $K_#mc_mcbs.target.instance.graph.num_vertices$.* Compute $w_"max" = 1 + max w(e) = 2$. For each original edge $(i, j) in E$, the inverted weight is $tilde(w)(i,j) = w_"max" - w(i,j) = 2 - 1 = 1$. All other edges (including those to padding vertices) receive weight $w_"max" = 2$. 
Designate source $s = #mc_mcbs.target.instance.source$, sink $t = #mc_mcbs.target.instance.sink$, size bound $b = #mc_mcbs.target.instance.size_bound$. The target edge weights are $(#mc_mcbs.target.instance.edge_weights.map(str).join(", "))$. + + *Step 4 -- Verify a solution.* The canonical source witness is $(#mc_mcbs_sol.source_config.map(str).join(", "))$: vertices $0, 1$ on side $0$ and vertex $2$ on side $1$, cutting $2$ of $3$ edges (max cut value $= 2$). The target witness is $(#mc_mcbs_sol.target_config.map(str).join(", "))$. Check: (1) the first $n = #mc_mcbs.source.instance.graph.num_vertices$ entries match the source partition #sym.checkmark; (2) source vertex $s = #mc_mcbs.target.instance.source$ and sink vertex $t = #mc_mcbs.target.instance.sink$ are on opposite sides #sym.checkmark; (3) each side has exactly $b = #mc_mcbs.target.instance.size_bound$ vertices (balanced bisection) #sym.checkmark. + + *Multiplicity:* The fixture stores one canonical witness. The triangle has $3$ maximum cuts of value $2$ (isolate any one vertex); padding vertices can be assigned to balance both sides, yielding multiple valid target configurations. + ], +)[ Invert edge weights relative to $w_"max"$ on a complete graph $K_N$ with $N = 2n'$. A minimum balanced bisection in the inverted graph corresponds to a maximum cut in the original. ][ _Construction._ Given $G = (V, E, w)$ with $n = |V|$. Set $n' = n + (n mod 2)$, $N = 2n'$, $w_"max" = 1 + max_(e in E) w(e)$. Build $K_N$ with $tilde(w)(i,j) = w_"max" - w(i,j)$ for edges in $E$, else $w_"max"$. Designate $s = n'$, $t = n' + 1$, bound $b = n'$. 
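
> The construction above can be sketched in a few lines of Python: pad to an even vertex count, build $K_N$ with inverted weights, and designate $s$, $t$, and the bound $b$. This is a sketch of the construction prose only (weights passed as a dict keyed by sorted vertex pairs is an assumed representation, not the `pred` format).

```python
# Sketch of the MaxCut -> MinimumCutIntoBoundedSets weight inversion.
# weights: dict mapping (i, j) with i < j to w(i, j).

def maxcut_to_bounded_cut(n, weights):
    """Return (N, inverted weights on K_N, s, t, b) per the construction."""
    n_pad = n + (n % 2)                # n' = n rounded up to even
    N = 2 * n_pad
    w_max = 1 + max(weights.values())
    w = {}
    for i in range(N):                 # default weight w_max on all of K_N
        for j in range(i + 1, N):
            w[(i, j)] = w_max
    for (i, j), wij in weights.items():
        w[(min(i, j), max(i, j))] = w_max - wij   # inverted original edges
    return N, w, n_pad, n_pad + 1, n_pad          # s = n', t = n'+1, b = n'

# Unit-weight triangle, as in the worked example: N = 8, w_max = 2,
# original edges get inverted weight 1, padding edges keep weight 2.
N, w, s, t, b = maxcut_to_bounded_cut(3, {(0, 1): 1, (0, 2): 1, (1, 2): 1})
```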
From b95f0222eb7a78b883e61d70d560b1a696efdcdb Mon Sep 17 00:00:00 2001 From: zazabap Date: Sat, 4 Apr 2026 12:12:43 +0000 Subject: [PATCH 23/23] =?UTF-8?q?fix:=20remove=20duplicate=20SAT=E2=86=92N?= =?UTF-8?q?AESatisfiability=20prose-only=20entry?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Leftover from parallel worktree merge — the data-driven entry was placed at line ~10136 by the SAT→NAE agent, but the original prose-only entry at line ~13454 was not cleaned up. Co-Authored-By: Claude Opus 4.6 (1M context) --- docs/paper/reductions.typ | 9 --------- 1 file changed, 9 deletions(-) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 883c1d296..11b2af370 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -13451,15 +13451,6 @@ The following table shows concrete variable overhead for example instances, take _Solution extraction._ Map $w in NN^m$ to $c in {0,1}^m$ by $c[j] = 1$ if $w[j] > 0$. ] -#reduction-rule("Satisfiability", "NAESatisfiability")[ - Introduce a fresh sentinel variable $s$ and append it to every clause. The sentinel forces NAE-SAT to simulate ordinary disjunction. -][ - _Construction._ Given SAT instance $phi$ with variables $x_1, dots, x_n$ and clauses $C_1, dots, C_m$, introduce $s$ (index $n+1$). For each clause $C_j = (ell_1 or dots or ell_k)$, form NAE clause $C'_j = (ell_1, dots, ell_k, s)$. - - _Correctness._ ($arrow.r.double$) Set $s = 0$. A satisfying assignment makes at least one $ell_i = 1$ per clause; with $s = 0$, the clause is neither all-true nor all-false. ($arrow.l.double$) If $beta$ NAE-satisfies $phi'$, let $v = beta(s)$. If $v = 0$, at least one literal per clause is true. If $v = 1$, flip all variables; by NAE symmetry the complement also works. - - _Solution extraction._ If $beta(s) = 0$, return $(beta(x_1), dots, beta(x_n))$. If $beta(s) = 1$, return $(1 - beta(x_1), dots, 1 - beta(x_n))$. 
-] #let ksat_mvc = load-example("KSatisfiability", "MinimumVertexCover") #let ksat_mvc_sol = ksat_mvc.solutions.at(0)