[Enhancement] GNN-guided ordering for the existing MaximumIndependentSet -> KingsSubgraph reduction #993
Description
Motivation
The repository already contains an exact reduction from MaximumIndependentSet/SimpleGraph/One to MaximumIndependentSet/KingsSubgraph/One. The proof-backed construction is correct, but its practical embedding quality depends strongly on the chosen vertex ordering and routing layout.
This issue proposes a GNN-guided ordering heuristic to improve embedding quality on many graph instances, without changing the underlying exact reduction.
Goal
Use a graph neural network to predict a good vertex ordering or layout hint for the existing copy-line and crossing-gadget construction, so that the resulting Kings-subgraph embedding is smaller and cleaner on average.
The GNN is only a heuristic oracle. It is not the reduction itself and does not carry the correctness argument.
Proposed Behavior
- Take a source graph G = (V, E).
- Run a GNN to predict either:
  - a vertex ordering pi, or
  - a small set of layout hints consumed by the existing mapper.
- Feed the predicted ordering into the current deterministic Kings-subgraph construction.
- Run deterministic structural verification on the produced embedding.
- If verification fails, or if the learned model is unavailable, fall back to the current deterministic ordering method.
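The control flow above can be sketched as a small wrapper. This is a minimal illustration only; `build`, `verify`, and `predict` are hypothetical stand-ins for the repository's real construction, structural-verification, and model APIs, not actual function names from the codebase.

```python
def map_with_learned_ordering(g, build, verify, predict=None):
    """Build a Kings-subgraph embedding from source graph `g`.

    `build(g, order)` stands in for the existing deterministic construction,
    `verify(g, emb)` for the deterministic structural check, and `predict(g)`
    for the optional learned ordering oracle. The learned path never carries
    correctness: any failure falls through to the deterministic ordering.
    """
    if predict is not None:
        try:
            emb = build(g, predict(g))
            if verify(g, emb):
                return emb, "learned"
        except Exception:
            pass  # model unavailable or broken: use the deterministic path
    order = sorted(g)  # stand-in for the current deterministic ordering
    emb = build(g, order)
    if not verify(g, emb):
        raise RuntimeError("deterministic construction failed verification")
    return emb, "deterministic"
```

The key property is that `verify` gates the learned path, so a bad prediction can only cost a fallback, never soundness.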
Correctness Requirement
Correctness must remain exactly the same as in the current reduction:
- each source vertex maps to a certified copy line,
- each source edge induces the required blocking gadget behavior,
- solution extraction maps target MIS solutions back to valid source MIS solutions,
- the learned component may change embedding quality, but never soundness.
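The source-side half of the round-trip check is simply independent-set validity. A minimal sketch (using a plain edge list rather than the repository's graph types):

```python
def is_valid_independent_set(edges, selected):
    """Return True iff `selected` is an independent set: no edge of the
    source graph has both endpoints selected. This is the soundness check
    applied to solutions extracted back from the target embedding."""
    s = set(selected)
    return all(not (u in s and v in s) for u, v in edges)
```

Optimality additionally requires comparing the extracted solution size against a known optimum, which the Validation section below addresses.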
Metrics To Improve
Compare against the current automatic ordering using:
- target num_vertices
- target num_edges
- grid area
- number of crossing gadgets
- total MIS overhead Delta
- mapping runtime
- fallback rate
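One way to record these per-instance metrics is a small dataclass plus a signed-difference helper; the field names here are illustrative, not an existing schema in the repository.

```python
from dataclasses import dataclass, asdict

@dataclass
class EmbeddingMetrics:
    # size/quality metrics for one produced Kings-subgraph embedding
    num_vertices: int       # target graph vertex count
    num_edges: int          # target graph edge count
    grid_area: int          # bounding-box area of the grid layout
    crossing_gadgets: int   # number of crossing gadgets inserted
    mis_overhead: int       # total MIS overhead Delta
    runtime_s: float        # mapping runtime in seconds
    fallback: bool          # whether the deterministic fallback was used

def delta(baseline: EmbeddingMetrics, learned: EmbeddingMetrics) -> dict:
    """Signed per-metric difference (learned - baseline); negative values
    mean the learned ordering produced a smaller embedding."""
    b, l = asdict(baseline), asdict(learned)
    return {k: l[k] - b[k] for k in b if k != "fallback"}
```

Aggregating these dictionaries across the benchmark set gives both the averages and the per-instance wins/losses the reporting requirement asks for.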
Benchmark Set
Evaluate on many instances, not a single hand-picked example:
- named small graphs already used in tests: bull, diamond, house, petersen, cubical
- Erdos-Renyi graphs at multiple densities
- random regular graphs
- Barabasi-Albert graphs
- sparse structured graphs
Use multiple graph sizes and multiple random seeds.
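A benchmark suite along these lines can be generated with networkx (assuming networkx is acceptable for the evaluation harness; the size and seed choices below are placeholders):

```python
import networkx as nx

NAMED = {
    "bull": nx.bull_graph,
    "diamond": nx.diamond_graph,
    "house": nx.house_graph,
    "petersen": nx.petersen_graph,
    "cubical": nx.cubical_graph,
}

def benchmark_suite(sizes=(20, 40), seeds=(0, 1, 2)):
    """Named test graphs plus seeded random families at multiple sizes."""
    suite = [(name, gen()) for name, gen in NAMED.items()]
    for n in sizes:            # n must be even for the 3-regular family
        for seed in seeds:
            suite.append((f"er_{n}_p0.1_s{seed}", nx.erdos_renyi_graph(n, 0.1, seed=seed)))
            suite.append((f"er_{n}_p0.3_s{seed}", nx.erdos_renyi_graph(n, 0.3, seed=seed)))
            suite.append((f"reg3_{n}_s{seed}", nx.random_regular_graph(3, n, seed=seed)))
            suite.append((f"ba_{n}_m2_s{seed}", nx.barabasi_albert_graph(n, 2, seed=seed)))
    return suite
```

Fixing the seeds keeps every comparison between the learned and deterministic orderings on identical instances.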
Acceptance Criteria
- no correctness regressions on existing round-trip and mapping tests
- extracted source solutions remain valid and optimal on all checked instances
- learned ordering is optional and always has deterministic fallback
- average embedding quality improves on several benchmark families
- reporting includes both wins and losses, not only best cases
Non-Goals
- adding a new reduction rule
- replacing the proof-backed construction with a learned black box
- accepting empirical success in place of deterministic verification
Validation
- compare learned ordering vs current ordering on the same benchmark suite
- verify equality of extracted source objective values after reduction
- record per-instance size metrics and aggregate statistics
- include ablation results for learned ordering, deterministic ordering, and fallback-triggered runs
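For the objective-value equality check, a brute-force exact MIS gives the ground truth on small instances. A minimal sketch (feasible only for small graphs, which is exactly the regime of the named test graphs):

```python
from itertools import combinations

def mis_size_bruteforce(nodes, edges):
    """Exact maximum-independent-set size by exhaustive search, largest
    candidate size first. Used only as a ground-truth oracle on small
    benchmark instances to confirm extracted solutions are optimal."""
    nodes = list(nodes)
    edges = [tuple(e) for e in edges]
    for k in range(len(nodes), 0, -1):
        for cand in combinations(nodes, k):
            s = set(cand)
            if all(not (u in s and v in s) for u, v in edges):
                return k
    return 0
```

Validation then asserts that the source solution extracted from the target-side MIS has exactly this size on every checked instance.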