
feat: forester dashboard + compression improvements#2310

Open
sergeytimoshin wants to merge 15 commits into main from sergey/forester-compressible

Conversation

sergeytimoshin (Contributor) commented Feb 24, 2026

  1. feat: forester dashboard

  2. fix: improve transaction handling in compressors

  • Enhanced error handling in CTokenCompressor, MintCompressor, and PdaCompressor to manage pending states more effectively.
  • Added checks to ensure accounts are marked as pending during transaction processing.

  3. chore: update epoch manager to check eligibility for compression

  • Modified dispatch_compression, dispatch_pda_compression, and dispatch_mint_compression to include eligibility checks based on the current light slot.

  4. refactor: improve account tracking with atomic counters

  • Introduced compressed_count and pending sets in account trackers for better management of compression states.
  • Updated CompressibleTracker trait to include methods for managing pending accounts and counting compressed accounts.

  5. fix: ensure proper handling of closed accounts in trackers

  • Added logic to remove closed accounts from trackers in CTokenAccountTracker, MintAccountTracker, and PdaAccountTracker.
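The pending/compressed bookkeeping described above can be sketched with std types only. This is an illustrative stand-in, not the PR's actual code: the real trackers use a lock-free DashSet, and the type and method names below (TrackerState, mark_pending, unmark_pending, remove_compressed) are modeled on the description, not copied from the source.

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;

// Sketch of a compressible tracker's state: a set of accounts with an
// in-flight compress transaction, plus a monotonic success counter.
pub struct TrackerState {
    pending: Mutex<HashSet<[u8; 32]>>,
    compressed_count: AtomicU64,
}

impl TrackerState {
    pub fn new() -> Self {
        Self {
            pending: Mutex::new(HashSet::new()),
            compressed_count: AtomicU64::new(0),
        }
    }

    // Mark accounts pending before the transaction is sent.
    pub fn mark_pending(&self, keys: &[[u8; 32]]) {
        let mut set = self.pending.lock().unwrap();
        for k in keys {
            set.insert(*k);
        }
    }

    // Roll back on send/confirmation failure so accounts are retried later.
    pub fn unmark_pending(&self, keys: &[[u8; 32]]) {
        let mut set = self.pending.lock().unwrap();
        for k in keys {
            set.remove(k);
        }
    }

    // On verified success: drop from pending and bump the counter.
    pub fn remove_compressed(&self, key: &[u8; 32]) {
        if self.pending.lock().unwrap().remove(key) {
            self.compressed_count.fetch_add(1, Ordering::Relaxed);
        }
    }

    pub fn pending_len(&self) -> usize {
        self.pending.lock().unwrap().len()
    }

    pub fn compressed(&self) -> u64 {
        self.compressed_count.load(Ordering::Relaxed)
    }
}
```

The key property is that every failure path restores the account to a retryable state, while the counter only moves on verified success.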

Summary by CodeRabbit

  • New Features

    • Photon statistics panel with periodic refresh and a resilient backend proxy
    • Balance-history trend tracking for foresters
  • UI/UX Improvements

    • Consolidated dashboard with live slot indicators, auto-refresh status, and refreshed header
    • Richer tables and panels (compressible, metrics, photon stats, epoch timeline, forester schedule legend), explorer links, and simplified layout/navigation
  • Performance

    • Improved pending/batch accounting and rate-limited, structured logging for quieter telemetry
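The "rate-limited, structured logging" item can be illustrated with a small helper that lets a given warning key fire at most once per interval. This is a hedged sketch under assumed names (WarnRateLimiter, should_log); the actual helper in the PR may be shaped differently.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

// Illustrative rate limiter: a warning identified by a static key is
// emitted at most once per `interval`; repeats inside the window are dropped.
pub struct WarnRateLimiter {
    interval: Duration,
    last: Mutex<HashMap<&'static str, Instant>>,
}

impl WarnRateLimiter {
    pub fn new(interval: Duration) -> Self {
        Self {
            interval,
            last: Mutex::new(HashMap::new()),
        }
    }

    // Returns true when the caller should emit the warning now.
    pub fn should_log(&self, key: &'static str) -> bool {
        let mut last = self.last.lock().unwrap();
        let now = Instant::now();
        let due = match last.get(key) {
            Some(prev) => now.duration_since(*prev) >= self.interval,
            None => true,
        };
        if due {
            last.insert(key, now);
        }
        due
    }
}
```

Callers wrap their warn!-style macros in `if limiter.should_log("rpc_timeout") { ... }`, which quiets telemetry without losing the first occurrence of each distinct problem.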

coderabbitai bot commented Feb 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a Postgres-backed Photon stats API and hook, rewires the dashboard into a consolidated Dashboard with new components and hooks, extends compressible trackers with pending/compressed lifecycle and verification, threads run_id and helius_rpc through bootstrap/pipeline/API server, and hardens logging/telemetry and CLI/config.

Changes

Cohort / File(s) Summary
Dashboard deps
forester/dashboard/package.json
Added pg runtime dependency and @types/pg dev dependency.
Dashboard API / Proxy
forester/dashboard/src/app/api/[...path]/route.ts, forester/dashboard/src/app/api/photon-stats/route.ts
New generic proxy route with abort/timeout handling; new /api/photon-stats route queries Postgres via a singleton pool, runs parallel aggregations, converts owners, and returns structured JSON with timeout/error handling.
Dashboard pages & layout
forester/dashboard/src/app/page.tsx, .../compressible/page.tsx, .../metrics/page.tsx, .../trees/page.tsx, forester/dashboard/src/app/layout.tsx
Consolidated Overview → Dashboard (new hooks wired), removed separate Trees/Metrics/Compressible pages and Sidebar, simplified layout.
Dashboard components
forester/dashboard/src/components/*
New PhotonStatsPanel; CompressiblePanel refactor to table layout; updates to EpochCard, ForesterList (new props and warnings), TreeTable (schedule legend, explorer links), QueuePressureChart (batch/item keys), MetricsPanel (simplified), Sidebar removed and assorted UI/UX adjustments.
Dashboard hooks & client utils
forester/dashboard/src/hooks/*, forester/dashboard/src/lib/api.ts, forester/dashboard/src/lib/utils.ts, forester/dashboard/src/types/forester.ts
New hooks usePhotonStats, useBalanceHistory; reduced SWR poll for compressible; fetcher gains abort/timeout, server-message extraction; explorerUrl, formatNumber, age/slot helpers; types extended (CompressibleResponse, PhotonStats, new stats interfaces, batch counters).
Postgres usage
forester/dashboard/src/app/api/photon-stats/route.ts
Uses PHOTON_DATABASE_URL via a pooled client; executes three parallel queries (accounts, token_accounts, onchain-by-owner), aggregates by owner and timestamps, and returns PhotonStats JSON.
Compressible trackers & lifecycle
forester/src/compressible/*, forester/src/compressible/traits.rs
Expanded CompressibleTracker trait with compressed_counter and pending DashSet; trackers (ctoken/pda/mint) now track compressed_count and pending; compressors mark/unmark pending, verify transaction execution, remove_compressed on success, and perform on-chain existence checks.
Bootstrap / Helius RPC routing
forester/src/compressible/bootstrap_helpers.rs, .../ctoken/bootstrap.rs, .../pda/bootstrap.rs, .../mint/bootstrap.rs
Introduced use_helius_rpc and threaded helius_rpc flag to select standard vs Helius V2 getProgramAccounts APIs; updated bootstrap signatures and call sites.
API server, run_id & tracker init
forester/src/api_server.rs, forester/src/lib.rs, forester/src/main.rs
API server extended with run_id and upstream handling, richer compressible snapshot aggregation; added CompressibleTrackerHandles, initialize_compressible_trackers, and run_pipeline_with_run_id to support preconfigured trackers and run_id propagation.
CLI & config
forester/src/cli.rs, forester/src/config.rs, forester/tests/*
Made indexer_url required (Option→String); added helius_rpc flag and forester_api_urls; tests updated to pass new args/defaults.
Logging & telemetry
forester/src/logging.rs, forester/src/telemetry.rs, various processor/*, tree_data_sync.rs
Added ServiceHeartbeat and HeartbeatSnapshot; rate-limited warning helper; many logs converted to structured events including run_id; file JSON logging layer and compact stdout.
Processor & v2 plumbing
forester/src/processor/v2/*, forester/src/processor/v1/helpers.rs
Added run_id to BatchContext; reduced verbosity in some v2 helpers; introduced structured, rate-limited warnings and richer event logs in processors.
Forester status & errors
forester/src/forester_status.rs, forester/src/errors.rs, forester/src/tree_data_sync.rs
Added per-type pending batch counters and total_pending_batches; new RegistrationError::FinalizeRegistrationPhaseEnded; fetch_protocol_group_authority now accepts run_id.
SDK & macros
sdk-libs/client/src/rpc/client.rs, sdk-libs/macros/src/light_pdas/*
RPC client fixes owner handling for cold ATAs; macros now accept a CodegenBackend for backend-aware dispatch generation (generate_dispatch_fn(backend)), safer discriminator handling and per-backend compress arms.
Tests
forester/tests/*
Updated tests to supply required indexer_url, added helius_rpc fields and adjusted bootstrap calls to pass the helius_rpc flag (false in tests).
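The ServiceHeartbeat/HeartbeatSnapshot pairing mentioned in the logging cohort can be sketched as atomic counters that services bump and telemetry periodically reads into an immutable snapshot for structured logging. The field and method names below are assumptions for illustration, not the crate's actual API.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Plain-data snapshot suitable for emitting as a structured log event.
#[derive(Debug, PartialEq)]
pub struct HeartbeatSnapshot {
    pub processed: u64,
    pub errors: u64,
}

// Shared live counters; cheap to update from hot paths.
#[derive(Default)]
pub struct ServiceHeartbeat {
    processed: AtomicU64,
    errors: AtomicU64,
}

impl ServiceHeartbeat {
    pub fn record_processed(&self) {
        self.processed.fetch_add(1, Ordering::Relaxed);
    }

    pub fn record_error(&self) {
        self.errors.fetch_add(1, Ordering::Relaxed);
    }

    // Read the counters into a detached snapshot; the heartbeat task can
    // log this periodically without locking the hot path.
    pub fn snapshot(&self) -> HeartbeatSnapshot {
        HeartbeatSnapshot {
            processed: self.processed.load(Ordering::Relaxed),
            errors: self.errors.load(Ordering::Relaxed),
        }
    }
}
```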

Sequence Diagram(s)

sequenceDiagram
    participant UI as Dashboard UI
    participant Hook as usePhotonStats Hook
    participant Proxy as /api/photon-stats
    participant DB as PostgreSQL
    UI->>Hook: subscribe / mount
    Hook->>Proxy: GET /api/photon-stats (poll)
    Proxy->>DB: acquire pooled client
    Proxy->>DB: query accounts totals
    Proxy->>DB: query token_accounts totals
    Proxy->>DB: query onchain by-owner
    DB-->>Proxy: results
    Proxy->>Proxy: aggregate, convert base64→hex
    Proxy-->>Hook: JSON payload
    Hook-->>UI: update PhotonStatsPanel
sequenceDiagram
    participant Forester as Tracker (Compressible)
    participant Tracker as Tracker state (pending/compressed)
    participant Compressor as Compressor logic
    participant RPC as Solana RPC
    Forester->>Tracker: mark_pending(pubkeys)
    Tracker->>Tracker: add to pending DashSet
    Compressor->>RPC: send compress transaction
    alt send fails
        RPC-->>Compressor: send error
        Compressor->>Tracker: unmark_pending(pubkeys)
    else sent
        Compressor->>RPC: await confirmation
        alt not confirmed
            Compressor->>Tracker: unmark_pending(pubkeys)
        else confirmed
            RPC-->>Compressor: confirmed
            Compressor->>RPC: verify_transaction_execution
            alt verified
                Compressor->>Tracker: remove_compressed(pubkeys)
                Tracker->>Tracker: increment compressed_counter
            else verification fails
                Compressor->>Tracker: unmark_pending(pubkeys)
            end
        end
    end
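The branching in the second diagram can be condensed into one function. Because the real compressors call RPC, send/confirm/verify are modeled as closures here; everything below is a hedged sketch of the control flow, not the actual compressor code.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Minimal pending set standing in for the tracker (the real one is a DashSet).
pub struct PendingSet(Mutex<HashSet<[u8; 32]>>);

impl PendingSet {
    pub fn new() -> Self {
        Self(Mutex::new(HashSet::new()))
    }
    pub fn mark(&self, keys: &[[u8; 32]]) {
        let mut s = self.0.lock().unwrap();
        for k in keys {
            s.insert(*k);
        }
    }
    pub fn unmark(&self, keys: &[[u8; 32]]) {
        let mut s = self.0.lock().unwrap();
        for k in keys {
            s.remove(k);
        }
    }
    pub fn contains(&self, k: &[u8; 32]) -> bool {
        self.0.lock().unwrap().contains(k)
    }
}

// Mirrors the diagram: mark pending, send, confirm, verify. Any failure
// unmarks pending so the accounts are retried on a later pass; on verified
// success the caller would follow up with remove_compressed.
pub fn compress_once(
    tracker: &PendingSet,
    keys: &[[u8; 32]],
    send: impl Fn() -> Result<(), String>,
    confirm: impl Fn() -> bool,
    verify: impl Fn() -> bool,
) -> Result<(), String> {
    tracker.mark(keys);
    if let Err(e) = send() {
        tracker.unmark(keys);
        return Err(e);
    }
    if !confirm() || !verify() {
        tracker.unmark(keys);
        return Err("not confirmed or verification failed".into());
    }
    Ok(())
}
```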

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Suggested labels

ai-review

Suggested reviewers

  • SwenSchaeferjohann
  • ananas-block

Poem

🌲 Run IDs hum, dashboards glow with PostgreSQL light,
Pending marks wait while compressors hustle through the night.
Hooks poll, panels bloom, structured logs keep time,
Snapshots and counts marching in tidy rhyme. ✨

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check: Passed (check skipped because CodeRabbit's high-level summary is enabled).
  • Title Check: Passed. The title accurately reflects the main changes: a new forester dashboard feature and compression workflow improvements including better pending state management, eligibility checks, and account tracker robustness.
  • Docstring Coverage: Passed. Docstring coverage is 94.12%, above the required 70.00% threshold.


feat: add support for getProgramAccounts standard rpc calls for compression
feat: structured error logging
- Introduced `forester_api_urls` argument in `DashboardArgs` for specifying multiple API base URLs.
- Enhanced `EpochManager` to handle non-retryable registration errors gracefully.
- Implemented `CompressibleTrackerHandles` struct to manage multiple trackers for compressible data.
- Refactored `initialize_compressible_trackers` to streamline tracker initialization and bootstrap processes.
- Updated `run_pipeline_with_run_id` to accept preconfigured tracker handles, improving modularity.
- Modified `main` function to initialize compressible trackers and manage shutdown signals effectively.
- Enhanced error handling in `CTokenCompressor`, `MintCompressor`, and `PdaCompressor` to manage pending states more effectively.
- Added checks to ensure accounts are marked as pending during transaction processing.

chore: update epoch manager to check eligibility for compression

- Modified `dispatch_compression`, `dispatch_pda_compression`, and `dispatch_mint_compression` to include eligibility checks based on the current light slot.
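The shape of such an eligibility check can be sketched as a pure function: dispatch only when the current light slot falls inside the forester's assigned window. The parameters below are invented for illustration; the real epoch manager derives the window from epoch state.

```rust
// Hypothetical eligibility check used before dispatching compression work.
// Eligible iff current_light_slot lies in [window_start, window_start + window_len).
pub fn is_eligible(current_light_slot: u64, window_start: u64, window_len: u64) -> bool {
    current_light_slot >= window_start
        && current_light_slot < window_start.saturating_add(window_len)
}
```

Gating dispatch_compression, dispatch_pda_compression, and dispatch_mint_compression on a check like this prevents a forester from submitting transactions outside its slot assignment.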

refactor: improve account tracking with atomic counters

- Introduced `compressed_count` and `pending` sets in account trackers for better management of compression states.
- Updated `CompressibleTracker` trait to include methods for managing pending accounts and counting compressed accounts.

fix: ensure proper handling of closed accounts in trackers

- Added logic to remove closed accounts from trackers in `CTokenAccountTracker`, `MintAccountTracker`, and `PdaAccountTracker`.

feat: add usePhotonStats hook for fetching photon statistics

- Implemented a new hook `usePhotonStats` using SWR for fetching photon statistics from the API.
- Introduced error handling for API responses.

refactor: enhance utility functions for address exploration

- Added `explorerUrl` function to generate Solana explorer URLs based on the current network.
- Improved `formatSlotCountdown` to handle additional parameters for better status reporting.

feat: extend forester types with new statistics

- Updated `AggregateQueueStats`, `ForesterStatus`, and related interfaces to include new fields for batch processing statistics.
- Introduced `PhotonStats` interface for tracking photon-related metrics.
@sergeytimoshin force-pushed the sergey/forester-compressible branch from b16d0ce to 93c98c4 on February 24, 2026 at 15:59
@coderabbitai bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
sdk-libs/macros/src/light_pdas/program/compress.rs (1)

137-145: ⚠️ Potential issue | 🟠 Major

Avoid silently skipping unknown discriminators.

_ => Ok(()) turns malformed or unexpected accounts into a silent no‑op, which can leave accounts uncompressed without surfacing errors. Prefer returning an error (or a dedicated InvalidDiscriminator) so callers can detect misconfiguration or corruption.

🔧 Proposed fix (preserve failure signal)
-                match discriminator {
+                match discriminator {
                     #(#compress_arms)*
-                    _ => Ok(()),
+                    _ => Err(light_account::LightSdkTypesError::InvalidInstructionData),
                 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 137 - 145,
The match arm that currently swallows unknown discriminators (`_ => Ok(())`)
should instead return an error so malformed or unexpected account data isn't
treated as a successful no‑op; update the match in compress.rs (the block that
matches on the `discriminator: [u8; 8]` and the `#(#compress_arms)*` expansion) to return
a specific error (e.g.,
light_account::LightSdkTypesError::InvalidInstructionData or a new
InvalidDiscriminator variant) for the `_` case so callers can detect and handle
invalid discriminators instead of silently continuing.
forester/src/telemetry.rs (1)

43-55: 🧹 Nitpick | 🔵 Trivial

Structured JSON file logging is well-configured, but both layers share a single EnvFilter.

The env_filter on line 54 applies to the entire registry, so both stdout and file receive the same log level. This means you can't have verbose file logs (debug/trace) with quiet stdout (info). If you ever need differentiated verbosity, you'd add per-layer filters:

let file_filter = EnvFilter::try_from_env("RUST_LOG_FILE")
    .unwrap_or_else(|_| EnvFilter::new("debug"));
// then: .with(file_layer.with_filter(file_filter))

Not blocking — the current shared filter is a reasonable default. Just flagging it as a future consideration when operators inevitably ask "can I get debug logs in the file but info on stdout?"

The WorkerGuard stored in LOG_GUARD ensures the non-blocking writer flushes on shutdown — that's correct and important.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/telemetry.rs` around lines 43 - 55, The current telemetry setup
uses a single env_filter applied to the whole tracing_subscriber::registry(), so
stdout_layer and file_layer cannot have different verbosity; to allow file logs
to be more verbose than stdout, create a separate EnvFilter (e.g., file_filter
via EnvFilter::try_from_env or EnvFilter::new("debug")) and attach it only to
file_layer using file_layer.with_filter(file_filter) when building the registry
(keep stdout_layer using the existing env_filter); reference env_filter,
file_layer, stdout_layer, EnvFilter, .with_filter(), and
tracing_subscriber::registry() to locate and modify the code.
forester/tests/test_utils.rs (1)

105-119: 🧹 Nitpick | 🔵 Trivial

Field added correctly. Consider using ..Default::default() here too.

The new helius_rpc: false is necessary for compilation. However, this test helper explicitly lists every field of GeneralConfig, which means every future field addition requires updating this function. The production test_address_v2() and test_state_v2() were just cleaned up to use ..Default::default() — the same pattern would reduce maintenance here:

♻️ Optional: use struct update syntax
         general_config: GeneralConfig {
-            slot_update_interval_seconds: 10,
-            tree_discovery_interval_seconds: 5,
             enable_metrics: false,
-            skip_v1_state_trees: false,
-            skip_v2_state_trees: false,
-            skip_v1_address_trees: false,
-            skip_v2_address_trees: false,
-            tree_ids: vec![],
+            tree_discovery_interval_seconds: 5,
             sleep_after_processing_ms: 50,
             sleep_when_idle_ms: 100,
             queue_polling_mode: QueuePollingMode::OnChain,
-            group_authority: None,
-            helius_rpc: false,
+            ..Default::default()
         },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/tests/test_utils.rs` around lines 105 - 119, The GeneralConfig
literal in the test helper lists every field explicitly (including the newly
added helius_rpc) which makes future changes brittle; update the instantiation
to use struct update syntax with Default::default() so only the overrides remain
(e.g., keep the specific overrides like slot_update_interval_seconds,
tree_discovery_interval_seconds, queue_polling_mode, helius_rpc, etc.) and
append ..Default::default() to fill the rest automatically; locate the
GeneralConfig construction in the test helper in tests/test_utils.rs and replace
the full-field literal with the minimal overrides + ..Default::default().
forester/src/processor/v2/helpers.rs (1)

577-583: ⚠️ Potential issue | 🟡 Minor

Log level change looks good, but note the pre-existing division-by-zero risk.

The debug! demotion is fine. However, unlike fetch_paginated_batches (line 128–130) which guards against zkp_batch_size == 0, this function does not. If zkp_batch_size is ever 0, line 574 produces page_size_elements = 0, and total_elements.div_ceil(0) on line 575 will panic. Line 580's total_elements / zkp_batch_size in the log macro would also panic.

Not introduced by this PR, but worth a defensive guard:

🛡️ Suggested guard
 pub async fn fetch_streaming_address_batches<R: Rpc + 'static>(
     context: &BatchContext<R>,
     total_elements: u64,
     zkp_batch_size: u64,
 ) -> crate::Result<Option<StreamingAddressQueue>> {
+    if zkp_batch_size == 0 {
+        return Err(anyhow::anyhow!("zkp_batch_size cannot be zero"));
+    }
     if total_elements == 0 {
         return Ok(None);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/processor/v2/helpers.rs` around lines 577 - 583, This log-site
(and the preceding math) can panic when zkp_batch_size == 0; add the same
defensive guard used in fetch_paginated_batches to the start of this function:
check if zkp_batch_size == 0, emit an error-level log identifying the bad value
(referencing zkp_batch_size and the function name), and return/propagate an
error (or otherwise short-circuit) instead of continuing so page_size_elements,
total_elements.div_ceil(zkp_batch_size) and the debug log won't divide by zero;
update references to total_elements / zkp_batch_size in the tracing::debug call
accordingly after the guard.
forester/src/cli.rs (1)

34-39: ⚠️ Potential issue | 🟡 Minor

indexer_url is now a required argument — breaking change for existing deployments.

Previously Option<String>, now String. Any existing CI scripts, docker-compose files, or operator configs that didn't supply --indexer-url or INDEXER_URL will fail at startup. This is likely intentional given the queue polling mode depends on it, but make sure this is called out in release notes or migration docs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/cli.rs` around lines 34 - 39, The CLI field indexer_url was
changed from Option<String> to String, making the argument required and breaking
existing deployments; revert the CLI field back to Option<String> (i.e., change
pub indexer_url: String to pub indexer_url: Option<String> with the same
#[arg(...)] attributes) and update any runtime code that reads indexer_url to
handle None (introduce explicit checks where the indexer URL is required,
returning a clear error or skipping queue-polling behavior). Ensure all uses of
indexer_url in initialization/queue-polling code perform an Option
unwrap-or-early-return so behavior is preserved for deployments that don’t set
the env/flag.
forester/tests/legacy/priority_fee_test.rs (1)

23-72: ⚠️ Potential issue | 🔴 Critical

Legacy test is missing multiple required StartArgs fields — this will fail to compile.

The StartArgs struct literal in the legacy test file (lines 23–72) is missing required fields that exist in the current struct definition in forester/src/cli.rs. While the non-legacy forester/tests/priority_fee_test.rs (lines 23–87) includes all required fields—such as processor_mode, queue_polling_mode, tree_ids, enable_compressible, api_server_port, light_pda_programs, helius_rpc, and group_authority—the legacy version stops at send_tx_rate_limit with no spread operator or additional fields. Since StartArgs has no Default impl and these fields are non-optional (not Option<T>), the compiler will reject missing fields like helius_rpc: bool, processor_mode: ProcessorMode, and others that lack direct Option wrappers.

Update the legacy test to match the complete field set from the non-legacy version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/tests/legacy/priority_fee_test.rs` around lines 23 - 72, The
StartArgs struct literal in the legacy test is incomplete and must be updated to
include the same required fields as the current StartArgs definition; locate the
StartArgs construction in the legacy test and add all missing fields present in
the non-legacy forester/tests/priority_fee_test.rs (e.g. processor_mode,
queue_polling_mode, tree_ids, enable_compressible, api_server_port,
light_pda_programs, helius_rpc, group_authority and any other non-Option fields)
with appropriate test values so the StartArgs literal matches the full struct
used by the application.
forester/tests/test_compressible_ctoken.rs (1)

536-543: ⚠️ Potential issue | 🟡 Minor

Mainnet bootstrap test no longer exercises the V2 branch.
run_bootstrap_test now always passes helius_rpc = false, so the ignored mainnet test won’t hit the new V2 path it’s intended to validate. Consider parameterizing the helper so the mainnet test can pass true.

🔧 Suggested adjustment
-async fn run_bootstrap_test(
-    rpc_url: String,
-    expected_count: usize,
-    expected_data: Option<(Vec<Pubkey>, Pubkey)>,
-) {
+async fn run_bootstrap_test(
+    rpc_url: String,
+    expected_count: usize,
+    expected_data: Option<(Vec<Pubkey>, Pubkey)>,
+    helius_rpc: bool,
+) {
@@
-        if let Err(e) = forester::compressible::bootstrap_ctoken_accounts(
+        if let Err(e) = forester::compressible::bootstrap_ctoken_accounts(
             rpc_url_clone,
             tracker_clone,
             Some(shutdown_rx),
-            false,
+            helius_rpc,
         )
@@
-    run_bootstrap_test(
+    run_bootstrap_test(
         "http://localhost:8899".to_string(),
         pre_existing + 3,
         Some((created_pubkeys, mint)),
+        false,
     )
@@
-    run_bootstrap_test(rpc_url, 0, None).await;
+    run_bootstrap_test(rpc_url, 0, None, true).await;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/tests/test_compressible_ctoken.rs` around lines 536 - 543, The
mainnet bootstrap test never exercises the V2 (Helius) branch because
run_bootstrap_test always passes helius_rpc = false; update the
run_bootstrap_test helper to accept a helius_rpc: bool parameter, thread that
flag into the call to forester::compressible::bootstrap_ctoken_accounts (the
last argument currently set to false in the spawn block), and change the mainnet
test invocation to call run_bootstrap_test(true) while leaving other tests that
should use the original behavior calling run_bootstrap_test(false); ensure the
helper signature and all call sites are updated consistently.
forester/src/compressible/pda/compressor.rs (1)

216-338: ⚠️ Potential issue | 🟠 Major

Batch compression should filter closed PDAs before building proofs.
If any PDA in the batch is already closed/missing, the whole batched transaction can fail and the valid accounts get retried indefinitely. The single-account path now removes closed PDAs; the batch path should mirror that before proof/tx build.

🛠️ Suggested fix (filter missing PDAs early)
-        let mut rpc = self.rpc_pool.get_connection().await?;
-
-        // Derive compressed addresses for all accounts
-        let compressed_addresses: Vec<[u8; 32]> = account_states
-            .iter()
+        let mut rpc = self.rpc_pool.get_connection().await?;
+
+        // Filter out PDAs that no longer exist on-chain to avoid batch failures
+        let mut live_states = Vec::new();
+        for state in account_states {
+            let account_info = rpc
+                .get_account(state.pubkey)
+                .await
+                .map_err(|e| anyhow::anyhow!("Failed to check PDA {}: {:?}", state.pubkey, e))?;
+            if account_info.is_none() {
+                debug!(
+                    "PDA {} no longer exists on-chain, removing from tracker",
+                    state.pubkey
+                );
+                self.tracker.remove_compressed(&state.pubkey);
+                continue;
+            }
+            live_states.push(state.clone());
+        }
+
+        if live_states.is_empty() {
+            return Err(anyhow::anyhow!("No accounts to compress"));
+        }
+
+        // Derive compressed addresses for all accounts
+        let compressed_addresses: Vec<[u8; 32]> = live_states
+            .iter()
             .map(|state| {
                 derive_address(
                     &state.pubkey.to_bytes(),
                     &cached_config.address_tree.to_bytes(),
                     &program_id.to_bytes(),
                 )
             })
             .collect();
@@
-        let pubkeys: Vec<Pubkey> = account_states.iter().map(|s| s.pubkey).collect();
+        let pubkeys: Vec<Pubkey> = live_states.iter().map(|s| s.pubkey).collect();
@@
-        debug!(
-            "Built batched compress_accounts_idempotent for {} PDAs (program {})",
-            account_states.len(),
-            program_id
-        );
+        debug!(
+            "Built batched compress_accounts_idempotent for {} PDAs (program {})",
+            live_states.len(),
+            program_id
+        );
@@
-        let pubkeys: Vec<Pubkey> = account_states.iter().map(|s| s.pubkey).collect();
+        let pubkeys: Vec<Pubkey> = live_states.iter().map(|s| s.pubkey).collect();
@@
-            for state in account_states {
+            for state in &live_states {
                 self.tracker.remove_compressed(&state.pubkey);
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/pda/compressor.rs` around lines 216 - 338, The
batch path must drop PDAs that are closed/missing before building the proof/tx:
after fetching compressed_accounts (the Vec from the get_compressed_account
futures), zip them with the original account_states and filter out any pairs
where the compressed account indicates a missing/closed state (e.g., no value or
whatever sentinel the compressed account struct uses), then rebuild the derived
collections (account_states, pubkeys, hashes, compressed_accounts) from that
filtered list; ensure you call tracker.remove_compressed for any accounts
filtered out and use the filtered hashes/pubkeys when calling
rpc.get_validity_proof and build_compress_accounts_idempotent so the proof and
transaction only include live PDAs (affecting variables: compressed_accounts,
hashes, pubkeys, account_states, proof_with_context, ix, and tracker).
forester/src/compressible/mint/compressor.rs (1)

133-155: ⚠️ Potential issue | 🟠 Major

Clear pending entries when confirmation RPC fails.
confirm_transaction errors currently return via ? without unmarking pending, leaving those mints stuck.

🐛 Suggested fix
-        let confirmed = rpc
-            .confirm_transaction(signature)
-            .await
-            .map_err(|e| anyhow::anyhow!("Failed to confirm transaction: {:?}", e))?;
+        let confirmed = match rpc.confirm_transaction(signature).await {
+            Ok(confirmed) => confirmed,
+            Err(e) => {
+                self.tracker.unmark_pending(&pubkeys);
+                return Err(anyhow::anyhow!("Failed to confirm transaction: {:?}", e));
+            }
+        };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/compressor.rs` around lines 133 - 155, When
rpc.confirm_transaction(await) fails it returns early via the `?` and never
calls `self.tracker.unmark_pending(&pubkeys)`, leaving mints stuck; modify the
`confirm_transaction(signature).await` error handling in the
`compress_and_close` (or surrounding) flow so that on any error from
`confirm_transaction` you first call `self.tracker.unmark_pending(&pubkeys)` (or
otherwise clear the pending flags for the `mint_states`/`pubkeys`) and then
return the error, ensuring the existing success path still calls
`self.tracker.remove_compressed(&mint_state.pubkey)` for each `mint_state` and
the failure path that returns `Err(...)` also unmarks pending.
♻️ Duplicate comments (1)
sdk-libs/macros/src/light_pdas/program/compress.rs (1)

410-418: Same match/arm shape issue as above.

The non‑Pinocchio backend has the identical match wrapping compress_arms. Apply the same fix here to avoid invalid generated code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 410 - 418,
The generated code wraps the generated compress arms inside an extra `match
discriminator { #(#compress_arms)* _ => Ok(()) }`, causing a duplicate/invalid
match shape; remove the outer `match discriminator { ... }` wrapper and emit the
`#compress_arms` expansion directly (or ensure the generated arms themselves
form the complete `match discriminator { ... }`), keeping the `discriminator:
[u8; 8]` extraction intact so `discriminator` is available to the arms.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@forester/dashboard/src/app/layout.tsx`:
- Line 15: Add the React suppression flag to the root HTML element: in
layout.tsx update the <html lang="en"> element to include
suppressHydrationWarning so client/server attribute differences injected by
browser extensions (e.g., dark-mode toggles) don't produce hydration errors;
locate the HTML element in the Layout component (the <html> tag in layout.tsx)
and add the suppressHydrationWarning prop to it.
- Line 16: Add a Next.js font (e.g., Inter via next/font/google) and apply its
generated class to the root layout so the "antialiased" utility has effect and
you avoid an external font request; specifically, import Inter from
"next/font/google" in layout.tsx, instantiate it (e.g., Inter({ subsets:
['latin'], variable or display as needed }), and replace or augment the body
element's className (currently "bg-gray-50 text-gray-900 antialiased") to
include the font's class (e.g., font.className or font.variable) so the
layout.tsx body uses the loaded font. Ensure the import and the body className
reference the exact symbols you add (the Inter import and the body element's
className).

In `@forester/dashboard/src/components/EpochCard.tsx`:
- Around line 92-109: The grid currently re-renders the same epoch countdown via
the epochCountdown variable (seen where epochCountdown is used and above the
progress bar), which duplicates the timer; either remove the duplicated
epochCountdown from the stats grid or replace it with a distinct value/label
(e.g., show only approximate remaining hours or change the label to "Next epoch
(remaining)" vs the top exact countdown). Locate the usage of epochCountdown in
EpochCard (the <p> that prints epochCountdown), update it to use an alternative
display (compute a short form or remove it) and ensure related helpers like
formatNumber and status.current_light_slot remain unchanged; adjust the UI label
text accordingly so users no longer see two identical countdowns.
- Around line 14-41: The countdown can slowly drift because slotsRef.current
only updates when we reset totalMs (delta > 5); to fix, update slotsRef.current
on every slots change so the next comparison uses the latest prop while still
only calling setTotalMs when the jump exceeds the threshold—i.e., inside
useCountdown's first useEffect (function useCountdown), always assign
slotsRef.current = slots and only call setTotalMs(slots * SLOT_MS) when
Math.abs(slots - slotsRef.current) > 5 (or by capturing the previous value
before overwriting), preserving smooth local ticking but preventing cumulative
drift.

In `@forester/dashboard/src/hooks/useCompressible.ts`:
- Line 7: The refreshInterval in the useCompressible hook is set to 5000ms which
increases request rate; change it back to 15000ms or make it configurable so the
polling interval is adjustable per deployment (e.g., expose a parameter or read
from env/feature flag) and ensure revalidateOnFocus behavior is documented or
can be toggled; update the useCompressible hook (and any callers) to use the new
configurable variable instead of hardcoding 5000 and add a short TODO/comment
explaining the performance implications.

In `@forester/dashboard/src/lib/utils.ts`:
- Around line 53-67: In formatSlotCountdown, treat equality of currentSlot and
nextReadySlot as ready by changing the comparison in the readiness check from
currentSlot > nextReadySlot to currentSlot >= nextReadySlot so the function
returns "ready now" when slots are equal (avoid the "0 slots" output); adjust
the branch that computes remaining (uses slotsToTime(remaining)) to only run
when currentSlot < nextReadySlot to keep behavior consistent.
- Around line 6-16: The function explorerUrl currently defaults network to
"mainnet" and only reads process.env.NEXT_PUBLIC_SOLANA_NETWORK when window
exists; change explorerUrl to initialize network from
process.env.NEXT_PUBLIC_SOLANA_NETWORK ?? "mainnet" and then, if typeof window
!== "undefined", read window.location.hostname to override network to "devnet"
or "mainnet" based on host.includes("-devnet") / host.includes("-mainnet"); keep
the cluster construction and return logic the same so SSR uses the env var and
the browser can still override via hostname (refer to explorerUrl, network,
window, and NEXT_PUBLIC_SOLANA_NETWORK).

In `@forester/dashboard/src/types/forester.ts`:
- Around line 110-125: CompressibleTypeStats and PdaProgramStats duplicate the
same fields; make PdaProgramStats extend CompressibleTypeStats and add only
program_id to remove duplication. Update the interface declaration for
PdaProgramStats to extend CompressibleTypeStats (keeping nullable/optional
fields) and remove the repeated tracked/compressed/ready/waiting/next_ready_slot
properties so the shape is inherited from CompressibleTypeStats.

In `@forester/src/api_server.rs`:
- Around line 169-200: In summarize_slots, the readiness check currently uses
`if slot > compressible_slot` which makes the computed slot lag by one; change
the condition to `if slot >= compressible_slot` so the computed slot counts as
ready, and ensure the `else` branch (which sets `next_ready_slot`) only runs for
slots strictly greater than the current `slot` (i.e., the adjusted comparison
will naturally make `next_ready_slot` only consider future slots); update the
condition in the loop inside summarize_slots accordingly, referencing `slot`,
`compressible_slot`, `ready`, `next_ready_slot`, and the summarize_slots
function name.
- Around line 212-229: Replace the manual PDA computation in is_ctoken_ata by
calling the canonical helper derive_associated_token_account from
light_token::instruction: extract the owner and mint Pubkeys from the
CTokenAccountState (as currently done), call
derive_associated_token_account(&owner, &mint) to get expected_ata, and compare
state.pubkey to that result instead of using find_program_address with
LIGHT_TOKEN_PROGRAM_ID.
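The off-by-one in the readiness check can be sketched as follows (a simplified stand-in that reduces a list of per-account compressible slots; the names and return shape are illustrative, not the actual forester API):

```rust
// Hypothetical reduction over per-account compressible slots: a slot that
// has just reached its compressible_slot must count as ready, so the
// comparison is `>=`, and only strictly-future slots feed next_ready_slot.
fn summarize_slots(current_slot: u64, compressible_slots: &[u64]) -> (u64, Option<u64>) {
    let mut ready = 0u64;
    let mut next_ready_slot: Option<u64> = None;
    for &compressible_slot in compressible_slots {
        if current_slot >= compressible_slot {
            // the boundary slot counts as ready instead of lagging by one
            ready += 1;
        } else {
            // track the nearest future slot only
            next_ready_slot =
                Some(next_ready_slot.map_or(compressible_slot, |s| s.min(compressible_slot)));
        }
    }
    (ready, next_ready_slot)
}
```

With `current_slot == compressible_slot`, the account is counted as ready rather than producing a "0 slots remaining" entry.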

In `@forester/src/compressible/ctoken/compressor.rs`:
- Around line 214-259: The confirm_transaction error path currently returns
early without clearing pending flags, leaving accounts stuck; update the
compress_and_close flow so that if rpc.confirm_transaction(...) returns Err you
call self.tracker.unmark_pending(&pubkeys) before propagating the error (i.e.,
inside the map_err closure or immediately after the await error), similarly
ensure any other early-return error paths around create_and_send_transaction and
confirmation always call self.tracker.unmark_pending(&pubkeys) so pending state
is cleared on failure.
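A minimal sketch of the suggested error-path shape, with a plain `Mutex<HashSet>` standing in for the real tracker's `DashSet` and a boolean standing in for `rpc.confirm_transaction(...).await`:

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Stand-in for the tracker's pending set (the real code uses DashSet).
struct Tracker {
    pending: Mutex<HashSet<String>>,
}

impl Tracker {
    fn mark_pending(&self, keys: &[String]) {
        let mut p = self.pending.lock().unwrap();
        for k in keys {
            p.insert(k.clone());
        }
    }
    fn unmark_pending(&self, keys: &[String]) {
        let mut p = self.pending.lock().unwrap();
        for k in keys {
            p.remove(k);
        }
    }
}

// Every early-return error path clears pending before propagating the error,
// so failed compressions are picked up again on the next pass.
fn compress_and_close(tracker: &Tracker, pubkeys: &[String], confirm_ok: bool) -> Result<(), String> {
    tracker.mark_pending(pubkeys);
    let confirmed = confirm_ok; // stands in for rpc.confirm_transaction(...).await
    if !confirmed {
        tracker.unmark_pending(pubkeys); // clear pending before returning the error
        return Err("confirmation failed".into());
    }
    Ok(())
}
```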

In `@forester/src/compressible/mint/compressor.rs`:
- Around line 239-254: The branch in compressible::mint::Compressor that handles
a missing Mint PDA calls self.tracker.remove_compressed(mint_pda), which
increments compressed_count; change this to remove the PDA from the tracker
without bumping the compressed_count (either by adding a new Tracker method like
remove_without_increment/remove_closed or adding a boolean flag to
remove_compressed to skip the counter), then call that new non-incrementing
method in the "Mint PDA no longer exists on-chain" path so closing/cleanup does
not count as a successful compression; update any Tracker internals that adjust
compressed_count accordingly (symbols: self.tracker, remove_compressed,
compressed_count).
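The non-incrementing removal could look roughly like this (hypothetical tracker with std collections and a `Mutex` standing in for the real concurrent map):

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;

// Illustrative split: remove_compressed bumps the success counter, while a
// plain remove is for accounts closed by someone else and must not count
// as compressions performed by this forester.
struct Tracker {
    accounts: Mutex<HashMap<String, ()>>,
    compressed_count: AtomicU64,
}

impl Tracker {
    fn remove_compressed(&self, key: &str) {
        if self.accounts.lock().unwrap().remove(key).is_some() {
            self.compressed_count.fetch_add(1, Ordering::Relaxed);
        }
    }
    fn remove(&self, key: &str) {
        // cleanup path: no counter bump
        self.accounts.lock().unwrap().remove(key);
    }
}
```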

In `@forester/src/compressible/mint/state.rs`:
- Around line 77-85: Replace the direct accounts removal with the tracker's
removal so pending entries get cleared: where the code currently calls
self.accounts.remove(&pubkey) (in the branch handling
mint.metadata.mint_decompressed false and the similar block around lines
163-168), call the tracker's remove method (e.g., self.tracker.remove(&pubkey))
instead so both the account and any pending work for that pubkey are cleared
from the tracker; keep the same debug log and return behavior.

In `@forester/src/compressible/pda/state.rs`:
- Around line 118-123: The code currently calls self.accounts.remove(&pubkey)
directly when a PDA is already compressed or closed, leaving stale entries in
the pending set; use the CompressibleTracker::remove method instead so both
pending and accounts are cleaned. Locate the branches handling compressed (in
forester/src/compressible/pda/state.rs around the is_compressed() check) and
closed handling (the analogous block later) and replace
self.accounts.remove(&pubkey) with self.remove(&pubkey); apply the same change
in forester/src/compressible/mint/state.rs to ensure pending is cleared
consistently. Ensure you call the trait method remove(&pubkey) on self (not just
accounts.remove) in both compressed and closed branches.
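The trait-level cleanup this asks for can be sketched as a default method (illustrative types; the real `CompressibleTracker` uses concurrent maps):

```rust
use std::collections::HashSet;

// Sketch of the contract: one remove() call clears both the tracked account
// and any pending work for that pubkey, so call sites never reach into the
// accounts map directly.
trait CompressibleTracker {
    fn accounts(&mut self) -> &mut HashSet<String>;
    fn pending(&mut self) -> &mut HashSet<String>;
    fn remove(&mut self, pubkey: &str) -> bool {
        self.pending().remove(pubkey);
        self.accounts().remove(pubkey)
    }
}

struct PdaAccountTracker {
    accounts: HashSet<String>,
    pending: HashSet<String>,
}

impl CompressibleTracker for PdaAccountTracker {
    fn accounts(&mut self) -> &mut HashSet<String> {
        &mut self.accounts
    }
    fn pending(&mut self) -> &mut HashSet<String> {
        &mut self.pending
    }
}
```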

In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 387-395: The generated code currently wraps the generated
compress_arms in a match on the local variable discriminator, but those arms are
bare if/early-return blocks and thus produce invalid Rust; replace the `match
discriminator { #(#compress_arms)* _ => Ok(()) }` wrapper with a sequential
if-chain dispatch that evaluates the generated compress_arms blocks in order and
returns Ok(()) at the end if none matched. Locate the two occurrences that
declare let discriminator: [u8; 8] = ... and reference the compress_arms
expansion (Pinocchio backend around the first occurrence and Anchor backend
around the second) and change the surrounding match to simply execute the
generated blocks as-is followed by Ok(()).
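The if-chain shape described above, written out by hand for two hypothetical discriminators (in the real macro the arms are emitted by the `compress_arms` expansion; these constants are made up for illustration):

```rust
// Made-up discriminators standing in for Name::LIGHT_DISCRIMINATOR values.
const FOO_DISCRIMINATOR: [u8; 8] = [1, 0, 0, 0, 0, 0, 0, 0];
const BAR_DISCRIMINATOR: [u8; 8] = [2, 0, 0, 0, 0, 0, 0, 0];

// Each generated arm is a bare if/early-return block evaluated in order,
// followed by Ok(()) when nothing matched — no outer `match discriminator`.
fn compress_dispatch(discriminator: [u8; 8]) -> Result<&'static str, String> {
    // expansion of the first generated arm
    if discriminator == FOO_DISCRIMINATOR {
        return Ok("compressed Foo");
    }
    // expansion of the second generated arm
    if discriminator == BAR_DISCRIMINATOR {
        return Ok("compressed Bar");
    }
    // fall-through when no arm matched
    Ok("no-op")
}
```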

---

Outside diff comments:
In `@forester/src/cli.rs`:
- Around line 34-39: The CLI field indexer_url was changed from Option<String>
to String, making the argument required and breaking existing deployments;
revert the CLI field back to Option<String> (i.e., change pub indexer_url:
String to pub indexer_url: Option<String> with the same #[arg(...)] attributes)
and update any runtime code that reads indexer_url to handle None (introduce
explicit checks where the indexer URL is required, returning a clear error or
skipping queue-polling behavior). Ensure all uses of indexer_url in
initialization/queue-polling code perform an Option unwrap-or-early-return so
behavior is preserved for deployments that don’t set the env/flag.
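The Option unwrap-or-early-return pattern in miniature (hypothetical function name and error text; the real code would use the forester's own error type):

```rust
// Keep indexer_url optional at the CLI boundary and fail (or skip queue
// polling) explicitly at the point where the URL is actually required.
fn start_queue_polling(indexer_url: Option<&str>) -> Result<String, String> {
    let url = match indexer_url {
        Some(u) => u,
        None => return Err("indexer URL not configured; queue polling disabled".into()),
    };
    Ok(format!("polling {url}"))
}
```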

In `@forester/src/compressible/mint/compressor.rs`:
- Around line 133-155: When rpc.confirm_transaction(...).await fails it returns
early via the `?` and never calls `self.tracker.unmark_pending(&pubkeys)`,
leaving mints stuck; modify the `confirm_transaction(signature).await` error
handling in the `compress_and_close` (or surrounding) flow so that on any error
from `confirm_transaction` you first call
`self.tracker.unmark_pending(&pubkeys)` (or otherwise clear the pending flags
for the `mint_states`/`pubkeys`) and then return the error, ensuring the
existing success path still calls
`self.tracker.remove_compressed(&mint_state.pubkey)` for each `mint_state` and
the failure path that returns `Err(...)` also unmarks pending.

In `@forester/src/compressible/pda/compressor.rs`:
- Around line 216-338: The batch path must drop PDAs that are closed/missing
before building the proof/tx: after fetching compressed_accounts (the Vec from
the get_compressed_account futures), zip them with the original account_states
and filter out any pairs where the compressed account indicates a missing/closed
state (e.g., no value or whatever sentinel the compressed account struct uses),
then rebuild the derived collections (account_states, pubkeys, hashes,
compressed_accounts) from that filtered list; ensure you call
tracker.remove_compressed for any accounts filtered out and use the filtered
hashes/pubkeys when calling rpc.get_validity_proof and
build_compress_accounts_idempotent so the proof and transaction only include
live PDAs (affecting variables: compressed_accounts, hashes, pubkeys,
account_states, proof_with_context, ix, and tracker).

In `@forester/src/processor/v2/helpers.rs`:
- Around line 577-583: This log-site (and the preceding math) can panic when
zkp_batch_size == 0; add the same defensive guard used in
fetch_paginated_batches to the start of this function: check if zkp_batch_size
== 0, emit an error-level log identifying the bad value (referencing
zkp_batch_size and the function name), and return/propagate an error (or
otherwise short-circuit) instead of continuing so page_size_elements,
total_elements.div_ceil(zkp_batch_size) and the debug log won't divide by zero;
update references to total_elements / zkp_batch_size in the tracing::debug call
accordingly after the guard.
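The defensive guard can be sketched as follows (illustrative function; the real code logs via tracing and propagates its own error type):

```rust
// Bail out before any division so div_ceil and the subsequent debug log
// never divide by zero when zkp_batch_size is misconfigured as 0.
fn zkp_batch_count(total_elements: u64, zkp_batch_size: u64) -> Result<u64, String> {
    if zkp_batch_size == 0 {
        return Err("zkp_batch_size is 0; cannot compute batch count".into());
    }
    Ok(total_elements.div_ceil(zkp_batch_size))
}
```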

In `@forester/src/telemetry.rs`:
- Around line 43-55: The current telemetry setup uses a single env_filter
applied to the whole tracing_subscriber::registry(), so stdout_layer and
file_layer cannot have different verbosity; to allow file logs to be more
verbose than stdout, create a separate EnvFilter (e.g., file_filter via
EnvFilter::try_from_env or EnvFilter::new("debug")) and attach it only to
file_layer using file_layer.with_filter(file_filter) when building the registry
(keep stdout_layer using the existing env_filter); reference env_filter,
file_layer, stdout_layer, EnvFilter, .with_filter(), and
tracing_subscriber::registry() to locate and modify the code.

In `@forester/tests/legacy/priority_fee_test.rs`:
- Around line 23-72: The StartArgs struct literal in the legacy test is
incomplete and must be updated to include the same required fields as the
current StartArgs definition; locate the StartArgs construction in the legacy
test and add all missing fields present in the non-legacy
forester/tests/priority_fee_test.rs (e.g. processor_mode, queue_polling_mode,
tree_ids, enable_compressible, api_server_port, light_pda_programs, helius_rpc,
group_authority and any other non-Option fields) with appropriate test values so
the StartArgs literal matches the full struct used by the application.

In `@forester/tests/test_compressible_ctoken.rs`:
- Around line 536-543: The mainnet bootstrap test never exercises the V2
(Helius) branch because run_bootstrap_test always passes helius_rpc = false;
update the run_bootstrap_test helper to accept a helius_rpc: bool parameter,
thread that flag into the call to
forester::compressible::bootstrap_ctoken_accounts (the last argument currently
set to false in the spawn block), and change the mainnet test invocation to call
run_bootstrap_test(true) while leaving other tests that should use the original
behavior calling run_bootstrap_test(false); ensure the helper signature and all
call sites are updated consistently.

In `@forester/tests/test_utils.rs`:
- Around line 105-119: The GeneralConfig literal in the test helper lists every
field explicitly (including the newly added helius_rpc) which makes future
changes brittle; update the instantiation to use struct update syntax with
Default::default() so only the overrides remain (e.g., keep the specific
overrides like slot_update_interval_seconds, tree_discovery_interval_seconds,
queue_polling_mode, helius_rpc, etc.) and append ..Default::default() to fill
the rest automatically; locate the GeneralConfig construction in the test helper
in tests/test_utils.rs and replace the full-field literal with the minimal
overrides + ..Default::default().
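The struct-update pattern in miniature, with a cut-down stand-in for `GeneralConfig` (field names here are illustrative):

```rust
// Only the overrides are spelled out; ..Default::default() fills the rest,
// so adding a field to the struct no longer breaks this test helper.
#[derive(Debug, Default, PartialEq)]
struct GeneralConfig {
    slot_update_interval_seconds: u64,
    tree_discovery_interval_seconds: u64,
    helius_rpc: bool,
    // ...many more fields in the real struct
}

fn test_config() -> GeneralConfig {
    GeneralConfig {
        slot_update_interval_seconds: 10,
        helius_rpc: true,
        ..Default::default()
    }
}
```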

In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 137-145: The match arm that currently swallows unknown
discriminators (`_ => Ok(())`) should instead return an error so malformed or
unexpected account data isn't treated as a successful no‑op; update the match in
compress.rs (the block that matches on the `discriminator: [u8; 8]` and the
`#(#compress_arms)*`) to return a specific error (e.g.,
light_account::LightSdkTypesError::InvalidInstructionData or a new
InvalidDiscriminator variant) for the `_` case so callers can detect and handle
invalid discriminators instead of silently continuing.

---

Duplicate comments:
In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 410-418: The generated code wraps the generated compress arms
inside an extra `match discriminator { #(#compress_arms)* _ => Ok(()) }`,
causing a duplicate/invalid match shape; remove the outer `match discriminator {
... }` wrapper and emit the `#compress_arms` expansion directly (or ensure the
generated arms themselves form the complete `match discriminator { ... }`),
keeping the `discriminator: [u8; 8]` extraction intact so `discriminator` is
available to the arms.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 12bec0c and 93c98c4.

⛔ Files ignored due to path filters (1)
  • external/photon is excluded by none and included by none
📒 Files selected for processing (57)
  • forester/dashboard/package.json
  • forester/dashboard/src/app/api/[...path]/route.ts
  • forester/dashboard/src/app/api/photon-stats/route.ts
  • forester/dashboard/src/app/compressible/page.tsx
  • forester/dashboard/src/app/layout.tsx
  • forester/dashboard/src/app/metrics/page.tsx
  • forester/dashboard/src/app/page.tsx
  • forester/dashboard/src/app/trees/page.tsx
  • forester/dashboard/src/components/CompressiblePanel.tsx
  • forester/dashboard/src/components/EpochCard.tsx
  • forester/dashboard/src/components/ForesterList.tsx
  • forester/dashboard/src/components/MetricsPanel.tsx
  • forester/dashboard/src/components/PhotonStatsPanel.tsx
  • forester/dashboard/src/components/QueuePressureChart.tsx
  • forester/dashboard/src/components/Sidebar.tsx
  • forester/dashboard/src/components/TreeTable.tsx
  • forester/dashboard/src/hooks/useBalanceHistory.ts
  • forester/dashboard/src/hooks/useCompressible.ts
  • forester/dashboard/src/hooks/usePhotonStats.ts
  • forester/dashboard/src/lib/api.ts
  • forester/dashboard/src/lib/utils.ts
  • forester/dashboard/src/types/forester.ts
  • forester/src/api_server.rs
  • forester/src/cli.rs
  • forester/src/compressible/bootstrap_helpers.rs
  • forester/src/compressible/ctoken/bootstrap.rs
  • forester/src/compressible/ctoken/compressor.rs
  • forester/src/compressible/ctoken/state.rs
  • forester/src/compressible/mint/bootstrap.rs
  • forester/src/compressible/mint/compressor.rs
  • forester/src/compressible/mint/state.rs
  • forester/src/compressible/pda/bootstrap.rs
  • forester/src/compressible/pda/compressor.rs
  • forester/src/compressible/pda/state.rs
  • forester/src/compressible/traits.rs
  • forester/src/config.rs
  • forester/src/epoch_manager.rs
  • forester/src/errors.rs
  • forester/src/forester_status.rs
  • forester/src/lib.rs
  • forester/src/logging.rs
  • forester/src/main.rs
  • forester/src/processor/v1/helpers.rs
  • forester/src/processor/v2/common.rs
  • forester/src/processor/v2/helpers.rs
  • forester/src/processor/v2/processor.rs
  • forester/src/telemetry.rs
  • forester/src/tree_data_sync.rs
  • forester/tests/e2e_test.rs
  • forester/tests/legacy/priority_fee_test.rs
  • forester/tests/priority_fee_test.rs
  • forester/tests/test_compressible_ctoken.rs
  • forester/tests/test_compressible_mint.rs
  • forester/tests/test_compressible_pda.rs
  • forester/tests/test_utils.rs
  • sdk-libs/client/src/rpc/client.rs
  • sdk-libs/macros/src/light_pdas/program/compress.rs
💤 Files with no reviewable changes (4)
  • forester/dashboard/src/app/trees/page.tsx
  • forester/dashboard/src/app/metrics/page.tsx
  • forester/dashboard/src/app/compressible/page.tsx
  • forester/dashboard/src/components/Sidebar.tsx


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
sdk-libs/macros/src/light_pdas/program/compress.rs (1)

179-204: 🛠️ Refactor suggestion | 🟠 Major

generate_dispatch_fn leaks Anchor types despite accepting a generic backend.

The function hardcodes anchor_lang::prelude::AccountInfo<'info> at the generated function signature (line 186) even though it receives a &dyn CodegenBackend. The only safety net is the !backend.is_pinocchio() guard in instructions.rs; a future caller passing PinocchioBackend would silently produce malformed generated code mixing light_account_pinocchio crate paths with Anchor account types.

backend.account_info_type() is already used in generate_enum_dispatch_method_with_backend (line 385) — use it here for consistency:

♻️ Proposed fix
 pub fn generate_dispatch_fn(&self, backend: &dyn CodegenBackend) -> Result<syn::ItemFn> {
     let account_crate = backend.account_crate();
     let sdk_error = backend.sdk_error_type();
+    let account_info_type = backend.account_info_type();
     let compress_arms = self.generate_compress_arms(backend);

     Ok(syn::parse_quote! {
         fn __compress_dispatch<'info>(
-            account_info: &anchor_lang::prelude::AccountInfo<'info>,
+            account_info: &#account_info_type,
             meta: &#account_crate::account_meta::CompressedAccountMetaNoLamportsNoAddress,
             index: usize,
         ctx: &mut #account_crate::CompressCtx<'_, 'info>,
     ) -> std::result::Result<(), #sdk_error> {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 179 - 204,
The generated __compress_dispatch function currently hardcodes
anchor_lang::prelude::AccountInfo<'info>; update generate_dispatch_fn to use the
backend-provided account info type by calling backend.account_info_type() (same
approach as generate_enum_dispatch_method_with_backend) so the parameter
signature uses that returned type instead of
anchor_lang::prelude::AccountInfo<'info>, and ensure any lifetime/type
interpolation in the syn::parse_quote! uses the backend.account_info_type()
token/identifier so generated code stays consistent with CodegenBackend
implementations (reference generate_dispatch_fn, backend.account_info_type(),
CodegenBackend, and the __compress_dispatch signature).
forester/src/compressible/mint/compressor.rs (2)

297-306: 🧹 Nitpick | 🔵 Trivial

Single-mint compress doesn't unmark pending on confirmation failure.

When confirmed is false (line 300), the method returns an error but doesn't call unmark_pending. This is fine today because the only caller (compress_batch_concurrent, line 218) unmarks on Err. But if compress were ever called from another path, the pending entry would leak.

Since the method is private, this isn't urgent — but a defensive unmark here would make the function self-contained.

🛡️ Defensive fix
         if confirmed {
             info!("CompressAndCloseMint tx for Mint {} confirmed", mint_pda);
             Ok(signature)
         } else {
             Err(anyhow::anyhow!(
                 "Transaction {} not confirmed for Mint {}",
                 signature,
                 mint_pda
             ))
         }

No change needed today since the caller handles it, but worth a comment noting the caller's responsibility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/compressor.rs` around lines 297 - 306, The
single-mint method compress should defensively clear the pending entry when
confirmation fails: inside compress(), when confirmed is false (the else branch
that currently returns Err), call self.unmark_pending(&mint_pda) before
returning the error; if unmark_pending can fail, swallow/log that error (e.g.,
via warn!) but still return the original anyhow::Error so callers get the
original failure; mention in a comment that compress_batch_concurrent also
unmarks on Err but compress defends itself for future callers.

52-96: ⚠️ Potential issue | 🟡 Minor

compress_batch skips the Mint PDA existence check that compress() performs, risking batch-wide failures.

compress() (line 227) checks if the Mint PDA still exists on-chain (line 240-244) before building the instruction. compress_batch skips this pre-check and builds all instructions in parallel (line 64). If a Mint PDA was closed between being queued and instruction build time, create_compress_and_close_mint_instruction will fail when trying to fetch the compressed account, causing try_join_all (line 96) to short-circuit. All pending marks get reverted (line 119), blocking the entire batch from one stale account.

While the idempotent flag (line 79) is set to handle this gracefully on-chain, it doesn't prevent the instruction builder from failing during RPC calls. Consider adding a lightweight Mint PDA existence pre-check before building each instruction in parallel, or switch from try_join_all to individual error handling so failed instructions don't poison the entire batch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/compressor.rs` around lines 52 - 96,
compress_batch builds all instructions in parallel and uses
futures::future::try_join_all so a single
create_compress_and_close_mint_instruction failure (e.g., because the Mint PDA
was closed) cancels the whole batch; update compress_batch to either perform the
same lightweight Mint PDA existence check that compress() does before calling
create_compress_and_close_mint_instruction (i.e., inside the instruction_futures
closure, query the RPC for the mint PDA and skip building the instruction if it
no longer exists) or change the parallel collection to tolerate per-mint
failures (replace try_join_all with join_all and map each result to
Option/Result so you can filter out failed instruction builds and only include
successful Instruction values), referencing compress_batch, compress,
create_compress_and_close_mint_instruction and try_join_all to locate the code
to change.
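A synchronous sketch of the tolerate-per-mint-failure option (the real code would use `futures::future::join_all` over async instruction builders; the closed-mint condition here is faked with a string prefix purely for illustration):

```rust
// Collect per-mint build results and keep only the successes, instead of
// letting one failed instruction build (e.g., a closed Mint PDA) cancel
// the whole batch as try_join_all would.
fn build_instructions(mints: &[&str]) -> Vec<String> {
    mints
        .iter()
        .map(|m| {
            if m.starts_with("closed_") {
                Err(format!("mint {m} no longer exists"))
            } else {
                Ok(format!("ix for {m}"))
            }
        })
        .filter_map(|r| r.ok()) // drop failed builds, keep the batch alive
        .collect()
}
```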
♻️ Duplicate comments (6)
sdk-libs/macros/src/light_pdas/program/compress.rs (1)

379-431: Previous critical issue resolved — Pinocchio and Anchor dispatch now correctly separated.

The Pinocchio path (lines 390–405) correctly emits the compress arms as bare sequential blocks terminated by Ok(()), while the Anchor path (lines 406–430) wraps proper match-guard arms (d if d == Name::LIGHT_DISCRIMINATOR) in a match discriminator { ... }. Both are syntactically valid in their contexts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 379 - 431,
Summary: The Pinocchio and Anchor dispatch paths were intended to be separated
and the current implementation emits bare sequential compress arms for Pinocchio
and match-arm-style compress arms for Anchor. Action: Do not merge changes that
convert one style into the other; keep
generate_enum_dispatch_method_with_backend as-is, but verify that
generate_compress_arms produces the correct shape for each backend (blocks for
Pinocchio, match arms for Anchor) and that symbols borrow_error, account_crate,
compress_dispatch, and enum_name are referenced consistently; run the
unit/integration tests that exercise compress_dispatch to confirm behavior
before approving the PR.
forester/src/compressible/mint/state.rs (2)

163-169: Closed-account handling looks correct.

Empty data triggers self.remove(&pubkey) which cleans up both accounts and pending via the trait method. Good.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/state.rs` around lines 163 - 169, The
closed-account handling in state.rs is correct: when data.is_empty() the call to
self.remove(&pubkey) properly cleans up both accounts and pending via the trait
method; no code changes are required—leave the data.is_empty() branch and
self.remove(&pubkey) as-is and retain the debug! log for Removed closed Mint
account.

77-85: Past review fix confirmed — self.remove() is now used correctly.

This properly clears both the accounts map and the pending set when a mint is found to be no longer decompressed. Nice fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/state.rs` around lines 77 - 85, The previous
bug where stale tracker entries remained for compressed mints has been fixed by
calling self.remove(&pubkey) inside the branch where
mint.metadata.mint_decompressed is false; keep the current logic in the function
containing that branch so that self.remove clears both the accounts map and the
pending set when a mint is no longer decompressed (ensure the call to
self.remove(&pubkey) remains and the subsequent debug! and return Ok(())
behavior is preserved).
forester/src/compressible/pda/state.rs (2)

188-194: Closed-account handling looks correct.

Same pattern as the Mint tracker: empty data → self.remove() → cleans both maps. Good.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/pda/state.rs` around lines 188 - 194, No change
required: the closed-account handling in the process_account update path is correct—when
data.is_empty() the code calls self.remove(&pubkey) to clean both maps and
returns; leave the if data.is_empty() { if self.remove(&pubkey).is_some() {
debug!(...) } return Ok(()); } logic as-is (referenced symbols: data.is_empty(),
self.remove(&pubkey), debug!).

118-124: Past review fix confirmed — uses trait's remove() for compressed accounts.

This correctly clears both the accounts map and the pending set when a PDA is found to be already compressed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/pda/state.rs` around lines 118 - 124, The code
correctly handles already-compressed PDAs by calling the trait method
self.remove(&pubkey) inside the compression_info.is_compressed() branch, which
clears both the accounts map and pending set; no code change is needed—keep the
compression_info.is_compressed() check and the call to self.remove(&pubkey) (and
the debug! message) as-is in state.rs.
forester/src/compressible/mint/compressor.rs (1)

239-254: Past review fix confirmed — remove (not remove_compressed) is now used for already-closed accounts.

This correctly avoids inflating the compressed counter for accounts that were closed by someone else. The on-chain pre-check is a nice optimization to avoid building useless instructions, though the TOCTOU window means you still rely on the idempotent flag (line 262) as the real safety net — which is fine.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/compressor.rs` around lines 239 - 254,
Pre-check the Mint PDA with rpc.get_account in the Mint PDA pre-check block and,
if account_info.is_none(), call self.tracker.remove(mint_pda) (not
remove_compressed) and return an error; ensure the debug log message remains
clear and that the subsequent idempotent flag (the idempotent handling around
line ~262) is still used as the ultimate safety net for TOCTOU races.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@forester/src/compressible/ctoken/compressor.rs`:
- Around line 242-253: The code currently removes accounts from the tracker once
rpc.confirm_transaction(signature).await returns true; instead, after
confirmation call a status/metadata RPC (e.g., rpc.get_signature_statuses or
rpc.get_transaction) using the same signature to verify the transaction executed
without instruction errors, and only call
self.tracker.remove_compressed(&account_state.pubkey) when the returned status
indicates success (no err/log error); if the status shows execution failure,
unmark pending via self.tracker.unmark_pending(&pubkeys) and propagate or handle
the error so failed compressions are retried. Ensure this check is applied in
the compressor functions that use confirm_transaction (ctoken/pda/mint) and
reference variables signature, rpc, account_states, self.tracker,
remove_compressed and unmark_pending.

In `@forester/src/compressible/mint/state.rs`:
- Around line 37-51: The MintAccountTracker::new implementation duplicates
Default::default; refactor new() to call Default::default() (or derive Default
and implement new() as Self::default()) so there’s a single source of
initialization for accounts, compressed_count, and pending (ensure Default
initializes accounts: DashMap::new(), compressed_count: AtomicU64::new(0),
pending: DashSet::new()).

---

Outside diff comments:
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 297-306: The single-mint method compress should defensively clear
the pending entry when confirmation fails: inside compress(), when confirmed is
false (the else branch that currently returns Err), call
self.unmark_pending(&mint_pda) before returning the error; if unmark_pending can
fail, swallow/log that error (e.g., via warn!) but still return the original
anyhow::Error so callers get the original failure; mention in a comment that
compress_batch_concurrent also unmarks on Err but compress defends itself for
future callers.
- Around line 52-96: compress_batch builds all instructions in parallel and uses
futures::future::try_join_all so a single
create_compress_and_close_mint_instruction failure (e.g., because the Mint PDA
was closed) cancels the whole batch; update compress_batch to either perform the
same lightweight Mint PDA existence check that compress() does before calling
create_compress_and_close_mint_instruction (i.e., inside the instruction_futures
closure, query the RPC for the mint PDA and skip building the instruction if it
no longer exists) or change the parallel collection to tolerate per-mint
failures (replace try_join_all with join_all and map each result to
Option/Result so you can filter out failed instruction builds and only include
successful Instruction values), referencing compress_batch, compress,
create_compress_and_close_mint_instruction and try_join_all to locate the code
to change.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 179-204: The generated __compress_dispatch function currently
hardcodes anchor_lang::prelude::AccountInfo<'info>; update generate_dispatch_fn
to use the backend-provided account info type by calling
backend.account_info_type() (same approach as
generate_enum_dispatch_method_with_backend) so the parameter signature uses that
returned type instead of anchor_lang::prelude::AccountInfo<'info>, and ensure
any lifetime/type interpolation in the syn::parse_quote! uses the
backend.account_info_type() token/identifier so generated code stays consistent
with CodegenBackend implementations (reference generate_dispatch_fn,
backend.account_info_type(), CodegenBackend, and the __compress_dispatch
signature).

---

Duplicate comments:
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 239-254: Pre-check the Mint PDA with rpc.get_account in the Mint
PDA pre-check block and, if account_info.is_none(), call
self.tracker.remove(mint_pda) (not remove_compressed) and return an error;
ensure the debug log message remains clear and that the subsequent idempotent
flag (the idempotent handling around line ~262) is still used as the ultimate
safety net for TOCTOU races.

In `@forester/src/compressible/mint/state.rs`:
- Around line 163-169: The closed-account handling in state.rs is correct: when
data.is_empty() the call to self.remove(&pubkey) properly cleans up both
accounts and pending via the trait method; no code changes are required—leave
the data.is_empty() branch and self.remove(&pubkey) as-is and retain the debug!
log for Removed closed Mint account.
- Around line 77-85: The previous bug where stale tracker entries remained for
compressed mints has been fixed by calling self.remove(&pubkey) inside the
branch where mint.metadata.mint_decompressed is false; keep the current logic in
the function containing that branch so that self.remove clears both the accounts
map and the pending set when a mint is no longer decompressed (ensure the call
to self.remove(&pubkey) remains and the subsequent debug! and return Ok(())
behavior is preserved).

In `@forester/src/compressible/pda/state.rs`:
- Around line 188-194: No change required: the closed-account handling in
process_account update is correct—when data.is_empty() the code calls
self.remove(&pubkey) to clean both maps and returns; leave the if
data.is_empty() { if self.remove(&pubkey).is_some() { debug!(...) } return
Ok(()); } logic as-is (referenced symbols: data.is_empty(),
self.remove(&pubkey), debug!).
- Around line 118-124: The code correctly handles already-compressed PDAs by
calling the trait method self.remove(&pubkey) inside the
compression_info.is_compressed() branch, which clears both the accounts map and
pending set; no code change is needed—keep the compression_info.is_compressed()
check and the call to self.remove(&pubkey) (and the debug! message) as-is in
state.rs.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 379-431: Summary: The Pinocchio and Anchor dispatch paths were
intended to be separated and the current implementation emits bare sequential
compress arms for Pinocchio and match-arm-style compress arms for Anchor.
Action: Do not merge changes that convert one style into the other; keep
generate_enum_dispatch_method_with_backend as-is, but verify that
generate_compress_arms produces the correct shape for each backend (blocks for
Pinocchio, match arms for Anchor) and that symbols borrow_error, account_crate,
compress_dispatch, and enum_name are referenced consistently; run the
unit/integration tests that exercise compress_dispatch to confirm behavior
before approving the PR.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 93c98c4 and 4f2aa34.

📒 Files selected for processing (6)
  • forester/src/compressible/ctoken/compressor.rs
  • forester/src/compressible/mint/compressor.rs
  • forester/src/compressible/mint/state.rs
  • forester/src/compressible/pda/state.rs
  • sdk-libs/macros/src/light_pdas/program/compress.rs
  • sdk-libs/macros/src/light_pdas/program/instructions.rs

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
forester/src/compressible/mint/compressor.rs (1)

182-189: 🧹 Nitpick | 🔵 Trivial

Nice defensive guard against max_concurrent == 0.

Worth noting: the PDA compressor's compress_batch_concurrent doesn't have this guard, which would cause buffer_unordered(0) to panic. Consider adding the same check there for consistency.

#!/bin/bash
# Check if PDA compressor has the same guard
rg -nP 'max_concurrent\s*==\s*0|buffer_unordered' forester/src/compressible/pda/compressor.rs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/mint/compressor.rs` around lines 182-189: add the
same defensive guard in the PDA compressor's compress_batch_concurrent function
to check if max_concurrent == 0 and immediately return an iterator/collection of
Err((mint_state, anyhow::anyhow!("max_concurrent must be > 0"))) for each
mint_state, mirroring the logic used in mint/compressor.rs; this prevents
buffer_unordered(0) from panicking and keeps behavior consistent for mint_states
and error handling in compress_batch_concurrent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 136-140: The confirm_transaction call in the compressor currently
uses `.map_err(...)?` which returns early on error without calling
`self.tracker.unmark_pending(&pubkeys)` and thus leaves accounts marked pending;
replace the `.map_err(...)?` usage for
`rpc.confirm_transaction(signature).await` with an explicit match (or let result
= ...; match result { Ok(confirmed) => ..., Err(e) => {
self.tracker.unmark_pending(&pubkeys); return Err(anyhow::anyhow!("Failed to
confirm transaction: {:?}", e)); } })` so that on error you unmark pending via
`self.tracker.unmark_pending(&pubkeys)` before propagating the error; keep the
same error message formatting and continue using `rpc`, `signature`, and
`pubkeys` identifiers.

In `@forester/src/compressible/pda/compressor.rs`:
- Around line 324-328: The call to rpc.confirm_transaction(signature).await
currently uses the ? operator which returns early on Err and skips unmarking
accounts, leaving them stuck in the pending set; update the logic in the
compressor.rs flow where confirm_transaction is called (the block that computes
confirmed) to match on the Result instead of using ?, and on Err call
unmark_pending (the same helper used elsewhere) for the affected accounts before
returning the error (preserve the original error propagation). Ensure you
reference the same signature variable and pending-set/unmark_pending logic used
by the ctoken compressor to unmark the accounts on failure.
- Around line 376-388: The pre-check branch that handles a missing on-chain PDA
currently calls self.tracker.remove_compressed(pda), which incorrectly
increments compressed_count when this compressor did not perform the
compression; change that call to self.tracker.remove(pda) so the tracker treats
this as a plain removal (match the mint compressor behavior around mint_pda),
keeping the account_info.is_none() check and the debug log intact.
- Around line 298-300: The code recomputes the same Vec<Pubkey> from
account_states twice; instead of calling account_states.iter().map(|s|
s.pubkey).collect() again at the mark_pending call, reuse the first binding you
created earlier (the initial Vec<Pubkey> produced from account_states) and pass
that to self.tracker.mark_pending(&pubkeys) — ensure the reused binding is in
scope (and of type Vec<Pubkey>) so you can borrow it (&pubkeys) rather than
recomputing.

In `@forester/src/compressible/traits.rs`:
- Around line 103-121: The function verify_transaction_execution currently
treats a missing status (statuses.first() == None or Some(None)) as success;
update verify_transaction_execution to handle unknown status explicitly by
retrying the RPC a few times with a short delay (e.g., 3 attempts with a small
sleep) and only proceed to check err after a successful Some(status); if after
retries the status is still None, return an Err (or at minimum log a warning and
return an Err) so callers won't assume success; reference the
verify_transaction_execution function, rpc.get_signature_statuses call, and the
statuses.first()/status.err check when implementing the retry+delay and the
final error/log path.

---

Outside diff comments:
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 182-189: Add the same defensive guard in the PDA compressor's
compress_batch_concurrent function to check if max_concurrent == 0 and
immediately return an iterator/collection of Err((mint_state,
anyhow::anyhow!("max_concurrent must be > 0"))) for each mint_state, mirroring
the logic used in mint/compressor.rs; this prevents buffer_unordered(0) from
panicking and keeps behavior consistent for mint_states and error handling in
compress_batch_concurrent.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4f2aa34 and 7fe15c1.

📒 Files selected for processing (5)
  • forester/src/compressible/ctoken/compressor.rs
  • forester/src/compressible/mint/compressor.rs
  • forester/src/compressible/mint/state.rs
  • forester/src/compressible/pda/compressor.rs
  • forester/src/compressible/traits.rs

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
forester/src/compressible/pda/compressor.rs (1)

159-192: ⚠️ Potential issue | 🔴 Critical

Guard buffer_unordered against max_concurrent == 0 to prevent panic and pending leaks.

The PDA compressor marks all accounts pending upfront (line 161), then calls buffer_unordered(max_concurrent) at line 190 without validating the bound. Since buffer_unordered panics when passed 0, any panic leaves accounts stranded in the pending tracker. The mint compressor already implements this guard correctly—the PDA compressor should match that pattern.

🐛 Proposed fix
     if account_states.is_empty() {
         return Vec::new();
     }

+    // Guard against max_concurrent == 0 to avoid buffer_unordered panic
+    if max_concurrent == 0 {
+        return account_states
+            .iter()
+            .cloned()
+            .map(|account_state| {
+                Err((account_state, anyhow::anyhow!("max_concurrent must be > 0")))
+            })
+            .collect();
+    }
+
     // Mark all accounts as pending upfront so concurrent cycles skip them
     let all_pubkeys: Vec<Pubkey> = account_states.iter().map(|s| s.pubkey).collect();
     self.tracker.mark_pending(&all_pubkeys);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/pda/compressor.rs` around lines 159-192: the PDA
compressor currently calls
futures::stream::iter(...).buffer_unordered(max_concurrent) without validating
max_concurrent, which can panic when it is 0 and leave accounts marked pending;
update the logic around where max_concurrent is used (the buffer_unordered call
in the async block that builds compression_futures) to early-return or set a
safe minimum (e.g., treat 0 as 1) — follow the same guard pattern used by the
mint compressor: check max_concurrent before creating the stream and either skip
processing/unmark all via tracker.unmark_pending(&all_pubkeys) and return early,
or clamp max_concurrent to at least 1 so buffer_unordered never receives 0;
ensure to reference compressor.tracker.mark_pending/unmark_pending, the
compression_futures creation, and the compressor.compress/cancelled.load checks
when implementing the guard.
♻️ Duplicate comments (1)
forester/src/compressible/pda/compressor.rs (1)

279-300: 🧹 Nitpick | 🔵 Trivial

Reuse the existing pubkeys vector in compress_batch.

Line 280 already builds pubkeys, then Line 299 rebuilds the same vector before mark_pending. Reusing the first binding removes redundant allocation/work.

♻️ Proposed cleanup
         // Build pubkeys array
         let pubkeys: Vec<Pubkey> = account_states.iter().map(|s| s.pubkey).collect();
@@
         // Mark as pending before sending
-        let pubkeys: Vec<Pubkey> = account_states.iter().map(|s| s.pubkey).collect();
         self.tracker.mark_pending(&pubkeys);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/pda/compressor.rs` around lines 279-300: the code
in compress_batch builds pubkeys once then rebuilds it before calling
self.tracker.mark_pending; remove the second duplicate declaration and reuse the
earlier pubkeys binding (the Vec<Pubkey> created for
build_compress_accounts_idempotent) by passing &pubkeys to
self.tracker.mark_pending, ensuring you don't shadow or reallocate pubkeys and
that ownership/borrowing matches mark_pending's signature; update only the
duplicate pubkeys creation (the one before self.tracker.mark_pending) to use the
existing variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 159-192: The PDA compressor currently calls
futures::stream::iter(...).buffer_unordered(max_concurrent) without validating
max_concurrent, which can panic when it is 0 and leave accounts marked pending;
update the logic around where max_concurrent is used (the buffer_unordered call
in the async block that builds compression_futures) to early-return or set a
safe minimum (e.g., treat 0 as 1) — follow the same guard pattern used by the
mint compressor: check max_concurrent before creating the stream and either skip
processing/unmark all via tracker.unmark_pending(&all_pubkeys) and return early,
or clamp max_concurrent to at least 1 so buffer_unordered never receives 0;
ensure to reference compressor.tracker.mark_pending/unmark_pending, the
compression_futures creation, and the compressor.compress/cancelled.load checks
when implementing the guard.

---

Duplicate comments:
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 279-300: The code in compress_batch builds pubkeys once then
rebuilds it before calling self.tracker.mark_pending; remove the second
duplicate declaration and reuse the earlier pubkeys binding (the Vec<Pubkey>
created for build_compress_accounts_idempotent) by passing &pubkeys to
self.tracker.mark_pending, ensuring you don't shadow or reallocate pubkeys and
that ownership/borrowing matches mark_pending's signature; update only the
duplicate pubkeys creation (the one before self.tracker.mark_pending) to use the
existing variable.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7fe15c1 and 6120ced.

📒 Files selected for processing (3)
  • forester/src/compressible/mint/compressor.rs
  • forester/src/compressible/pda/compressor.rs
  • forester/src/compressible/traits.rs

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
forester/src/compressible/pda/compressor.rs (1)

280-315: ⚠️ Potential issue | 🟠 Major

Pending is marked too late in compress_batch — large TOCTOU window exposes accounts to concurrent processing.

mark_pending fires at line 299, well after fetch_program_config, six-plus network round-trips (parallel compressed-account fetches, validity proof), and instruction building. Any concurrent cycle that calls compress_batch_concurrent or another compress_batch during that window will see the same accounts as not pending and start a redundant (or conflicting) compression path.

compress_batch_concurrent correctly marks upfront (line 161, before any work). compress_batch should do the same: move mark_pending to immediately after the pubkeys binding is built (line 280), and propagate unmark_pending to each early-return error path that fires before the instruction is sent.

🐛 Proposed fix — mark pending immediately after pubkeys are available
         // Build pubkeys array
         let pubkeys: Vec<Pubkey> = account_states.iter().map(|s| s.pubkey).collect();

+        // Mark as pending immediately to prevent concurrent cycles from picking up these accounts
+        self.tracker.mark_pending(&pubkeys);

         // Build single batched instruction
         let ix = build_compress_accounts_idempotent(
             program_id,
             &COMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR,
             &pubkeys,
             &cached_config.program_metas,
             proof_with_context,
         )
-        .map_err(|e| anyhow::anyhow!("Failed to build instruction: {:?}", e))?;
+        .map_err(|e| {
+            self.tracker.unmark_pending(&pubkeys);
+            anyhow::anyhow!("Failed to build instruction: {:?}", e)
+        })?;

         debug!(
             "Built batched compress_accounts_idempotent for {} PDAs (program {})",
             account_states.len(),
             program_id
         );

-        // Mark as pending before sending
-        self.tracker.mark_pending(&pubkeys);

         // Send single transaction
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/compressible/pda/compressor.rs` around lines 280-315: move the
pending-marking earlier in compress_batch: after computing pubkeys (the pubkeys:
Vec<Pubkey> binding) call self.tracker.mark_pending(&pubkeys) immediately to
close the TOCTOU window (mirroring compress_batch_concurrent which marks
upfront), and add corresponding self.tracker.unmark_pending(&pubkeys) calls on
every early-return/error path that can occur before create_and_send_transaction
(e.g., errors from fetch_program_config, parallel account fetches,
proof_with_context building, and build_compress_accounts_idempotent) so pending
state is cleared on failure; keep the existing unmark in the transaction send
Err branch and ensure you reference compress_batch, compress_batch_concurrent,
mark_pending, unmark_pending, build_compress_accounts_idempotent, and
create_and_send_transaction when making the changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 280-315: Move the pending-marking earlier in compress_batch: after
computing pubkeys (the pubkeys: Vec<Pubkey> binding) call
self.tracker.mark_pending(&pubkeys) immediately to close the TOCTOU window
(mirroring compress_batch_concurrent which marks upfront), and add corresponding
self.tracker.unmark_pending(&pubkeys) calls on every early-return/error path
that can occur before create_and_send_transaction (e.g., errors from
fetch_program_config, parallel account fetches, proof_with_context building, and
build_compress_accounts_idempotent) so pending state is cleared on failure; keep
the existing unmark in the transaction send Err branch and ensure you reference
compress_batch, compress_batch_concurrent, mark_pending, unmark_pending,
build_compress_accounts_idempotent, and create_and_send_transaction when making
the changes.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6120ced and 858960a.

📒 Files selected for processing (1)
  • forester/src/compressible/pda/compressor.rs
