1 change: 1 addition & 0 deletions Cargo.lock

Some generated files are not rendered by default.

22 changes: 18 additions & 4 deletions forester/tests/test_compressible_ctoken.rs
@@ -324,12 +324,26 @@ async fn test_compressible_ctoken_compression() {
assert_eq!(accounts.len(), 1);
let account_state = &accounts[0];
assert_eq!(account_state.pubkey, token_account_pubkey);
assert_eq!(account_state.account.mint, mint.to_bytes());
use light_token_interface::state::{AccountState, Token, ACCOUNT_TYPE_TOKEN_ACCOUNT};
assert_eq!(
account_state.account.owner,
owner_keypair.pubkey().to_bytes()
account_state.account,
Token {
mint: mint.to_bytes().into(),
owner: owner_keypair.pubkey().to_bytes().into(),
amount: 0,
delegate: None,
state: AccountState::Initialized,
is_native: None,
delegated_amount: 0,
close_authority: None,
account_type: ACCOUNT_TYPE_TOKEN_ACCOUNT,
extensions: account_state.account.extensions.clone(),
}
);
assert!(
account_state.account.extensions.is_some(),
"compressible token account should have extensions"
);
Comment on lines 328 to 346
Contributor
⚠️ Potential issue | 🟡 Minor

Tautological extensions comparison in assert_eq!

Line 340 sets extensions: account_state.account.extensions.clone(), so that field trivially matches and contributes nothing to the equality assertion. The test looks as though it's verifying the extensions value, but it's only verifying all the other fields.

The separate is_some() guard on lines 343–346 is the sole real assertion for extensions. To make the intent clear:

🔧 Suggested refactor
 assert_eq!(
     account_state.account,
     Token {
         mint: mint.to_bytes().into(),
         owner: owner_keypair.pubkey().to_bytes().into(),
         amount: 0,
         delegate: None,
         state: AccountState::Initialized,
         is_native: None,
         delegated_amount: 0,
         close_authority: None,
         account_type: ACCOUNT_TYPE_TOKEN_ACCOUNT,
-        extensions: account_state.account.extensions.clone(),
+        // extensions is not asserted here; verified separately below
+        ..account_state.account.clone()
     }
 );

Or, if Token supports a field-by-field comparison without struct-literal syntax, exclude extensions entirely and keep the dedicated is_some() check:

+assert!(account_state.account.extensions.is_some(), "compressible token account should have extensions");
 assert_eq!(account_state.account.amount, 0);
 assert_eq!(account_state.account.state, AccountState::Initialized);
 // … individual field assertions …
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/tests/test_compressible_ctoken.rs` around lines 328 - 346, The
current assert_eq! uses extensions: account_state.account.extensions.clone(),
which tautologically makes the extensions field always match; update the test to
make a real assertion: either remove the extensions field from the expected
Token literal and keep the existing
assert!(account_state.account.extensions.is_some()) check, or replace that field
with the concrete expected extensions value (e.g., the known CompressibleToken
extension data) and assert equality against it; locate the Token struct
construction in test_compressible_ctoken.rs (the assert_eq! comparing
account_state.account) and modify the extensions handling accordingly.
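The tautology and the `..` struct-update fix suggested above can be illustrated with a minimal standalone example. The `Token` struct here is a simplified stand-in, not the real `light_token_interface` type:

```rust
// Simplified stand-in for the real Token type; `extensions` is the field
// the test means to verify separately.
#[derive(Debug, Clone, PartialEq)]
struct Token {
    amount: u64,
    extensions: Option<Vec<u8>>,
}

fn main() {
    let actual = Token { amount: 0, extensions: Some(vec![1, 2, 3]) };

    // Tautological: the expected literal copies `extensions` from the actual
    // value, so that field can never cause the assertion to fail.
    assert_eq!(
        actual,
        Token { amount: 0, extensions: actual.extensions.clone() }
    );

    // Struct-update syntax makes the exclusion explicit: only `amount` is
    // compared against a fixed expectation; the rest is filled from `actual`.
    assert_eq!(actual, Token { amount: 0, ..actual.clone() });

    // The excluded field gets its own dedicated assertion.
    assert!(actual.extensions.is_some());
}
```

Either way, the separate `is_some()` check remains the only real assertion on `extensions`, which is exactly the reviewer's point.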

assert_eq!(account_state.account.amount, 0);
assert!(account_state.lamports > 0);
let lamports = account_state.lamports;
// Test lamports update
48 changes: 42 additions & 6 deletions forester/tests/test_indexer_interface.rs
@@ -515,9 +515,16 @@ async fn test_indexer_interface_scenarios() {
);
println!(" PASSED: Token account interface resolved with correct token data");

// ============ Test 3: getMultipleAccountInterfaces batch lookup ============
println!("\nTest 3: getMultipleAccountInterfaces batch lookup...");
let batch_addresses = vec![&decompressed_mint_pda, &compressible_token_account];
// ============ Test 3: getMultipleAccountInterfaces batch lookup (4 accounts) ============
println!("\nTest 3: getMultipleAccountInterfaces batch lookup (4 accounts)...");
// Include: decompressed mint PDA, compressible token account, compressed mint PDA (not found),
// and the compressed mint seed pubkey (not a known on-chain account).
let batch_addresses = vec![
&decompressed_mint_pda,
&compressible_token_account,
&bob_ata,
&charlie_ata,
];
Comment on lines +520 to +527
Contributor
⚠️ Potential issue | 🟡 Minor

Stale comment describes the wrong accounts

The comment at lines 520–521 still refers to the previous batch contents (compressed mint PDA (not found) and compressed mint seed pubkey), but the actual batch_addresses now uses bob_ata and charlie_ata — both fully on-chain hot accounts.

✏️ Suggested fix
-    // Include: decompressed mint PDA, compressible token account, compressed mint PDA (not found),
-    // and the compressed mint seed pubkey (not a known on-chain account).
+    // Include: decompressed mint PDA (on-chain CMint), compressible token account (on-chain),
+    // Bob's ATA (on-chain), and Charlie's ATA (on-chain).
     let batch_addresses = vec![
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/tests/test_indexer_interface.rs` around lines 520 - 527, The comment
above the batch_addresses vector is stale and mentions "compressed mint PDA (not
found)" and "compressed mint seed pubkey" which no longer match the actual
items; update the comment to accurately describe the current contents:
decompressed_mint_pda, compressible_token_account, bob_ata, and charlie_ata
(both on-chain hot accounts). Locate the comment near batch_addresses and
replace the old descriptions with a brief, correct list referencing
decompressed_mint_pda, compressible_token_account, bob_ata, and charlie_ata.


let batch_response = photon_indexer
.get_multiple_account_interfaces(batch_addresses.clone(), None)
@@ -526,8 +533,8 @@ async fn test_indexer_interface_scenarios() {

assert_eq!(
batch_response.value.len(),
2,
"Batch response should have exactly 2 results"
4,
"Batch response should have exactly 4 results"
);

// First result: decompressed mint
@@ -560,7 +567,36 @@ async fn test_indexer_interface_scenarios() {
batch_token.account.lamports > 0,
"Batch token account should have lamports > 0"
);
println!(" PASSED: Batch lookup returned correct results");

// Third result: Bob's ATA (on-chain token account)
let batch_bob_ata = batch_response.value[2]
.as_ref()
.expect("Bob's ATA should be found in batch");
assert!(batch_bob_ata.is_hot(), "Bob's ATA should be hot (on-chain)");
assert_eq!(batch_bob_ata.key, bob_ata, "Bob's ATA key should match");
assert!(
batch_bob_ata.account.lamports > 0,
"Bob's ATA should have lamports > 0"
);

// Fourth result: Charlie's ATA (on-chain token account)
let batch_charlie_ata = batch_response.value[3]
.as_ref()
.expect("Charlie's ATA should be found in batch");
assert!(
batch_charlie_ata.is_hot(),
"Charlie's ATA should be hot (on-chain)"
);
assert_eq!(
batch_charlie_ata.key, charlie_ata,
"Charlie's ATA key should match"
);
assert!(
batch_charlie_ata.account.lamports > 0,
"Charlie's ATA should have lamports > 0"
);

println!(" PASSED: Batch lookup returned correct results for 4 accounts");

// ============ Test 4: Verify fully compressed mint via getAccountInterface returns None ============
// Fully compressed mints (after CompressAndCloseMint) have full mint data in the compressed DB.
21 changes: 21 additions & 0 deletions sdk-libs/client/src/indexer/config.rs
@@ -28,3 +28,24 @@ impl Default for RetryConfig {
}
}
}

#[cfg(test)]
mod tests {
use super::*;

#[test]
fn test_retry_config_default_values() {
let config = RetryConfig::default();
assert_eq!(config.num_retries, 10);
assert_eq!(config.delay_ms, 400);
assert_eq!(config.max_delay_ms, 8000);
}

#[test]
fn test_indexer_rpc_config_new_sets_slot() {
let slot = 42u64;
let config = IndexerRpcConfig::new(slot);
assert_eq!(config.slot, slot);
assert_eq!(config.retry_config, RetryConfig::default());
}
}
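The defaults exercised above (400 ms base delay, 8000 ms cap, 10 retries) suggest a capped exponential backoff. As a hedged sketch only — whether the client's retry loop actually doubles the delay per attempt is an assumption, and `backoff_delay_ms` is a hypothetical helper, not part of `RetryConfig`:

```rust
// Hypothetical per-attempt delay under doubling backoff, capped at `max`.
// The shift is clamped so large attempt counts cannot overflow the multiply.
fn backoff_delay_ms(base: u64, max: u64, attempt: u32) -> u64 {
    base.saturating_mul(1u64 << attempt.min(32)).min(max)
}

fn main() {
    // With the defaults asserted in the tests above (400 ms base, 8000 ms cap):
    let delays: Vec<u64> = (0..6).map(|a| backoff_delay_ms(400, 8000, a)).collect();
    println!("{delays:?}"); // [400, 800, 1600, 3200, 6400, 8000]
}
```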
110 changes: 110 additions & 0 deletions sdk-libs/client/src/interface/pack.rs
@@ -123,3 +123,113 @@ fn pack_proof_internal(
system_accounts_offset: system_offset as u8,
})
}

#[cfg(test)]
mod tests {
use light_compressed_account::TreeType;
use solana_pubkey::Pubkey;

use super::{pack_proof, pack_proof_for_mints};
use crate::indexer::{TreeInfo, ValidityProofWithContext};

fn make_state_v1_tree_info() -> TreeInfo {
TreeInfo {
tree_type: TreeType::StateV1,
tree: Pubkey::new_unique(),
queue: Pubkey::new_unique(),
cpi_context: None,
next_tree_info: None,
}
}

#[test]
fn test_pack_proof_minimal_valid_proof_no_cpi_context() {
let program_id = Pubkey::new_unique();
let proof = ValidityProofWithContext::default();
let output_tree = make_state_v1_tree_info();

let result = pack_proof(&program_id, proof, &output_tree, None).unwrap();

// v2 system accounts (with self_program, no cpi_context):
// light_system_program, cpi_signer, registered_program_pda,
// account_compression_authority, account_compression_program, system_program = 6
// + output queue = 1
// Total = 7
assert_eq!(
result.remaining_accounts.len(),
7,
"expected 7 remaining accounts without cpi_context"
);
assert_eq!(result.state_tree_index, None);
// system_accounts_offset is 0 because system accounts are prepended
// at the start of remaining_accounts by add_system_accounts_raw
assert_eq!(result.system_accounts_offset, 0);
}

#[test]
fn test_pack_proof_with_cpi_context_adds_extra_account() {
let program_id = Pubkey::new_unique();
let cpi_context_pubkey = Pubkey::new_unique();
let proof = ValidityProofWithContext::default();
let output_tree = make_state_v1_tree_info();

let result_no_cpi = pack_proof(&program_id, proof.clone(), &output_tree, None).unwrap();
let result_with_cpi =
pack_proof(&program_id, proof, &output_tree, Some(cpi_context_pubkey)).unwrap();

// cpi_context adds one more account
assert_eq!(
result_with_cpi.remaining_accounts.len(),
result_no_cpi.remaining_accounts.len() + 1,
"cpi_context should add exactly one account"
);
}

#[test]
fn test_pack_proof_for_mints_adds_state_tree_index() {
let program_id = Pubkey::new_unique();
let proof = ValidityProofWithContext::default();
let output_tree = make_state_v1_tree_info();

let result = pack_proof_for_mints(&program_id, proof, &output_tree, None).unwrap();

// state_tree_index must be Some for mint creation path
assert!(
result.state_tree_index.is_some(),
"pack_proof_for_mints should set state_tree_index"
);
let idx = result.state_tree_index.unwrap();
assert!(
(idx as usize) < result.remaining_accounts.len(),
"state_tree_index must be a valid index into remaining_accounts"
);
}

#[test]
fn test_pack_proof_vs_pack_proof_for_mints_output_tree_index_consistent() {
let program_id = Pubkey::new_unique();
let proof = ValidityProofWithContext::default();
let tree = Pubkey::new_unique();
let queue = Pubkey::new_unique();
let output_tree = TreeInfo {
tree_type: TreeType::StateV1,
tree,
queue,
cpi_context: None,
next_tree_info: None,
};

let r1 = pack_proof(&program_id, proof.clone(), &output_tree, None).unwrap();
let r2 = pack_proof_for_mints(&program_id, proof, &output_tree, None).unwrap();

// Both should have the same output_tree_index since they use the same output_tree
assert_eq!(r1.output_tree_index, r2.output_tree_index);

// pack_proof_for_mints adds exactly one account (the state tree)
assert_eq!(
r2.remaining_accounts.len(),
r1.remaining_accounts.len() + 1,
"pack_proof_for_mints adds exactly one account (the state tree)"
);
}
}
87 changes: 87 additions & 0 deletions sdk-libs/client/src/interface/tx_size.rs
@@ -153,6 +153,93 @@ mod tests {

use super::*;

#[test]
fn test_empty_input_returns_empty_batches() {
let payer = Pubkey::new_unique();
let result = split_by_tx_size(vec![], &payer, None).unwrap();
assert_eq!(result, vec![] as Vec<Vec<Instruction>>);
}

#[test]
fn test_single_small_instruction_one_batch() {
let payer = Pubkey::new_unique();
let ix = Instruction {
program_id: Pubkey::new_unique(),
accounts: vec![AccountMeta::new(Pubkey::new_unique(), false)],
data: vec![0u8; 10],
};
let batches = split_by_tx_size(vec![ix], &payer, None).unwrap();
assert_eq!(batches.len(), 1);
assert_eq!(batches[0].len(), 1);
}

#[test]
fn test_multiple_small_instructions_fit_in_one_batch() {
let payer = Pubkey::new_unique();
// Three tiny instructions that easily fit in one tx
let instructions: Vec<Instruction> = (0..3)
.map(|_| Instruction {
program_id: Pubkey::new_unique(),
accounts: vec![AccountMeta::new(Pubkey::new_unique(), false)],
data: vec![0u8; 5],
})
.collect();
let batches = split_by_tx_size(instructions, &payer, None).unwrap();
assert_eq!(batches.len(), 1);
assert_eq!(batches[0].len(), 3);
}

#[test]
fn test_instructions_split_into_multiple_batches_with_small_max_size() {
let payer = Pubkey::new_unique();
// Use a very small max_size to force each instruction into its own batch.
// Each instruction has 1 account + 10 bytes data. Estimate:
// header(3) + accounts(compact+32*2) + blockhash(32) + ixs + sigs ~ 200 bytes
// Set max_size=250 to allow one instruction per tx but not two.
let max_size = 250usize;
Comment on lines +196 to +199
Contributor
🧹 Nitpick | 🔵 Trivial

Inaccurate account-count estimate in comment

The inline comment says accounts(compact+32*2), implying 2 keys (payer + program_id). Each instruction also contributes a third key via AccountMeta::new(Pubkey::new_unique(), false), so the real account list is [payer, program_id, account] = 3 × 32 = 96 bytes, giving ~212 bytes per single-instruction tx — not ~200. The test logic and assertions are still correct since 212 < 250 (one fits) and ~290 bytes for two (split forced), but the comment formula misleads future readers.

✏️ Suggested comment fix
-        // Each instruction has 1 account + 10 bytes data. Estimate:
-        //   header(3) + accounts(compact+32*2) + blockhash(32) + ixs + sigs ~ 200 bytes
-        // Set max_size=250 to allow one instruction per tx but not two.
+        // Each instruction has 1 account (non-signer) + 10 bytes data. Estimate per single tx:
+        //   header(3) + accounts(compact+3×32=97) + blockhash(32) + ix(~14) + sigs(65) ≈ 211 bytes
+        //   Two instructions share no accounts → ≈ 290 bytes.
+        // max_size=250 allows one instruction per batch but not two → 3 batches for 3 instructions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/tx_size.rs` around lines 196 - 199, The comment
next to max_size in tx_size.rs miscounts instruction accounts: each instruction
adds a third key created via AccountMeta::new(...), so replace the misleading
"accounts(compact+32*2)" part with "accounts(compact+32*3)" and update the byte
math to reflect header(3) + accounts(compact+32*3) + blockhash(32) + ixs + sigs
≈ 212 bytes for one-instruction tx (hence max_size = 250 still allows one but
not two); update the inline numbers to show ~212 for one and ~290 for two so the
comment matches the actual test logic using max_size.
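The corrected arithmetic can be checked directly. This assumes a legacy-transaction wire layout (compact-u16 length prefixes, 64-byte signatures); the byte counts are illustrative back-of-envelope figures, not values read from `estimate_tx_size`:

```rust
// One-instruction legacy tx with 3 distinct account keys
// (payer, program_id, one instruction account) and 10 bytes of ix data.
fn main() {
    let signatures = 1 + 64;        // compact count + one 64-byte signature
    let header = 3;                 // message header
    let account_keys = 1 + 3 * 32;  // compact count + payer + program_id + account
    let blockhash = 32;
    // instructions: compact count + (program idx + accounts vec + data vec)
    let instructions = 1 + (1 + (1 + 1) + (1 + 10));
    let total = signatures + header + account_keys + blockhash + instructions;
    println!("{total}"); // 212 — under max_size=250, so one ix fits per batch
}
```

Two such instructions sharing only the payer come to roughly 290 bytes (5 keys, two ix entries), which is why `max_size = 250` forces one instruction per batch in the test.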

let instructions: Vec<Instruction> = (0..3)
.map(|_| Instruction {
program_id: Pubkey::new_unique(),
accounts: vec![AccountMeta::new(Pubkey::new_unique(), false)],
data: vec![0u8; 10],
})
.collect();
let batches = split_by_tx_size(instructions, &payer, Some(max_size)).unwrap();
assert_eq!(
batches.len(),
3,
"each of 3 instructions should be in its own batch at max_size=250"
);
for batch in &batches {
assert_eq!(
batch.len(),
1,
"each batch should contain exactly one instruction"
);
assert!(estimate_tx_size(batch, &payer) <= max_size);
}
}

#[test]
fn test_single_instruction_exceeding_max_size_returns_error() {
let payer = Pubkey::new_unique();
let oversized_ix = Instruction {
program_id: Pubkey::new_unique(),
accounts: vec![AccountMeta::new(Pubkey::new_unique(), false)],
data: vec![0u8; 100],
};
// Set max_size to 50 - smaller than any valid instruction
let result = split_by_tx_size(vec![oversized_ix], &payer, Some(50));
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.instruction_index, 0);
assert_eq!(err.max_size, 50);
assert!(err.estimated_size > 50);
// Display impl sanity check
let msg = err.to_string();
assert!(msg.contains("instruction at index 0"));
}

#[test]
fn test_split_by_tx_size() {
let payer = Pubkey::new_unique();
Expand Down
1 change: 1 addition & 0 deletions sdk-tests/client-test/Cargo.toml
@@ -30,6 +30,7 @@ light-token = { workspace = true }
light-indexed-array = { workspace = true }
light-merkle-tree-reference = { workspace = true }
light-macros = { workspace = true }
light-event = { workspace = true }

tokio = { workspace = true }
rand = { workspace = true }