From 5b431bb7f5b4617583d268b7835d218b18be2c85 Mon Sep 17 00:00:00 2001
From: Ovi Trif
Date: Fri, 27 Mar 2026 16:56:18 +0100
Subject: [PATCH 1/5] chore: add AGENTS.md with project instructions

AGENTS.md provides build, test, and lint commands, an architecture
overview, and key constraints for AI coding tools. CLAUDE.md is a
symlink to it.

Co-Authored-By: Claude Opus 4.6 (1M context)
---
 AGENTS.md | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 CLAUDE.md |  1 +
 2 files changed, 50 insertions(+)
 create mode 100644 AGENTS.md
 create mode 120000 CLAUDE.md

diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000..d88ad04
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,49 @@
+# AGENTS.md
+
+This file provides guidance to AI coding tools (e.g. Claude Code) when working with code in this repository.
+
+## Project Overview
+
+Rust FFI library (`bitkitcore`) providing Bitcoin & Lightning functionality with UniFFI-generated bindings for iOS (Swift), Android (Kotlin), and Python.
+
+## Build
+
+```bash
+cargo build              # Rust library
+./build.sh               # Platform bindings
+./build.sh -r --patch    # Bump version + build (--minor, --major)
+```
+
+## Test
+
+```bash
+cargo test                    # All tests
+cargo test modules::scanner   # Single module (also: lnurl, onchain, activity, blocktank, trezor, pubky)
+```
+
+## Lint & Format
+
+```bash
+cargo clippy    # Lint
+cargo fmt       # Format Rust
+```
+
+Android bindings use ktlint via a Gradle plugin (`org.jlleitschuh.gradle.ktlint`), excluding generated code.
+
+## Architecture
+
+- `src/lib.rs` — UniFFI exports and module re-exports
+- `src/modules/` — Core modules: scanner, lnurl, onchain, activity, blocktank, trezor, pubky
+- `bindings/` — Platform-specific binding outputs (ios/, android/, python/)
+- `build.sh`, `build_ios.sh`, `build_android.sh`, `build_python.sh` — Build scripts
+
+## Key Constraints
+
+- **Version sync**: The version must match across `Cargo.toml`, `Package.swift`, and `bindings/android/gradle.properties`. Use `build.sh -r` to bump all three.
+- **UniFFI**: Public types exposed to bindings are declared in `src/lib.rs`. Follow existing UniFFI patterns when adding new types.
+- **Platform-specific deps**: Trezor uses Bluetooth only on iOS, and USB+Bluetooth on other platforms (see `Cargo.toml` target-specific dependencies).
+- **Android build**: `build_android.sh` temporarily modifies the `Cargo.toml` crate-type and removes `example/main.rs` during the build — don't run concurrent builds.
+
+## Conventions
+
+- Branch naming: `feat/*`, `fix/*`, `chore/*`
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 120000
index 0000000..47dc3e3
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1 @@
+AGENTS.md
\ No newline at end of file

From 84b6be034addff1c85fc340a40e7a959933c94ec Mon Sep 17 00:00:00 2001
From: Ovi Trif
Date: Fri, 27 Mar 2026 16:56:25 +0100
Subject: [PATCH 2/5] chore: add Claude Code skills and format hook

- /verify skill: runs cargo clippy + cargo test
- /build skill: wrapper for ./build.sh with platform target
- PostToolUse hook: auto-formats .rs files with rustfmt on edit

Co-Authored-By: Claude Opus 4.6 (1M context)
---
 .claude/settings.json          | 17 +++++++++++++++++
 .claude/skills/build/SKILL.md  | 11 +++++++++++
 .claude/skills/verify/SKILL.md | 11 +++++++++++
 3 files changed, 39 insertions(+)
 create mode 100644 .claude/settings.json
 create mode 100644 .claude/skills/build/SKILL.md
 create mode 100644 .claude/skills/verify/SKILL.md

diff --git a/.claude/settings.json b/.claude/settings.json
new file mode 100644
index 0000000..41f35fb
--- /dev/null
+++ b/.claude/settings.json
@@ -0,0 +1,17 @@
+{
+  "hooks": {
+    "PostToolUse": [
+      {
+        "matcher": "Write|Edit",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "jq -r '.tool_input.file_path' | { read -r f; echo \"$f\" | grep -qE '\\.rs$' && rustfmt --edition 2021 \"$f\"; } 2>/dev/null || true",
+            "timeout": 30,
+            "statusMessage": "Formatting Rust..."
+          }
+        ]
+      }
+    ]
+  }
+}
diff --git a/.claude/skills/build/SKILL.md b/.claude/skills/build/SKILL.md
new file mode 100644
index 0000000..93d39c3
--- /dev/null
+++ b/.claude/skills/build/SKILL.md
@@ -0,0 +1,11 @@
+---
+name: build
+description: Build platform bindings using build.sh. Pass a target as argument (ios, android, python, all). Use when you need to generate or test platform-specific bindings.
+disable-model-invocation: true
+---
+
+Run `./build.sh $ARGUMENTS` from the project root.
+
+If no arguments are provided, ask the user which target to build (ios, android, python, all).
+
+For release builds, remind the user to use `-r` with a version bump flag (`--patch`, `--minor`, or `--major`).
diff --git a/.claude/skills/verify/SKILL.md b/.claude/skills/verify/SKILL.md
new file mode 100644
index 0000000..d08d6d0
--- /dev/null
+++ b/.claude/skills/verify/SKILL.md
@@ -0,0 +1,11 @@
+---
+name: verify
+description: Run clippy and tests to verify the codebase compiles cleanly and all tests pass. Use after making changes or before committing.
+---
+
+Run the following checks in sequence, stopping on the first failure:
+
+1. `cargo clippy -- -D warnings` — ensure no lint warnings
+2. `cargo test` — run all tests
+
+Report the results concisely. If clippy or tests fail, show the relevant errors and suggest fixes.
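[Editor's note on the hook in patch 2] The PostToolUse hook receives the tool invocation as JSON on stdin, extracts `tool_input.file_path` with `jq`, and runs `rustfmt` only when the path ends in `.rs`. A minimal sketch of that filter step, with `rustfmt` swapped for an `echo` so the pipeline can be exercised without a Rust toolchain; the JSON payloads here are simplified stand-ins for what Claude Code actually sends:

```shell
#!/bin/sh
# Stand-in for the JSON piped to a PostToolUse hook (the real payload
# carries more fields than tool_input.file_path).
payload='{"tool_input":{"file_path":"src/lib.rs"}}'

# Same filter shape as the hook command: extract the path, gate on the
# .rs suffix, then act on the file (here: echo instead of rustfmt).
result=$(printf '%s' "$payload" \
  | jq -r '.tool_input.file_path' \
  | { read -r f; echo "$f" | grep -qE '\.rs$' && echo "would format: $f"; })
echo "$result"

# A non-.rs path fails the grep, so rustfmt would never run; the hook's
# trailing `|| true` keeps that from being reported as a hook error.
skipped=$(printf '%s' '{"tool_input":{"file_path":"README.md"}}' \
  | jq -r '.tool_input.file_path' \
  | { read -r f; echo "$f" | grep -qE '\.rs$' && echo "would format: $f"; } \
  || echo "skipped")
echo "$skipped"
```

The `2>/dev/null || true` in the real hook command serves the same purpose as the fallback branch above: a non-matching or missing path must not surface as a hook failure to the agent.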
From 66810eb1e1ed17b69b485dc4f2b4f2ad35f9b36c Mon Sep 17 00:00:00 2001 From: Ovi Trif Date: Fri, 27 Mar 2026 16:56:29 +0100 Subject: [PATCH 3/5] style: apply rustfmt formatting Co-Authored-By: Claude Opus 4.6 (1M context) --- example/main.rs | 23 +- src/lib.rs | 1682 ++++++++++++++------- src/modules/activity/errors.rs | 30 +- src/modules/activity/implementation.rs | 1653 ++++++++++++-------- src/modules/activity/mod.rs | 6 +- src/modules/activity/tests.rs | 1930 ++++++++++++++++-------- src/modules/activity/types.rs | 22 +- src/modules/blocktank/api.rs | 232 +-- src/modules/blocktank/db.rs | 507 ++++--- src/modules/blocktank/errors.rs | 57 +- src/modules/blocktank/liquidity.rs | 17 +- src/modules/blocktank/mod.rs | 10 +- src/modules/blocktank/models.rs | 10 +- src/modules/blocktank/tests.rs | 1096 +++++++++----- src/modules/blocktank/types.rs | 117 +- src/modules/lnurl/errors.rs | 6 +- src/modules/lnurl/implementation.rs | 77 +- src/modules/lnurl/mod.rs | 12 +- src/modules/lnurl/tests.rs | 130 +- src/modules/lnurl/types.rs | 2 +- src/modules/lnurl/utils.rs | 5 +- src/modules/mod.rs | 8 +- src/modules/onchain/compose.rs | 56 +- src/modules/onchain/implementation.rs | 342 +++-- src/modules/onchain/mod.rs | 16 +- src/modules/onchain/tests.rs | 626 +++++--- src/modules/onchain/types.rs | 38 +- src/modules/pubky/auth.rs | 19 +- src/modules/pubky/keys.rs | 13 +- src/modules/pubky/profile.rs | 46 +- src/modules/pubky/resolve.rs | 13 +- src/modules/pubky/session.rs | 55 +- src/modules/pubky/tests.rs | 11 +- src/modules/scanner/errors.rs | 30 +- src/modules/scanner/implementation.rs | 192 +-- src/modules/scanner/mod.rs | 6 +- src/modules/scanner/tests.rs | 61 +- src/modules/scanner/types.rs | 6 +- src/modules/scanner/utils.rs | 5 +- src/modules/trezor/account_info.rs | 2 +- src/modules/trezor/callbacks.rs | 9 +- src/modules/trezor/errors.rs | 13 +- src/modules/trezor/implementation.rs | 189 ++- src/modules/trezor/mod.rs | 12 +- src/modules/trezor/tests.rs | 72 +- 
src/modules/trezor/types.rs | 1 - 46 files changed, 5998 insertions(+), 3467 deletions(-) diff --git a/example/main.rs b/example/main.rs index aaabf73..bf9d5f1 100644 --- a/example/main.rs +++ b/example/main.rs @@ -1,9 +1,10 @@ - use bitkitcore::*; fn handle_decode_result(result: Result) { match result { - Ok(Scanner::Lightning { invoice: ln_invoice }) => { + Ok(Scanner::Lightning { + invoice: ln_invoice, + }) => { println!("Successfully decoded Lightning invoice:"); println!("Payment hash: {:?}", ln_invoice.payment_hash); println!("Amount: {} sats", ln_invoice.amount_satoshis); @@ -19,7 +20,9 @@ fn handle_decode_result(result: Result) { } } - Ok(Scanner::OnChain { invoice: btc_invoice }) => { + Ok(Scanner::OnChain { + invoice: btc_invoice, + }) => { println!("\nSuccessfully decoded on-chain invoice:"); println!("Address: {}", btc_invoice.address); println!("Amount Sats: {}", btc_invoice.amount_satoshis); @@ -98,7 +101,14 @@ fn handle_decode_result(result: Result) { println!("\nSuccessfully decoded Node Connection:"); println!("URL: {}", url); println!("Network: {}", network); - println!("Type: {}", if url.contains("onion") { "Tor" } else { "Clearnet" }); + println!( + "Type: {}", + if url.contains("onion") { + "Tor" + } else { + "Clearnet" + } + ); } Ok(Scanner::Gift { code, amount }) => { @@ -128,7 +138,8 @@ async fn main() { let legacy_address = "199Grz1BcL5KffikSAtbgngAPgYZZRa3cs"; let random_string = "random_string"; let tor_node_id = "72413cc3e96168cb4320f992bfa483865133dc28d@3phi2gcmu3nsbvux53hixrxjgyg3u6vd6kqy3yq6rlrvudqrjsxir6id.onion:9735"; - let node_id = "039b8b4dd1d88c2c5db374290cda397a8f5d79f312d6ea5d5bfdfc7c6ff363eae3@34.65.111.104:9735"; + let node_id = + "039b8b4dd1d88c2c5db374290cda397a8f5d79f312d6ea5d5bfdfc7c6ff363eae3@34.65.111.104:9735"; let gift_code = "bitkit://gift-ABC123XYZ-50000"; let invalid_gift_code = "bitkit://gift-TEST-notanumber"; @@ -136,7 +147,7 @@ async fn main() { println!("\n=== Testing Gift Code Parsing ==="); 
println!("Decoding: {}", gift_code); handle_decode_result(Scanner::decode(gift_code.to_string()).await); - + // Test with invalid amount println!("\nDecoding invalid gift code: {}", invalid_gift_code); handle_decode_result(Scanner::decode(invalid_gift_code.to_string()).await); diff --git a/src/lib.rs b/src/lib.rs index 582bce5..37efff5 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -20,36 +20,51 @@ mod modules; use once_cell::sync::OnceCell; // Re-export Trezor callback types and traits so UniFFI discovers them at the crate root -pub use crate::modules::trezor::{ - TrezorTransportReadResult, TrezorTransportWriteResult, TrezorCallMessageResult, - NativeDeviceInfo, TrezorTransportCallback, - trezor_set_transport_callback, get_transport_callback, - trezor_is_ble_available, - TrezorUiCallback, trezor_set_ui_callback, +use crate::activity::{ + Activity, ActivityDB, ActivityError, ActivityFilter, ActivityTags, ClosedChannelDetails, + DbError, LightningActivity, OnchainActivity, PaymentType, PreActivityMetadata, SortDirection, + TransactionDetails, }; -pub use modules::scanner::{ - Scanner, - DecodingError +use crate::modules::blocktank::{ + BlocktankDB, BlocktankError, BtOrderState2, CJitStateEnum, ChannelLiquidityOptions, + ChannelLiquidityParams, CreateCjitOptions, CreateOrderOptions, DefaultLspBalanceParams, + IBt0ConfMinTxFeeWindow, IBtBolt11Invoice, IBtEstimateFeeResponse, IBtEstimateFeeResponse2, + IBtInfo, IBtOrder, ICJitEntry, IGift, }; -pub use modules::lnurl; -pub use modules::onchain; -pub use modules::activity; use crate::modules::pubky::{PubkyError, PubkyProfile}; -use crate::activity::{ActivityError, ActivityDB, OnchainActivity, LightningActivity, Activity, ActivityFilter, SortDirection, PaymentType, DbError, ClosedChannelDetails, ActivityTags, PreActivityMetadata, TransactionDetails}; -use crate::modules::blocktank::{BlocktankDB, BlocktankError, IBtInfo, IBtOrder, CreateOrderOptions, BtOrderState2, IBt0ConfMinTxFeeWindow, IBtEstimateFeeResponse, 
IBtEstimateFeeResponse2, CreateCjitOptions, ICJitEntry, CJitStateEnum, IBtBolt11Invoice, IGift, ChannelLiquidityOptions, ChannelLiquidityParams, DefaultLspBalanceParams}; -use crate::onchain::{AddressError, BroadcastError, AccountInfoError, ValidationResult, GetAddressResponse, Network, GetAddressesResponse, SweepError, SweepResult, SweepTransactionPreview, SweepableBalances, broadcast_raw_tx, AccountInfoResult, SingleAddressInfoResult, AccountType, get_account_info, get_address_info, get_transaction_history, get_transaction_detail, TransactionHistoryResult, TransactionDetail}; -use crate::modules::trezor::{TrezorError, TrezorDeviceInfo, TrezorFeatures, TrezorGetAddressParams, TrezorAddressResponse, TrezorGetPublicKeyParams, TrezorPublicKeyResponse, TrezorScriptType, TrezorManager, TrezorSignMessageParams, TrezorSignedMessageResponse, TrezorVerifyMessageParams, TrezorSignTxParams, TrezorSignedTx, TrezorCoinType}; use crate::modules::trezor::account_type_to_script_type; -use crate::onchain::{compose_transaction, ComposeParams, ComposeResult}; +pub use crate::modules::trezor::{ + get_transport_callback, trezor_is_ble_available, trezor_set_transport_callback, + trezor_set_ui_callback, NativeDeviceInfo, TrezorCallMessageResult, TrezorTransportCallback, + TrezorTransportReadResult, TrezorTransportWriteResult, TrezorUiCallback, +}; +use crate::modules::trezor::{ + TrezorAddressResponse, TrezorCoinType, TrezorDeviceInfo, TrezorError, TrezorFeatures, + TrezorGetAddressParams, TrezorGetPublicKeyParams, TrezorManager, TrezorPublicKeyResponse, + TrezorScriptType, TrezorSignMessageParams, TrezorSignTxParams, TrezorSignedMessageResponse, + TrezorSignedTx, TrezorVerifyMessageParams, +}; pub use crate::onchain::WordCount; +use crate::onchain::{ + broadcast_raw_tx, get_account_info, get_address_info, get_transaction_detail, + get_transaction_history, AccountInfoError, AccountInfoResult, AccountType, AddressError, + BroadcastError, GetAddressResponse, GetAddressesResponse, Network, 
SingleAddressInfoResult, + SweepError, SweepResult, SweepTransactionPreview, SweepableBalances, TransactionDetail, + TransactionHistoryResult, ValidationResult, +}; +use crate::onchain::{compose_transaction, ComposeParams, ComposeResult}; +pub use modules::activity; +pub use modules::lnurl; +pub use modules::onchain; +pub use modules::scanner::{DecodingError, Scanner}; -use std::sync::Mutex as StdMutex; -use tokio::runtime::Runtime; -use tokio::sync::Mutex as TokioMutex; use bip39::Mnemonic; use bitcoin::bip32::Xpriv; use bitcoin::Network as BitcoinNetwork; use std::str::FromStr; +use std::sync::Mutex as StdMutex; +use tokio::runtime::Runtime; +use tokio::sync::Mutex as TokioMutex; pub struct DatabaseConnections { activity_db: Option, @@ -65,15 +80,13 @@ static RUNTIME: OnceCell = OnceCell::new(); static TREZOR_MANAGER: OnceCell = OnceCell::new(); fn ensure_runtime() -> &'static Runtime { - RUNTIME.get_or_init(|| { - Runtime::new().expect("Failed to create Tokio runtime") - }) + RUNTIME.get_or_init(|| Runtime::new().expect("Failed to create Tokio runtime")) } /// Helper function to get a reference to the activity database connections fn get_activity_db() -> Result, ActivityError> { let cell = DB.get().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() + error_details: "Database not initialized. 
Call init_db first.".to_string(), })?; Ok(cell.lock().unwrap()) } @@ -81,17 +94,20 @@ fn get_activity_db() -> Result Result { let rt = ensure_runtime(); - rt.spawn(async move { - Scanner::decode(invoice).await - }).await.unwrap() + rt.spawn(async move { Scanner::decode(invoice).await }) + .await + .unwrap() } #[uniffi::export] -pub async fn get_lnurl_invoice(address: String, amount_satoshis: u64) -> Result { +pub async fn get_lnurl_invoice( + address: String, + amount_satoshis: u64, +) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - lnurl::get_lnurl_invoice(&address, amount_satoshis).await - }).await.unwrap() + rt.spawn(async move { lnurl::get_lnurl_invoice(&address, amount_satoshis).await }) + .await + .unwrap() } #[uniffi::export] @@ -135,9 +151,9 @@ pub async fn lnurl_auth( network: Option, bip39_passphrase: Option, ) -> Result { - let mnemonic = Mnemonic::parse(&bip32_mnemonic) - .map_err(|_| lnurl::LnurlError::AuthenticationFailed)?; - + let mnemonic = + Mnemonic::parse(&bip32_mnemonic).map_err(|_| lnurl::LnurlError::AuthenticationFailed)?; + let bitcoin_network = match network.unwrap_or(Network::Bitcoin) { Network::Bitcoin => BitcoinNetwork::Bitcoin, Network::Testnet => BitcoinNetwork::Testnet, @@ -145,32 +161,33 @@ pub async fn lnurl_auth( Network::Signet => BitcoinNetwork::Signet, Network::Regtest => BitcoinNetwork::Regtest, }; - + let seed = mnemonic.to_seed(bip39_passphrase.as_deref().unwrap_or("")); let root = Xpriv::new_master(bitcoin_network, &seed) .map_err(|_| lnurl::LnurlError::AuthenticationFailed)?; - + // Derive hashing key using m/138'/0 path (as per LUD-05) let hashing_path = bitcoin::bip32::DerivationPath::from_str("m/138'/0") .map_err(|_| lnurl::LnurlError::AuthenticationFailed)?; - + let secp = bitcoin::secp256k1::Secp256k1::new(); - let hashing_key_xpriv = root.derive_priv(&secp, &hashing_path) + let hashing_key_xpriv = root + .derive_priv(&secp, &hashing_path) .map_err(|_| lnurl::LnurlError::AuthenticationFailed)?; - + let 
hashing_key_bytes = hashing_key_xpriv.private_key.secret_bytes(); - + let params = lnurl::LnurlAuthParams { domain, k1, callback, hashing_key: hashing_key_bytes, }; - + let rt = ensure_runtime(); - rt.spawn(async move { - lnurl::lnurl_auth(params).await - }).await.unwrap() + rt.spawn(async move { lnurl::lnurl_auth(params).await }) + .await + .unwrap() } #[uniffi::export] @@ -266,7 +283,10 @@ pub fn entropy_to_mnemonic(entropy: Vec) -> Result { } #[uniffi::export] -pub fn mnemonic_to_seed(mnemonic_phrase: String, passphrase: Option) -> Result, AddressError> { +pub fn mnemonic_to_seed( + mnemonic_phrase: String, + passphrase: Option, +) -> Result, AddressError> { onchain::BitcoinAddressValidator::mnemonic_to_seed(&mnemonic_phrase, passphrase.as_deref()) } @@ -342,26 +362,17 @@ pub async fn broadcast_sweep_transaction( #[uniffi::export] pub fn init_db(base_path: String) -> Result { // Initialize sync database state - DB.get_or_init(|| { - StdMutex::new(DatabaseConnections { - activity_db: None, - }) - }); + DB.get_or_init(|| StdMutex::new(DatabaseConnections { activity_db: None })); // Initialize async database state - ASYNC_DB.get_or_init(|| { - TokioMutex::new(AsyncDatabaseConnections { - blocktank_db: None, - }) - }); + ASYNC_DB.get_or_init(|| TokioMutex::new(AsyncDatabaseConnections { blocktank_db: None })); // Create runtime for async operations let rt = ensure_runtime(); // Create database connections let activity_db = ActivityDB::new(&format!("{}/activity.db", base_path))?; - let blocktank_db = rt.block_on(async { - BlocktankDB::new(&format!("{}/blocktank.db", base_path), None).await - })?; + let blocktank_db = rt + .block_on(async { BlocktankDB::new(&format!("{}/blocktank.db", base_path), None).await })?; // Initialize sync database { @@ -390,30 +401,48 @@ pub fn get_activities( min_date: Option, max_date: Option, limit: Option, - sort_direction: Option + sort_direction: Option, ) -> Result, ActivityError> { let guard = get_activity_db()?; - let db = 
guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; - db.get_activities(filter, tx_type, tags, search, min_date, max_date, limit, sort_direction) + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; + db.get_activities( + filter, + tx_type, + tags, + search, + min_date, + max_date, + limit, + sort_direction, + ) } #[uniffi::export] pub fn upsert_activity(activity: Activity) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.upsert_activity(&activity) } #[uniffi::export] pub fn insert_activity(activity: Activity) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. 
Call init_db first.".to_string(), + })?; match activity { Activity::Onchain(onchain) => db.insert_onchain_activity(&onchain), Activity::Lightning(lightning) => db.insert_lightning_activity(&lightning), @@ -423,219 +452,312 @@ pub fn insert_activity(activity: Activity) -> Result<(), ActivityError> { #[uniffi::export] pub fn update_activity(activity_id: String, activity: Activity) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; match activity { Activity::Onchain(onchain) => db.update_onchain_activity_by_id(&activity_id, &onchain), - Activity::Lightning(lightning) => db.update_lightning_activity_by_id(&activity_id, &lightning), + Activity::Lightning(lightning) => { + db.update_lightning_activity_by_id(&activity_id, &lightning) + } } } #[uniffi::export] pub fn get_activity_by_id(activity_id: String) -> Result, ActivityError> { let guard = get_activity_db()?; - let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.get_activity_by_id(&activity_id) } #[uniffi::export] pub fn get_activity_by_tx_id(tx_id: String) -> Result, ActivityError> { let guard = get_activity_db()?; - let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. 
Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.get_activity_by_tx_id(&tx_id) } #[uniffi::export] pub fn delete_activity_by_id(activity_id: String) -> Result { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.delete_activity_by_id(&activity_id) } #[uniffi::export] pub fn add_tags(activity_id: String, tags: Vec) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.add_tags(&activity_id, &tags) } #[uniffi::export] pub fn remove_tags(activity_id: String, tags: Vec) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.remove_tags(&activity_id, &tags) } #[uniffi::export] pub fn get_tags(activity_id: String) -> Result, ActivityError> { let guard = get_activity_db()?; - let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. 
Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.get_tags(&activity_id) } #[uniffi::export] -pub fn get_activities_by_tag(tag: String, limit: Option, sort_direction: Option) -> Result, ActivityError> { +pub fn get_activities_by_tag( + tag: String, + limit: Option, + sort_direction: Option, +) -> Result, ActivityError> { let guard = get_activity_db()?; - let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.get_activities_by_tag(&tag, limit, sort_direction) } #[uniffi::export] pub fn get_all_unique_tags() -> Result, ActivityError> { let guard = get_activity_db()?; - let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.get_all_unique_tags() } #[uniffi::export] pub fn get_all_activities_tags() -> Result, ActivityError> { let guard = get_activity_db()?; - let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_ref() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. 
Call init_db first.".to_string(), + })?; db.get_all_activities_tags() } #[uniffi::export] pub fn upsert_tags(activity_tags: Vec) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.upsert_tags(&activity_tags) } #[uniffi::export] -pub fn add_pre_activity_metadata(pre_activity_metadata: PreActivityMetadata) -> Result<(), ActivityError> { +pub fn add_pre_activity_metadata( + pre_activity_metadata: PreActivityMetadata, +) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.add_pre_activity_metadata(&pre_activity_metadata) } #[uniffi::export] -pub fn add_pre_activity_metadata_tags(payment_id: String, tags: Vec) -> Result<(), ActivityError> { +pub fn add_pre_activity_metadata_tags( + payment_id: String, + tags: Vec, +) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. 
Call init_db first.".to_string(), + })?; db.add_pre_activity_metadata_tags(&payment_id, &tags) } #[uniffi::export] -pub fn remove_pre_activity_metadata_tags(payment_id: String, tags: Vec) -> Result<(), ActivityError> { +pub fn remove_pre_activity_metadata_tags( + payment_id: String, + tags: Vec, +) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.remove_pre_activity_metadata_tags(&payment_id, &tags) } #[uniffi::export] pub fn reset_pre_activity_metadata_tags(payment_id: String) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. Call init_db first.".to_string(), + })?; db.reset_pre_activity_metadata_tags(&payment_id) } #[uniffi::export] pub fn delete_pre_activity_metadata(payment_id: String) -> Result<(), ActivityError> { let mut guard = get_activity_db()?; - let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError { - error_details: "Database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(ActivityError::ConnectionError { + error_details: "Database not initialized. 
Call init_db first.".to_string(),
+        })?;
     db.delete_pre_activity_metadata(&payment_id)
 }
 
 #[uniffi::export]
-pub fn upsert_pre_activity_metadata(pre_activity_metadata: Vec) -> Result<(), ActivityError> {
+pub fn upsert_pre_activity_metadata(
+    pre_activity_metadata: Vec,
+) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.upsert_pre_activity_metadata(&pre_activity_metadata)
 }
 
 #[uniffi::export]
-pub fn get_pre_activity_metadata(search_key: String, search_by_address: bool) -> Result, ActivityError> {
+pub fn get_pre_activity_metadata(
+    search_key: String,
+    search_by_address: bool,
+) -> Result, ActivityError> {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.get_pre_activity_metadata(&search_key, search_by_address)
 }
 
 #[uniffi::export]
 pub fn get_all_pre_activity_metadata() -> Result, ActivityError> {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.get_all_pre_activity_metadata()
 }
 
 #[uniffi::export]
 pub fn upsert_closed_channel(channel: ClosedChannelDetails) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
-    db.upsert_closed_channel(&channel)
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
+    db.upsert_closed_channel(&channel)
 }
 
 #[uniffi::export]
 pub fn upsert_closed_channels(channels: Vec) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.upsert_closed_channels(&channels)
 }
 
 #[uniffi::export]
 pub fn upsert_onchain_activities(activities: Vec) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.upsert_onchain_activities(&activities)
 }
 
 #[uniffi::export]
-pub fn upsert_lightning_activities(activities: Vec) -> Result<(), ActivityError> {
+pub fn upsert_lightning_activities(
+    activities: Vec,
+) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.upsert_lightning_activities(&activities)
 }
 
 #[uniffi::export]
 pub fn upsert_activities(activities: Vec) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     let mut onchain_list: Vec = Vec::new();
     let mut lightning_list: Vec = Vec::new();
@@ -658,38 +780,54 @@ pub fn upsert_activities(activities: Vec) -> Result<(), ActivityError>
 }
 
 #[uniffi::export]
-pub fn get_closed_channel_by_id(channel_id: String) -> Result, ActivityError> {
+pub fn get_closed_channel_by_id(
+    channel_id: String,
+) -> Result, ActivityError> {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.get_closed_channel_by_id(&channel_id)
 }
 
 #[uniffi::export]
-pub fn get_all_closed_channels(sort_direction: Option) -> Result, ActivityError> {
+pub fn get_all_closed_channels(
+    sort_direction: Option,
+) -> Result, ActivityError> {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.get_all_closed_channels(sort_direction)
 }
 
 #[uniffi::export]
 pub fn remove_closed_channel_by_id(channel_id: String) -> Result {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.remove_closed_channel_by_id(&channel_id)
 }
 
 #[uniffi::export]
 pub fn wipe_all_closed_channels() -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.wipe_all_closed_channels()
 }
@@ -699,16 +837,23 @@ pub async fn update_blocktank_url(new_url: String) -> Result<(), BlocktankError>
     // Use spawn_blocking instead of block_on to avoid deadlocks
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
         })?;
         let mut guard = cell.lock().await;
-        let db = guard.blocktank_db.as_mut().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_mut()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         db.update_blocktank_url(&new_url).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
@@ -716,12 +861,15 @@ pub async fn get_info(refresh: Option) -> Result, Blocktan
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
         })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         if refresh.unwrap_or(false) {
             Ok(Some(db.fetch_and_store_info().await?.into()))
@@ -729,9 +877,13 @@ pub async fn get_info(refresh: Option) -> Result, Blocktan
             let info = db.get_info().await?;
             Ok(info.map(|info| info.into()))
         }
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
@@ -743,21 +895,30 @@ pub async fn create_order(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
         })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         // Convert the options to the external type using .into()
         let external_options = options.map(|opt| opt.into());
 
         // Convert the result to our local IBtOrder type
-        db.create_and_store_order(lsp_balance_sat, channel_expiry_weeks, external_options).await.map(|order| order.into())
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+        db.create_and_store_order(lsp_balance_sat, channel_expiry_weeks, external_options)
+            .await
+            .map(|order| order.into())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
@@ -768,17 +929,26 @@ pub async fn open_channel(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
-
-        db.open_channel(order_id, connection_string).await.map(|order| order.into())
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
+
+        db.open_channel(order_id, connection_string)
+            .await
+            .map(|order| order.into())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
@@ -790,28 +960,35 @@ pub async fn get_orders(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
         })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         // If refresh is true and we have order_ids, refresh those specific orders
         if refresh && order_ids.is_some() {
             let ids = order_ids.unwrap();
-            db.refresh_orders(&ids).await.map(|orders| {
-                orders.into_iter().map(|order| order.into()).collect()
-            })
+            db.refresh_orders(&ids)
+                .await
+                .map(|orders| orders.into_iter().map(|order| order.into()).collect())
         } else {
             // Otherwise get orders from the database
-            db.get_orders(order_ids.as_deref(), filter.map(|f| f.into())).await.map(|orders| {
-                orders.into_iter().map(|order| order.into()).collect()
-            })
+            db.get_orders(order_ids.as_deref(), filter.map(|f| f.into()))
                .await
                .map(|orders| orders.into_iter().map(|order| order.into()).collect())
         }
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
 
 /// Refresh all active orders in the database with latest data from the LSP
@@ -820,18 +997,25 @@ pub async fn refresh_active_orders() -> Result, BlocktankError> {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
-        db.refresh_active_orders().await.map(|orders| {
-            orders.into_iter().map(|order| order.into()).collect()
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
+        db.refresh_active_orders()
+            .await
+            .map(|orders| orders.into_iter().map(|order| order.into()).collect())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
 }
@@ -841,17 +1025,26 @@ pub async fn get_min_zero_conf_tx_fee(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
-
-        db.get_min_zero_conf_tx_fee(order_id).await.map(|fee| fee.into())
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
+
+        db.get_min_zero_conf_tx_fee(order_id)
+            .await
+            .map(|fee| fee.into())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
@@ -863,19 +1056,28 @@ pub async fn estimate_order_fee(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         let external_options = options.map(|opt| opt.into());
 
-        db.estimate_order_fee(lsp_balance_sat, channel_expiry_weeks, external_options).await.map(|response| response.into())
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+        db.estimate_order_fee(lsp_balance_sat, channel_expiry_weeks, external_options)
+            .await
+            .map(|response| response.into())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
+        })
+    })
 }
@@ -887,19 +1089,28 @@ pub async fn estimate_order_fee_full(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         let external_options = options.map(|opt| opt.into());
 
-        db.estimate_order_fee_full(lsp_balance_sat, channel_expiry_weeks, external_options).await.map(|response| response.into())
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+        db.estimate_order_fee_full(lsp_balance_sat, channel_expiry_weeks, external_options)
+            .await
+            .map(|response| response.into())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
@@ -914,12 +1125,15 @@ pub async fn create_cjit_entry(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         let external_options = options.map(|opt| opt.into());
 
@@ -929,11 +1143,17 @@ pub async fn create_cjit_entry(
             &invoice_description,
             &node_id,
             channel_expiry_weeks,
-            external_options
-        ).await.map(|entry| entry.into())
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+            external_options,
+        )
+        .await
+        .map(|entry| entry.into())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
@@ -945,12 +1165,15 @@ pub async fn get_cjit_entries(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         // If refresh is true and we have entry_ids, refresh those specific entries
         if refresh && entry_ids.is_some() {
@@ -966,13 +1189,17 @@ pub async fn get_cjit_entries(
             Ok(results.into_iter().map(|entry| entry.into()).collect())
         } else {
             // Otherwise get entries from the database
-            db.get_cjit_entries(entry_ids.as_deref(), filter.map(|f| f.into())).await.map(|entries| {
-                entries.into_iter().map(|entry| entry.into()).collect()
-            })
+            db.get_cjit_entries(entry_ids.as_deref(), filter.map(|f| f.into()))
                .await
                .map(|entries| entries.into_iter().map(|entry| entry.into()).collect())
         }
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 /// Refresh all active CJIT entries in the database with latest data from the LSP
@@ -981,18 +1208,25 @@ pub async fn refresh_active_cjit_entries() -> Result, BlocktankE
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
-        db.refresh_active_cjit_entries().await.map(|entries| {
-            entries.into_iter().map(|entry| entry.into()).collect()
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
+        db.refresh_active_cjit_entries()
+            .await
+            .map(|entries| entries.into_iter().map(|entry| entry.into()).collect())
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
 }
@@ -1004,17 +1238,20 @@ pub async fn register_device(
     iso_timestamp: String,
     signature: String,
     is_production: Option,
-    custom_url: Option
+    custom_url: Option,
 ) -> Result {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         db.register_device(
             &device_token,
@@ -1024,11 +1261,16 @@ pub async fn register_device(
             is_production,
-            custom_url.as_deref()
-        ).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+            custom_url.as_deref(),
+        )
+        .await
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1036,27 +1278,35 @@ pub async fn test_notification(
     device_token: String,
     secret_message: String,
     notification_type: Option,
-    custom_url: Option
+    custom_url: Option,
 ) -> Result {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         db.test_notification(
             &device_token,
             &secret_message,
             notification_type.as_deref(),
-            custom_url.as_deref()
-        ).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+            custom_url.as_deref(),
+        )
+        .await
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1064,17 +1314,24 @@ pub async fn gift_pay(invoice: String) -> Result {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
+            })?;
 
         db.gift_pay(&invoice).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1082,17 +1339,24 @@ pub async fn gift_order(client_node_id: String, code: String) -> Result Result {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
 
         db.get_gift(&gift_id).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1118,17 +1389,26 @@ pub async fn get_payment(payment_id: String) -> Result) -> Result<(), BlocktankError> {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
 
         db.regtest_mine(count).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1157,17 +1444,24 @@ pub async fn regtest_deposit(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
 
         db.regtest_deposit(&address, amount_sat).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1178,17 +1472,24 @@ pub async fn regtest_pay(
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
 
         db.regtest_pay(&invoice, amount_sat).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1196,17 +1497,26 @@ pub async fn regtest_get_payment(payment_id: String) -> Result Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
+        })?;
     db.wipe_all()
 }
 
 #[uniffi::export]
 pub fn is_address_used(address: String) -> Result {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.is_address_used(&address)
 }
 
 #[uniffi::export]
 pub fn mark_activity_as_seen(activity_id: String, seen_at: u64) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.mark_activity_as_seen(&activity_id, seen_at)
 }
 
 #[uniffi::export]
-pub fn upsert_transaction_details(details_list: Vec) -> Result<(), ActivityError> {
+pub fn upsert_transaction_details(
+    details_list: Vec,
+) -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.upsert_transaction_details(&details_list)
 }
 
 #[uniffi::export]
 pub fn get_transaction_details(tx_id: String) -> Result, ActivityError> {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.get_transaction_details(&tx_id)
 }
 
 #[uniffi::export]
 pub fn get_all_transaction_details() -> Result, ActivityError> {
     let guard = get_activity_db()?;
-    let db = guard.activity_db.as_ref().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_ref()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.get_all_transaction_details()
 }
 
 #[uniffi::export]
 pub fn delete_transaction_details(tx_id: String) -> Result {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.delete_transaction_details(&tx_id)
 }
 
 #[uniffi::export]
 pub fn wipe_all_transaction_details() -> Result<(), ActivityError> {
     let mut guard = get_activity_db()?;
-    let db = guard.activity_db.as_mut().ok_or(ActivityError::ConnectionError {
-        error_details: "Database not initialized. Call init_db first.".to_string()
-    })?;
+    let db = guard
+        .activity_db
+        .as_mut()
+        .ok_or(ActivityError::ConnectionError {
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
     db.wipe_all_transaction_details()
 }
@@ -1308,14 +1652,19 @@ pub async fn blocktank_remove_all_orders() -> Result<(), BlocktankError> {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
         db.remove_all_orders().await
-    }).await.unwrap()
+    })
+    .await
+    .unwrap()
 }
 
 #[uniffi::export]
@@ -1323,14 +1672,19 @@ pub async fn blocktank_remove_all_cjit_entries() -> Result<(), BlocktankError> {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
         db.remove_all_cjit_entries().await
-    }).await.unwrap()
+    })
+    .await
+    .unwrap()
 }
 
 #[uniffi::export]
@@ -1338,14 +1692,19 @@ pub async fn blocktank_wipe_all() -> Result<(), BlocktankError> {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
         db.wipe_all().await
-    }).await.unwrap()
+    })
+    .await
+    .unwrap()
 }
 
 #[uniffi::export]
@@ -1353,17 +1712,24 @@ pub async fn upsert_info(info: IBtInfo) -> Result<(), BlocktankError> {
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
 
         let external_info: rust_blocktank_client::IBtInfo = info.into();
         db.upsert_info(&external_info).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1371,18 +1737,26 @@ pub async fn upsert_orders(orders: Vec) -> Result<(), BlocktankError>
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
-
-        let external_orders: Vec = orders.into_iter().map(|order| order.into()).collect();
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
+
+        let external_orders: Vec =
+            orders.into_iter().map(|order| order.into()).collect();
         db.upsert_orders(&external_orders).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1390,18 +1764,26 @@ pub async fn upsert_cjit_entries(entries: Vec) -> Result<(), Blockta
     let rt = ensure_runtime();
     rt.spawn(async move {
         let cell = ASYNC_DB.get().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
+            error_details: "Database not initialized. Call init_db first.".to_string(),
        })?;
         let guard = cell.lock().await;
-        let db = guard.blocktank_db.as_ref().ok_or(BlocktankError::ConnectionError {
-            error_details: "Database not initialized. Call init_db first.".to_string()
-        })?;
-
-        let external_entries: Vec = entries.into_iter().map(|e| e.into()).collect();
+        let db = guard
+            .blocktank_db
+            .as_ref()
+            .ok_or(BlocktankError::ConnectionError {
+                error_details: "Database not initialized. Call init_db first.".to_string(),
            })?;
+
+        let external_entries: Vec =
+            entries.into_iter().map(|e| e.into()).collect();
         db.upsert_cjit_entries(&external_entries).await
-    }).await.unwrap_or_else(|e| Err(BlocktankError::ConnectionError {
-        error_details: format!("Runtime error: {}", e)
-    }))
+    })
+    .await
+    .unwrap_or_else(|e| {
+        Err(BlocktankError::ConnectionError {
+            error_details: format!("Runtime error: {}", e),
        })
+    })
 }
 
 #[uniffi::export]
@@ -1411,31 +1793,42 @@ pub async fn wipe_all_databases() -> Result {
 
     // Wipe activity database - require it to be initialized
     {
         let cell = DB.get().ok_or(DbError::InitializationError {
-            error_details: "Database not initialized. 
Call init_db first.".to_string(), })?; let mut guard = cell.lock().unwrap(); - let db = guard.activity_db.as_mut().ok_or(DbError::InitializationError { - error_details: "Activity database not initialized. Call init_db first.".to_string() - })?; + let db = guard + .activity_db + .as_mut() + .ok_or(DbError::InitializationError { + error_details: "Activity database not initialized. Call init_db first.".to_string(), + })?; db.wipe_all().map_err(|e| DbError::InitializationError { - error_details: format!("Failed to wipe activity database: {}", e) + error_details: format!("Failed to wipe activity database: {}", e), })?; } // Wipe blocktank database - require it to be initialized rt.spawn(async move { let cell = ASYNC_DB.get().ok_or(DbError::InitializationError { - error_details: "Database not initialized. Call init_db first.".to_string() + error_details: "Database not initialized. Call init_db first.".to_string(), })?; let guard = cell.lock().await; - let db = guard.blocktank_db.as_ref().ok_or(DbError::InitializationError { - error_details: "Blocktank database not initialized. Call init_db first.".to_string() - })?; - db.wipe_all().await.map_err(|e| DbError::InitializationError { - error_details: format!("Failed to wipe blocktank database: {}", e) - })?; + let db = guard + .blocktank_db + .as_ref() + .ok_or(DbError::InitializationError { + error_details: "Blocktank database not initialized. Call init_db first." 
+ .to_string(), + })?; + db.wipe_all() + .await + .map_err(|e| DbError::InitializationError { + error_details: format!("Failed to wipe blocktank database: {}", e), + })?; Ok::<(), DbError>(()) - }).await.unwrap()?; + }) + .await + .unwrap()?; Ok("All databases wiped successfully".to_string()) } @@ -1464,71 +1857,85 @@ pub fn resolve_pubky_url(uri: String) -> Result { #[uniffi::export] pub async fn fetch_pubky_file(uri: String) -> Result, PubkyError> { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::fetch_pubky_file(uri).await - }).await.unwrap_or_else(|e| Err(PubkyError::ResolutionFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::fetch_pubky_file(uri).await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::ResolutionFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn start_pubky_auth(caps: String) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::start_pubky_auth(caps).await - }).await.unwrap_or_else(|e| Err(PubkyError::AuthFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::start_pubky_auth(caps).await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::AuthFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn cancel_pubky_auth() -> Result<(), PubkyError> { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::cancel_pubky_auth().await - }).await.unwrap_or_else(|e| Err(PubkyError::AuthFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::cancel_pubky_auth().await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::AuthFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn complete_pubky_auth() -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - 
crate::modules::pubky::complete_pubky_auth().await - }).await.unwrap_or_else(|e| Err(PubkyError::AuthFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::complete_pubky_auth().await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::AuthFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn fetch_pubky_profile(public_key: String) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::fetch_pubky_profile(public_key).await - }).await.unwrap_or_else(|e| Err(PubkyError::FetchFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::fetch_pubky_profile(public_key).await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::FetchFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn fetch_pubky_contacts(public_key: String) -> Result, PubkyError> { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::fetch_pubky_contacts(public_key).await - }).await.unwrap_or_else(|e| Err(PubkyError::FetchFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::fetch_pubky_contacts(public_key).await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::FetchFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn fetch_pubky_file_string(uri: String) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::fetch_pubky_file_string(uri).await - }).await.unwrap_or_else(|e| Err(PubkyError::FetchFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::fetch_pubky_file_string(uri).await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::FetchFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] @@ -1549,20 +1956,27 @@ pub async fn pubky_sign_up( ) -> Result { let rt = 
ensure_runtime(); rt.spawn(async move { - crate::modules::pubky::pubky_sign_up(secret_key_hex, homeserver_public_key_z32, signup_code).await - }).await.unwrap_or_else(|e| Err(PubkyError::AuthFailed { - reason: format!("Runtime error: {}", e) - })) + crate::modules::pubky::pubky_sign_up(secret_key_hex, homeserver_public_key_z32, signup_code) + .await + }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::AuthFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] pub async fn pubky_sign_in(secret_key_hex: String) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::pubky_sign_in(secret_key_hex).await - }).await.unwrap_or_else(|e| Err(PubkyError::AuthFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::pubky_sign_in(secret_key_hex).await }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::AuthFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] @@ -1574,22 +1988,25 @@ pub async fn pubky_session_put( let rt = ensure_runtime(); rt.spawn(async move { crate::modules::pubky::pubky_session_put(session_secret, path, content).await - }).await.unwrap_or_else(|e| Err(PubkyError::WriteFailed { - reason: format!("Runtime error: {}", e) - })) + }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::WriteFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] -pub async fn pubky_session_delete( - session_secret: String, - path: String, -) -> Result<(), PubkyError> { +pub async fn pubky_session_delete(session_secret: String, path: String) -> Result<(), PubkyError> { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::pubky_session_delete(session_secret, path).await - }).await.unwrap_or_else(|e| Err(PubkyError::WriteFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn(async move { crate::modules::pubky::pubky_session_delete(session_secret, path).await }) + .await + 
.unwrap_or_else(|e| { + Err(PubkyError::WriteFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] @@ -1601,9 +2018,13 @@ pub async fn pubky_put_with_secret_key( let rt = ensure_runtime(); rt.spawn(async move { crate::modules::pubky::pubky_put_with_secret_key(secret_key_hex, path, content).await - }).await.unwrap_or_else(|e| Err(PubkyError::WriteFailed { - reason: format!("Runtime error: {}", e) - })) + }) + .await + .unwrap_or_else(|e| { + Err(PubkyError::WriteFailed { + reason: format!("Runtime error: {}", e), + }) + }) } #[uniffi::export] @@ -1612,11 +2033,15 @@ pub async fn pubky_session_list( dir_path: String, ) -> Result, PubkyError> { let rt = ensure_runtime(); - rt.spawn(async move { - crate::modules::pubky::pubky_session_list(session_secret, dir_path).await - }).await.unwrap_or_else(|e| Err(PubkyError::FetchFailed { - reason: format!("Runtime error: {}", e) - })) + rt.spawn( + async move { crate::modules::pubky::pubky_session_list(session_secret, dir_path).await }, + ) + .await + .unwrap_or_else(|e| { + Err(PubkyError::FetchFailed { + reason: format!("Runtime error: {}", e), + }) + }) } // ============================================================================ @@ -1672,9 +2097,13 @@ pub extern "system" fn Java_to_bitkit_services_BluetoothInit_nativeInit( #[uniffi::export] pub async fn trezor_initialize(credential_path: Option) -> Result<(), TrezorError> { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().initialize(credential_path).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().initialize(credential_path).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Scan for available Trezor devices (USB + Bluetooth). 
@@ -1684,18 +2113,26 @@ pub async fn trezor_initialize(credential_path: Option) -> Result<(), Tr #[uniffi::export] pub async fn trezor_scan() -> Result, TrezorError> { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().scan().await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().scan().await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// List previously discovered devices without triggering a new scan. #[uniffi::export] pub async fn trezor_list_devices() -> Result, TrezorError> { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().list_devices().await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().list_devices().await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Connect to a Trezor device by its ID. @@ -1705,63 +2142,83 @@ pub async fn trezor_list_devices() -> Result, TrezorError> #[uniffi::export] pub async fn trezor_connect(device_id: String) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().connect(&device_id).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().connect(&device_id).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Get a Bitcoin address from the connected Trezor device. 
#[uniffi::export] -pub async fn trezor_get_address(params: TrezorGetAddressParams) -> Result { +pub async fn trezor_get_address( + params: TrezorGetAddressParams, +) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().get_address(params).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().get_address(params).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Get a public key (xpub) from the connected Trezor device. #[uniffi::export] -pub async fn trezor_get_public_key(params: TrezorGetPublicKeyParams) -> Result { +pub async fn trezor_get_public_key( + params: TrezorGetPublicKeyParams, +) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().get_public_key(params).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().get_public_key(params).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Disconnect from the currently connected Trezor device. #[uniffi::export] pub async fn trezor_disconnect() -> Result<(), TrezorError> { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().disconnect().await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().disconnect().await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Check if the Trezor manager is initialized. 
#[uniffi::export] pub async fn trezor_is_initialized() -> bool { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().is_initialized().await - }).await.unwrap_or(false) + rt.spawn(async move { get_trezor_manager().is_initialized().await }) + .await + .unwrap_or(false) } /// Check if a Trezor device is currently connected. #[uniffi::export] pub async fn trezor_is_connected() -> bool { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().is_connected().await - }).await.unwrap_or(false) + rt.spawn(async move { get_trezor_manager().is_connected().await }) + .await + .unwrap_or(false) } /// Get information about the currently connected Trezor device. #[uniffi::export] pub async fn trezor_get_connected_device() -> Option { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().get_connected_device().await - }).await.unwrap_or(None) + rt.spawn(async move { get_trezor_manager().get_connected_device().await }) + .await + .unwrap_or(None) } /// Get the cached features of the currently connected Trezor device. @@ -1771,36 +2228,50 @@ pub async fn trezor_get_connected_device() -> Option { #[uniffi::export] pub async fn trezor_get_features() -> Option { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().get_features().await - }).await.unwrap_or(None) + rt.spawn(async move { get_trezor_manager().get_features().await }) + .await + .unwrap_or(None) } /// Sign a message with the connected Trezor device. 
#[uniffi::export] -pub async fn trezor_sign_message(params: TrezorSignMessageParams) -> Result { +pub async fn trezor_sign_message( + params: TrezorSignMessageParams, +) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().sign_message(params).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().sign_message(params).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Verify a message signature with the connected Trezor device. #[uniffi::export] pub async fn trezor_verify_message(params: TrezorVerifyMessageParams) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().verify_message(params).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().verify_message(params).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Sign a Bitcoin transaction with the connected Trezor device. #[uniffi::export] pub async fn trezor_sign_tx(params: TrezorSignTxParams) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().sign_tx(params).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().sign_tx(params).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Get the device's master root fingerprint as an 8-character hex string. 
@@ -1810,9 +2281,13 @@ pub async fn trezor_sign_tx(params: TrezorSignTxParams) -> Result Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().get_device_fingerprint().await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().get_device_fingerprint().await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Sign a Bitcoin transaction from a PSBT (base64-encoded). @@ -1824,11 +2299,22 @@ pub async fn trezor_get_device_fingerprint() -> Result { /// * `psbt_base64` - Base64-encoded PSBT data /// * `network` - Bitcoin network type. Defaults to Bitcoin (mainnet) if None. #[uniffi::export] -pub async fn trezor_sign_tx_from_psbt(psbt_base64: String, network: Option) -> Result { +pub async fn trezor_sign_tx_from_psbt( + psbt_base64: String, + network: Option, +) -> Result { let rt = ensure_runtime(); rt.spawn(async move { - get_trezor_manager().sign_tx_from_psbt(psbt_base64, network).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + get_trezor_manager() + .sign_tx_from_psbt(psbt_base64, network) + .await + }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Clear stored Bluetooth pairing credentials for a specific Trezor device. 
@@ -1838,9 +2324,13 @@ pub async fn trezor_sign_tx_from_psbt(psbt_base64: String, network: Option Result<(), TrezorError> { let rt = ensure_runtime(); - rt.spawn(async move { - get_trezor_manager().clear_credentials(&device_id).await - }).await.unwrap_or_else(|e| Err(TrezorError::IoError { error_details: format!("Runtime error: {}", e) })) + rt.spawn(async move { get_trezor_manager().clear_credentials(&device_id).await }) + .await + .unwrap_or_else(|e| { + Err(TrezorError::IoError { + error_details: format!("Runtime error: {}", e), + }) + }) } // ============================================================================ @@ -1858,10 +2348,21 @@ pub async fn onchain_get_account_info( ) -> Result { let rt = ensure_runtime(); rt.spawn(async move { - get_account_info(&extended_key, &electrum_url, network, gap_limit, script_type).await - }).await.unwrap_or_else(|e| Err(AccountInfoError::SyncError { - error_details: format!("Runtime error: {}", e), - })) + get_account_info( + &extended_key, + &electrum_url, + network, + gap_limit, + script_type, + ) + .await + }) + .await + .unwrap_or_else(|e| { + Err(AccountInfoError::SyncError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Query transaction history and balance for an extended public key via Electrum. @@ -1875,9 +2376,13 @@ pub async fn onchain_get_transaction_history( let rt = ensure_runtime(); rt.spawn(async move { get_transaction_history(&extended_key, &electrum_url, network, script_type).await - }).await.unwrap_or_else(|e| Err(AccountInfoError::SyncError { - error_details: format!("Runtime error: {}", e), - })) + }) + .await + .unwrap_or_else(|e| { + Err(AccountInfoError::SyncError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Get full details for a single transaction by txid. 
@@ -1892,9 +2397,13 @@ pub async fn onchain_get_transaction_detail( let rt = ensure_runtime(); rt.spawn(async move { get_transaction_detail(&extended_key, &electrum_url, &txid, network, script_type).await - }).await.unwrap_or_else(|e| Err(AccountInfoError::SyncError { - error_details: format!("Runtime error: {}", e), - })) + }) + .await + .unwrap_or_else(|e| { + Err(AccountInfoError::SyncError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Query balance and UTXOs for a single Bitcoin address via Electrum. @@ -1905,11 +2414,13 @@ pub async fn onchain_get_address_info( network: Option, ) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - get_address_info(&address, &electrum_url, network).await - }).await.unwrap_or_else(|e| Err(AccountInfoError::SyncError { - error_details: format!("Runtime error: {}", e), - })) + rt.spawn(async move { get_address_info(&address, &electrum_url, network).await }) + .await + .unwrap_or_else(|e| { + Err(AccountInfoError::SyncError { + error_details: format!("Runtime error: {}", e), + }) + }) } /// Convert an account type to its corresponding Trezor script type. @@ -1932,15 +2443,16 @@ pub fn trezor_account_type_to_script_type(account_type: AccountType) -> TrezorSc pub async fn onchain_compose_transaction(params: ComposeParams) -> Vec { let rt = ensure_runtime(); let num_rates = params.fee_rates.len(); - rt.spawn(async move { - compose_transaction(params).await - }) - .await - .unwrap_or_else(|e| { - vec![ComposeResult::Error { - error: format!("Runtime error: {}", e), - }; num_rates] - }) + rt.spawn(async move { compose_transaction(params).await }) + .await + .unwrap_or_else(|e| { + vec![ + ComposeResult::Error { + error: format!("Runtime error: {}", e), + }; + num_rates + ] + }) } /// Broadcast a signed raw transaction via Electrum. 
@@ -1953,9 +2465,11 @@ pub async fn onchain_broadcast_raw_tx( electrum_url: String, ) -> Result { let rt = ensure_runtime(); - rt.spawn(async move { - broadcast_raw_tx(serialized_tx, &electrum_url).await - }).await.unwrap_or_else(|e| Err(BroadcastError::TaskError { - error_details: format!("Runtime error: {}", e), - })) + rt.spawn(async move { broadcast_raw_tx(serialized_tx, &electrum_url).await }) + .await + .unwrap_or_else(|e| { + Err(BroadcastError::TaskError { + error_details: format!("Runtime error: {}", e), + }) + }) } diff --git a/src/modules/activity/errors.rs b/src/modules/activity/errors.rs index abc1f6a..4e40758 100644 --- a/src/modules/activity/errors.rs +++ b/src/modules/activity/errors.rs @@ -3,37 +3,23 @@ use thiserror::Error; #[derive(uniffi::Error, Debug, Error)] pub enum ActivityError { #[error("Invalid Activity: {error_details}")] - InvalidActivity { - error_details: String, - }, + InvalidActivity { error_details: String }, #[error("Database initialization failed: {error_details}")] - InitializationError { - error_details: String, - }, + InitializationError { error_details: String }, #[error("Failed to insert activity: {error_details}")] - InsertError { - error_details: String, - }, + InsertError { error_details: String }, #[error("Failed to retrieve activities: {error_details}")] - RetrievalError { - error_details: String, - }, + RetrievalError { error_details: String }, #[error("Invalid data format: {error_details}")] - DataError { - error_details: String, - }, + DataError { error_details: String }, #[error("Database connection error: {error_details}")] - ConnectionError { - error_details: String, - }, + ConnectionError { error_details: String }, #[error("Serialization error: {error_details}")] - SerializationError { - error_details: String, - } -} \ No newline at end of file + SerializationError { error_details: String }, +} diff --git a/src/modules/activity/implementation.rs b/src/modules/activity/implementation.rs index 713d44e..31a9d32 
100644 --- a/src/modules/activity/implementation.rs +++ b/src/modules/activity/implementation.rs @@ -1,6 +1,10 @@ +use crate::activity::{ + Activity, ActivityError, ActivityFilter, ActivityTags, ClosedChannelDetails, LightningActivity, + OnchainActivity, PaymentState, PaymentType, PreActivityMetadata, SortDirection, + TransactionDetails, TxInput, TxOutput, +}; use rusqlite::{Connection, OptionalExtension}; use serde_json; -use crate::activity::{Activity, ActivityError, ActivityFilter, LightningActivity, OnchainActivity, PaymentState, PaymentType, SortDirection, ClosedChannelDetails, ActivityTags, PreActivityMetadata, TransactionDetails, TxInput, TxOutput}; pub struct ActivityDB { pub conn: Connection, @@ -150,7 +154,6 @@ const TRIGGER_STATEMENTS: &[&str] = &[ SET updated_at = strftime('%s', 'now') WHERE id = NEW.id; END", - // Insert confirm timestamp validation trigger "CREATE TRIGGER IF NOT EXISTS onchain_confirm_timestamp_check_insert AFTER INSERT ON onchain_activity @@ -163,7 +166,6 @@ const TRIGGER_STATEMENTS: &[&str] = &[ THEN RAISE(ABORT, 'confirm_timestamp must be greater than or equal to timestamp') END; END", - // New update confirm timestamp validation trigger "CREATE TRIGGER IF NOT EXISTS onchain_confirm_timestamp_check_update AFTER UPDATE ON onchain_activity @@ -175,19 +177,17 @@ const TRIGGER_STATEMENTS: &[&str] = &[ ) THEN RAISE(ABORT, 'confirm_timestamp must be greater than or equal to timestamp') END; - END" + END", ]; /// Migrations to apply to the activities table. /// Each entry is (column_name, ALTER TABLE statement). The column is checked /// via `PRAGMA table_info` before running the statement to avoid relying on /// locale-dependent SQLite error messages. 
-const MIGRATIONS: &[(&str, &str)] = &[ - ( - "seen_at", - "ALTER TABLE activities ADD COLUMN seen_at INTEGER CHECK (seen_at IS NULL OR seen_at > 0)", - ), -]; +const MIGRATIONS: &[(&str, &str)] = &[( + "seen_at", + "ALTER TABLE activities ADD COLUMN seen_at INTEGER CHECK (seen_at IS NULL OR seen_at > 0)", +)]; impl ActivityDB { /// Creates a new ActivityDB instance with the specified database path. @@ -196,8 +196,10 @@ impl ActivityDB { // Create the directory if it doesn't exist if let Some(dir_path) = std::path::Path::new(db_path).parent() { if !dir_path.exists() { - std::fs::create_dir_all(dir_path).map_err(|e| ActivityError::InitializationError { - error_details: format!("Failed to create directory: {}", e), + std::fs::create_dir_all(dir_path).map_err(|e| { + ActivityError::InitializationError { + error_details: format!("Failed to create directory: {}", e), + } })?; } } @@ -213,7 +215,7 @@ impl ActivityDB { let conn = match Connection::open(&final_path) { Ok(conn) => conn, Err(e) => { - return Err(ActivityError::InitializationError{ + return Err(ActivityError::InitializationError { error_details: format!("Error opening database: {}", e), }); } @@ -321,35 +323,45 @@ impl ActivityDB { Activity::Onchain(onchain) => { match self.update_onchain_activity_by_id(&onchain.id, onchain) { Ok(_) => Ok(()), - Err(ActivityError::DataError{ error_details }) if error_details == "No activity found with given ID" => { + Err(ActivityError::DataError { error_details }) + if error_details == "No activity found with given ID" => + { self.insert_onchain_activity(onchain) } Err(e) => Err(e), } - }, + } Activity::Lightning(lightning) => { match self.update_lightning_activity_by_id(&lightning.id, lightning) { Ok(_) => Ok(()), - Err(ActivityError::DataError { error_details }) if error_details == "No activity found with given ID" => { + Err(ActivityError::DataError { error_details }) + if error_details == "No activity found with given ID" => + { self.insert_lightning_activity(lightning) 
} Err(e) => Err(e), } - }, + } } } /// Inserts a new onchain activity into the database. - pub fn insert_onchain_activity(&mut self, activity: &OnchainActivity) -> Result<(), ActivityError> { + pub fn insert_onchain_activity( + &mut self, + activity: &OnchainActivity, + ) -> Result<(), ActivityError> { if activity.id.is_empty() { return Err(ActivityError::DataError { error_details: "Activity ID cannot be empty".to_string(), }); } - let tx = match self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - }) { + let tx = match self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + }) { Ok(tx) => tx, Err(e) => return Err(e), }; @@ -368,7 +380,8 @@ impl ActivityDB { Self::payment_type_to_string(&activity.tx_type), activity.timestamp, ), - ).map_err(|e| ActivityError::InsertError { + ) + .map_err(|e| ActivityError::InsertError { error_details: format!("Failed to insert into activities: {}", e), })?; @@ -401,7 +414,8 @@ impl ActivityDB { &activity.channel_id, &activity.transfer_tx_id, ), - ).map_err(|e| ActivityError::InsertError { + ) + .map_err(|e| ActivityError::InsertError { error_details: format!("Failed to insert into onchain_activity: {}", e), })?; @@ -410,19 +424,33 @@ impl ActivityDB { })?; if activity.tx_type == PaymentType::Received { - let _ = self.transfer_pre_activity_metadata_to_activity(&activity.address, &activity.id, true); + let _ = self.transfer_pre_activity_metadata_to_activity( + &activity.address, + &activity.id, + true, + ); } else if activity.tx_type == PaymentType::Sent { - let _ = self.transfer_pre_activity_metadata_to_activity(&activity.tx_id, &activity.id, false); + let _ = self.transfer_pre_activity_metadata_to_activity( + &activity.tx_id, + &activity.id, + false, + ); } Ok(()) } /// Inserts a new lightning activity into the database. 
- pub fn insert_lightning_activity(&mut self, activity: &LightningActivity) -> Result<(), ActivityError> { - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + pub fn insert_lightning_activity( + &mut self, + activity: &LightningActivity, + ) -> Result<(), ActivityError> { + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; let activities_sql = " INSERT INTO activities ( @@ -438,7 +466,8 @@ impl ActivityDB { Self::payment_type_to_string(&activity.tx_type), activity.timestamp, ), - ).map_err(|e| ActivityError::InsertError { + ) + .map_err(|e| ActivityError::InsertError { error_details: format!("Failed to insert into activities: {}", e), })?; @@ -460,7 +489,8 @@ impl ActivityDB { &activity.message, &activity.preimage, ), - ).map_err(|e| ActivityError::InsertError { + ) + .map_err(|e| ActivityError::InsertError { error_details: format!("Failed to insert into lightning_activity: {}", e), })?; @@ -473,14 +503,20 @@ impl ActivityDB { Ok(()) } - pub fn upsert_onchain_activities(&mut self, activities: &[OnchainActivity]) -> Result<(), ActivityError> { + pub fn upsert_onchain_activities( + &mut self, + activities: &[OnchainActivity], + ) -> Result<(), ActivityError> { if activities.is_empty() { return Ok(()); } - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; { let mut stmt_act = tx.prepare( @@ -488,17 +524,19 @@ impl ActivityDB { ).map_err(|e| ActivityError::DataError { error_details: format!("Failed to prepare activities statement: {}", e), })?; - let mut stmt_onchain = tx.prepare( - "INSERT OR REPLACE INTO onchain_activity ( + let 
mut stmt_onchain = tx + .prepare( + "INSERT OR REPLACE INTO onchain_activity ( id, tx_id, address, confirmed, value, fee, fee_rate, is_boosted, boost_tx_ids, is_transfer, does_exist, confirm_timestamp, channel_id, transfer_tx_id ) VALUES ( ?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14 - )" - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to prepare onchain statement: {}", e), - })?; + )", + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to prepare onchain statement: {}", e), + })?; for activity in activities { if activity.id.is_empty() { @@ -507,33 +545,37 @@ impl ActivityDB { }); } - stmt_act.execute(( - &activity.id, - Self::payment_type_to_string(&activity.tx_type), - activity.timestamp, - )).map_err(|e| ActivityError::InsertError { - error_details: format!("Failed to upsert activities: {}", e), - })?; + stmt_act + .execute(( + &activity.id, + Self::payment_type_to_string(&activity.tx_type), + activity.timestamp, + )) + .map_err(|e| ActivityError::InsertError { + error_details: format!("Failed to upsert activities: {}", e), + })?; let boost_tx_ids_str = activity.boost_tx_ids.join(","); - stmt_onchain.execute(( - &activity.id, - &activity.tx_id, - &activity.address, - activity.confirmed, - activity.value, - activity.fee, - activity.fee_rate, - activity.is_boosted, - &boost_tx_ids_str, - activity.is_transfer, - activity.does_exist, - activity.confirm_timestamp, - &activity.channel_id, - &activity.transfer_tx_id, - )).map_err(|e| ActivityError::InsertError { - error_details: format!("Failed to upsert onchain_activity: {}", e), - })?; + stmt_onchain + .execute(( + &activity.id, + &activity.tx_id, + &activity.address, + activity.confirmed, + activity.value, + activity.fee, + activity.fee_rate, + activity.is_boosted, + &boost_tx_ids_str, + activity.is_transfer, + activity.does_exist, + activity.confirm_timestamp, + &activity.channel_id, + &activity.transfer_tx_id, + )) + .map_err(|e| 
ActivityError::InsertError { + error_details: format!("Failed to upsert onchain_activity: {}", e), + })?; } } @@ -544,14 +586,20 @@ impl ActivityDB { Ok(()) } - pub fn upsert_lightning_activities(&mut self, activities: &[LightningActivity]) -> Result<(), ActivityError> { + pub fn upsert_lightning_activities( + &mut self, + activities: &[LightningActivity], + ) -> Result<(), ActivityError> { if activities.is_empty() { return Ok(()); } - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; { let mut stmt_act = tx.prepare( @@ -559,15 +607,17 @@ impl ActivityDB { ).map_err(|e| ActivityError::DataError { error_details: format!("Failed to prepare activities statement: {}", e), })?; - let mut stmt_ln = tx.prepare( - "INSERT OR REPLACE INTO lightning_activity ( + let mut stmt_ln = tx + .prepare( + "INSERT OR REPLACE INTO lightning_activity ( id, invoice, value, status, fee, message, preimage ) VALUES ( ?1, ?2, ?3, ?4, ?5, ?6, ?7 - )" - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to prepare lightning statement: {}", e), - })?; + )", + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to prepare lightning statement: {}", e), + })?; for activity in activities { if activity.id.is_empty() { @@ -576,25 +626,29 @@ impl ActivityDB { }); } - stmt_act.execute(( - &activity.id, - Self::payment_type_to_string(&activity.tx_type), - activity.timestamp, - )).map_err(|e| ActivityError::InsertError { - error_details: format!("Failed to upsert activities: {}", e), - })?; + stmt_act + .execute(( + &activity.id, + Self::payment_type_to_string(&activity.tx_type), + activity.timestamp, + )) + .map_err(|e| ActivityError::InsertError { + error_details: format!("Failed to upsert activities: {}", e), + 
})?; - stmt_ln.execute(( - &activity.id, - &activity.invoice, - activity.value, - Self::payment_state_to_string(&activity.status), - activity.fee, - &activity.message, - &activity.preimage, - )).map_err(|e| ActivityError::InsertError { - error_details: format!("Failed to upsert lightning_activity: {}", e), - })?; + stmt_ln + .execute(( + &activity.id, + &activity.invoice, + activity.value, + Self::payment_state_to_string(&activity.status), + activity.fee, + &activity.message, + &activity.preimage, + )) + .map_err(|e| ActivityError::InsertError { + error_details: format!("Failed to upsert lightning_activity: {}", e), + })?; } } @@ -626,7 +680,7 @@ impl ActivityDB { LEFT JOIN activity_tags t ON a.id = t.activity_id LEFT JOIN onchain_activity o ON a.id = o.id LEFT JOIN lightning_activity l ON a.id = l.id - WHERE 1=1" + WHERE 1=1", ); // Activity type filter @@ -638,19 +692,23 @@ impl ActivityDB { // Transaction type filter if let Some(tx_type) = tx_type { - query.push_str(&format!(" AND a.tx_type = '{}'", - Self::payment_type_to_string(&tx_type))); + query.push_str(&format!( + " AND a.tx_type = '{}'", + Self::payment_type_to_string(&tx_type) + )); } // Tags filter (ANY of the provided tags) if let Some(tag_list) = tags { if !tag_list.is_empty() { query.push_str(" AND t.tag IN ("); - query.push_str(&tag_list - .iter() - .map(|t| format!("'{}'", t.replace('\'', "''"))) - .collect::<Vec<_>>() - .join(",")); + query.push_str( + &tag_list + .iter() + .map(|t| format!("'{}'", t.replace('\'', "''"))) + .collect::<Vec<_>>() + .join(","), + ); query.push(')'); } } @@ -681,7 +739,8 @@ impl ActivityDB { query.push_str(")"); // Main query - query.push_str(" + query.push_str( + " SELECT a.id, a.activity_type, @@ -718,7 +777,8 @@ impl ActivityDB { INNER JOIN filtered_activities fa ON a.id = fa.id LEFT JOIN onchain_activity o ON a.id = o.id AND a.activity_type = 'onchain' LEFT JOIN lightning_activity l ON a.id = l.id AND a.activity_type = 'lightning' - ORDER BY a.timestamp "); + ORDER BY
a.timestamp ", + ); // Add sort direction and limit query.push_str(Self::sort_direction_to_sql(direction)); @@ -726,83 +786,88 @@ impl ActivityDB { query.push_str(&format!(" LIMIT {}", n)); } - let mut stmt = self.conn.prepare(&query).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; - - let activity_iter = stmt.query_map([], |row| { - let activity_type: String = row.get(1)?; - match activity_type.as_str() { - "onchain" => { - let timestamp: i64 = row.get(3)?; - let created_at: Option<i64> = row.get(4)?; - let updated_at: Option<i64> = row.get(5)?; - let seen_at: Option<i64> = row.get(6)?; - let value: i64 = row.get(8)?; - let fee: i64 = row.get(9)?; - let fee_rate: i64 = row.get(10)?; - let confirm_timestamp: Option<i64> = row.get(17)?; - let boost_tx_ids_str: String = row.get(14)?; - let boost_tx_ids: Vec<String> = if boost_tx_ids_str.is_empty() { - Vec::new() - } else { - boost_tx_ids_str.split(',').map(|s| s.to_string()).collect() - }; + let mut stmt = self + .conn + .prepare(&query) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; - Ok(Activity::Onchain(OnchainActivity { - id: row.get(0)?, - tx_type: Self::parse_payment_type(row, 2)?, - timestamp: timestamp as u64, - created_at: created_at.map(|t| t as u64), - updated_at: updated_at.map(|t| t as u64), - seen_at: seen_at.map(|t| t as u64), - tx_id: row.get(7)?, - value: value as u64, - fee: fee as u64, - fee_rate: fee_rate as u64, - address: row.get(11)?, - confirmed: row.get(12)?, - is_boosted: row.get(13)?, - boost_tx_ids, - is_transfer: row.get(15)?, - does_exist: row.get(16)?, - confirm_timestamp: confirm_timestamp.map(|t| t as u64), - channel_id: row.get(18)?, - transfer_tx_id: row.get(19)?, - })) - } - "lightning" => { - let timestamp: i64 = row.get(3)?; - let created_at: Option<i64> = row.get(4)?; - let updated_at: Option<i64> = row.get(5)?; - let seen_at: Option<i64> = row.get(6)?; - let value: i64 = row.get(21)?;
- let fee: Option<i64> = row.get(23)?; - - Ok(Activity::Lightning(LightningActivity { - id: row.get(0)?, - tx_type: Self::parse_payment_type(row, 2)?, - timestamp: timestamp as u64, - created_at: created_at.map(|t| t as u64), - updated_at: updated_at.map(|t| t as u64), - seen_at: seen_at.map(|t| t as u64), - invoice: row.get(20)?, - value: value as u64, - status: Self::parse_payment_state(row, 22)?, - fee: fee.map(|f| f as u64), - message: row.get(24)?, - preimage: row.get(25)?, - })) + let activity_iter = stmt + .query_map([], |row| { + let activity_type: String = row.get(1)?; + match activity_type.as_str() { + "onchain" => { + let timestamp: i64 = row.get(3)?; + let created_at: Option<i64> = row.get(4)?; + let updated_at: Option<i64> = row.get(5)?; + let seen_at: Option<i64> = row.get(6)?; + let value: i64 = row.get(8)?; + let fee: i64 = row.get(9)?; + let fee_rate: i64 = row.get(10)?; + let confirm_timestamp: Option<i64> = row.get(17)?; + let boost_tx_ids_str: String = row.get(14)?; + let boost_tx_ids: Vec<String> = if boost_tx_ids_str.is_empty() { + Vec::new() + } else { + boost_tx_ids_str.split(',').map(|s| s.to_string()).collect() + }; + + Ok(Activity::Onchain(OnchainActivity { + id: row.get(0)?, + tx_type: Self::parse_payment_type(row, 2)?, + timestamp: timestamp as u64, + created_at: created_at.map(|t| t as u64), + updated_at: updated_at.map(|t| t as u64), + seen_at: seen_at.map(|t| t as u64), + tx_id: row.get(7)?, + value: value as u64, + fee: fee as u64, + fee_rate: fee_rate as u64, + address: row.get(11)?, + confirmed: row.get(12)?, + is_boosted: row.get(13)?, + boost_tx_ids, + is_transfer: row.get(15)?, + does_exist: row.get(16)?, + confirm_timestamp: confirm_timestamp.map(|t| t as u64), + channel_id: row.get(18)?, + transfer_tx_id: row.get(19)?, + })) + } + "lightning" => { + let timestamp: i64 = row.get(3)?; + let created_at: Option<i64> = row.get(4)?; + let updated_at: Option<i64> = row.get(5)?; + let seen_at: Option<i64> = row.get(6)?; + let value: i64 = row.get(21)?; + let fee: Option<i64> =
row.get(23)?; + + Ok(Activity::Lightning(LightningActivity { + id: row.get(0)?, + tx_type: Self::parse_payment_type(row, 2)?, + timestamp: timestamp as u64, + created_at: created_at.map(|t| t as u64), + updated_at: updated_at.map(|t| t as u64), + seen_at: seen_at.map(|t| t as u64), + invoice: row.get(20)?, + value: value as u64, + status: Self::parse_payment_state(row, 22)?, + fee: fee.map(|f| f as u64), + message: row.get(24)?, + preimage: row.get(25)?, + })) + } + _ => Err(rusqlite::Error::InvalidColumnType( + 1, + "activity_type".to_string(), + rusqlite::types::Type::Text, + )), } - _ => Err(rusqlite::Error::InvalidColumnType( - 1, - "activity_type".to_string(), - rusqlite::types::Type::Text, - )), - } - }).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to execute query: {}", e), - })?; + }) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to execute query: {}", e), + })?; let mut activities = Vec::new(); for activity_res in activity_iter { @@ -817,21 +882,23 @@ impl ActivityDB { /// Retrieves a single activity by its ID. 
pub fn get_activity_by_id(&self, activity_id: &str) -> Result<Option<Activity>, ActivityError> { - let activity_type: String = match self.conn.query_row( - "SELECT activity_type FROM activities WHERE id = ?1", - [activity_id], - |row| row.get(0), - ) { - Ok(activity_type) => activity_type, - Err(rusqlite::Error::QueryReturnedNoRows) => return Ok(None), - Err(e) => return Err(ActivityError::RetrievalError { - error_details: format!("Failed to get activity type: {}", e), - }), - }; - - match activity_type.as_str() { - "onchain" => { - let sql = " + let activity_type: String = match self.conn.query_row( + "SELECT activity_type FROM activities WHERE id = ?1", + [activity_id], + |row| row.get(0), + ) { + Ok(activity_type) => activity_type, + Err(rusqlite::Error::QueryReturnedNoRows) => return Ok(None), + Err(e) => { + return Err(ActivityError::RetrievalError { + error_details: format!("Failed to get activity type: {}", e), + }) + } + }; + + match activity_type.as_str() { + "onchain" => { + let sql = " SELECT a.id, a.tx_type, o.tx_id, o.value, o.fee, o.fee_rate, o.address, o.confirmed, a.timestamp, o.is_boosted, @@ -841,58 +908,61 @@ impl ActivityDB { JOIN onchain_activity o ON a.id = o.id WHERE a.id = ?1"; - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = + self.conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; + + let activity = match stmt.query_row([activity_id], |row| { + let value: i64 = row.get(3)?; + let fee: i64 = row.get(4)?; + let fee_rate: i64 = row.get(5)?; + let timestamp: i64 = row.get(8)?; + let confirm_timestamp: Option<i64> = row.get(13)?; + let created_at: Option<i64> = row.get(16)?; + let updated_at: Option<i64> = row.get(17)?; + let seen_at: Option<i64> = row.get(18)?; + let boost_tx_ids_str: String = row.get(10)?; + let boost_tx_ids: Vec<String> = if boost_tx_ids_str.is_empty() { +
Vec::new() + } else { + boost_tx_ids_str.split(',').map(|s| s.to_string()).collect() + }; - let activity = match stmt.query_row([activity_id], |row| { - let value: i64 = row.get(3)?; - let fee: i64 = row.get(4)?; - let fee_rate: i64 = row.get(5)?; - let timestamp: i64 = row.get(8)?; - let confirm_timestamp: Option<i64> = row.get(13)?; - let created_at: Option<i64> = row.get(16)?; - let updated_at: Option<i64> = row.get(17)?; - let seen_at: Option<i64> = row.get(18)?; - let boost_tx_ids_str: String = row.get(10)?; - let boost_tx_ids: Vec<String> = if boost_tx_ids_str.is_empty() { - Vec::new() - } else { - boost_tx_ids_str.split(',').map(|s| s.to_string()).collect + Ok(Activity::Onchain(OnchainActivity { + id: row.get(0)?, + tx_type: Self::parse_payment_type(row, 1)?, + tx_id: row.get(2)?, + value: value as u64, + fee: fee as u64, + fee_rate: fee_rate as u64, + address: row.get(6)?, + confirmed: row.get(7)?, + timestamp: timestamp as u64, + is_boosted: row.get(9)?, + boost_tx_ids, + is_transfer: row.get(11)?, + does_exist: row.get(12)?, + confirm_timestamp: confirm_timestamp.map(|t| t as u64), + channel_id: row.get(14)?, + transfer_tx_id: row.get(15)?, + created_at: created_at.map(|t| t as u64), + updated_at: updated_at.map(|t| t as u64), + seen_at: seen_at.map(|t| t as u64), + })) + }) { + Ok(activity) => Ok(Some(activity)), + Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None), + Err(e) => Err(ActivityError::RetrievalError { + error_details: format!("Failed to get onchain activity: {}", e), + }), }; - - Ok(Activity::Onchain(OnchainActivity { - id: row.get(0)?, - tx_type: Self::parse_payment_type(row, 1)?, - tx_id: row.get(2)?, - value: value as u64, - fee: fee as u64, - fee_rate: fee_rate as u64, - address: row.get(6)?, - confirmed: row.get(7)?, - timestamp: timestamp as u64, - is_boosted: row.get(9)?, - boost_tx_ids, - is_transfer: row.get(11)?, - does_exist: row.get(12)?, - confirm_timestamp: confirm_timestamp.map(|t| t as u64), - channel_id: row.get(14)?, - transfer_tx_id:
row.get(15)?, - created_at: created_at.map(|t| t as u64), - updated_at: updated_at.map(|t| t as u64), - seen_at: seen_at.map(|t| t as u64), - })) - }) { - Ok(activity) => Ok(Some(activity)), - Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None), - Err(e) => Err(ActivityError::RetrievalError { - error_details: format!("Failed to get onchain activity: {}", e), - }), - }; - activity - }, - "lightning" => { - let sql = " + activity + } + "lightning" => { + let sql = " SELECT a.id, a.tx_type, l.status, l.value, l.fee, l.invoice, l.message, a.timestamp, @@ -901,44 +971,52 @@ impl ActivityDB { JOIN lightning_activity l ON a.id = l.id WHERE a.id = ?1"; - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; - - let activity = stmt.query_row([activity_id], |row| { - let value: i64 = row.get(3)?; - let fee: Option<i64> = row.get(4)?; - let timestamp: i64 = row.get(7)?; - let created_at: Option<i64> = row.get(9)?; - let updated_at: Option<i64> = row.get(10)?; - let seen_at: Option<i64> = row.get(11)?; - - Ok(Activity::Lightning(LightningActivity { - id: row.get(0)?, - tx_type: Self::parse_payment_type(row, 1)?, - status: Self::parse_payment_state(row, 2)?, - value: value as u64, - fee: fee.map(|f| f as u64), - invoice: row.get(5)?, - message: row.get(6)?, - timestamp: timestamp as u64, - preimage: row.get(8)?, - created_at: created_at.map(|t| t as u64), - updated_at: updated_at.map(|t| t as u64), - seen_at: seen_at.map(|t| t as u64), - })) - }).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to get lightning activity: {}", e), - }); + let mut stmt = + self.conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; + + let activity = stmt + .query_row([activity_id], |row| { + let value: i64 = row.get(3)?; + let fee: Option<i64> = row.get(4)?; + let timestamp: i64 = row.get(7)?; + let created_at:
Option<i64> = row.get(9)?; + let updated_at: Option<i64> = row.get(10)?; + let seen_at: Option<i64> = row.get(11)?; + + Ok(Activity::Lightning(LightningActivity { + id: row.get(0)?, + tx_type: Self::parse_payment_type(row, 1)?, + status: Self::parse_payment_state(row, 2)?, + value: value as u64, + fee: fee.map(|f| f as u64), + invoice: row.get(5)?, + message: row.get(6)?, + timestamp: timestamp as u64, + preimage: row.get(8)?, + created_at: created_at.map(|t| t as u64), + updated_at: updated_at.map(|t| t as u64), + seen_at: seen_at.map(|t| t as u64), + })) + }) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to get lightning activity: {}", e), + }); - Ok(Some(activity?)) - }, - _ => Ok(None), + Ok(Some(activity?)) + } + _ => Ok(None), + } } -} /// Retrieves an onchain activity by transaction ID. - pub fn get_activity_by_tx_id(&self, tx_id: &str) -> Result<Option<Activity>, ActivityError> { + pub fn get_activity_by_tx_id( + &self, + tx_id: &str, + ) -> Result<Option<Activity>, ActivityError> { let sql = " SELECT a.id, a.tx_type, o.tx_id, o.value, o.fee, o.fee_rate, @@ -950,9 +1028,12 @@ impl ActivityDB { WHERE o.tx_id = ?1 AND a.activity_type = 'onchain' LIMIT 1"; - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; let activity = match stmt.query_row([tx_id], |row| { let value: i64 = row.get(3)?; @@ -1003,10 +1084,17 @@ impl ActivityDB { } /// Updates an existing onchain activity by ID.
- pub fn update_onchain_activity_by_id(&mut self, activity_id: &str, activity: &OnchainActivity) -> Result<(), ActivityError> { - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + pub fn update_onchain_activity_by_id( + &mut self, + activity_id: &str, + activity: &OnchainActivity, + ) -> Result<(), ActivityError> { + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; let activities_sql = " UPDATE activities SET @@ -1014,16 +1102,18 @@ impl ActivityDB { timestamp = ?2 WHERE id = ?3 AND activity_type = 'onchain'"; - let rows = tx.execute( - activities_sql, - ( - Self::payment_type_to_string(&activity.tx_type), - activity.timestamp, - activity_id, - ), - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to update activities: {}", e), - })?; + let rows = tx + .execute( + activities_sql, + ( + Self::payment_type_to_string(&activity.tx_type), + activity.timestamp, + activity_id, + ), + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to update activities: {}", e), + })?; if rows == 0 { return Err(ActivityError::DataError { @@ -1068,7 +1158,8 @@ impl ActivityDB { &activity.transfer_tx_id, activity_id, ), - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to update onchain_activity: {}", e), })?; @@ -1080,10 +1171,17 @@ impl ActivityDB { } /// Updates an existing lightning activity by ID. 
- pub fn update_lightning_activity_by_id(&mut self, activity_id: &str, activity: &LightningActivity) -> Result<(), ActivityError> { - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + pub fn update_lightning_activity_by_id( + &mut self, + activity_id: &str, + activity: &LightningActivity, + ) -> Result<(), ActivityError> { + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; let activities_sql = " UPDATE activities SET @@ -1091,16 +1189,18 @@ impl ActivityDB { timestamp = ?2 WHERE id = ?3 AND activity_type = 'lightning'"; - let rows = tx.execute( - activities_sql, - ( - Self::payment_type_to_string(&activity.tx_type), - activity.timestamp, - activity_id, - ), - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to update activities: {}", e), - })?; + let rows = tx + .execute( + activities_sql, + ( + Self::payment_type_to_string(&activity.tx_type), + activity.timestamp, + activity_id, + ), + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to update activities: {}", e), + })?; if rows == 0 { return Err(ActivityError::DataError { @@ -1129,7 +1229,8 @@ impl ActivityDB { &activity.preimage, activity_id, ), - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to update lightning_activity: {}", e), })?; @@ -1141,13 +1242,20 @@ impl ActivityDB { } /// Marks an activity as seen by setting the seen_at timestamp. 
- pub fn mark_activity_as_seen(&mut self, activity_id: &str, seen_at: u64) -> Result<(), ActivityError> { - let rows = self.conn.execute( - "UPDATE activities SET seen_at = ?1 WHERE id = ?2", - rusqlite::params![seen_at as i64, activity_id], - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to mark activity as seen: {}", e), - })?; + pub fn mark_activity_as_seen( + &mut self, + activity_id: &str, + seen_at: u64, + ) -> Result<(), ActivityError> { + let rows = self + .conn + .execute( + "UPDATE activities SET seen_at = ?1 WHERE id = ?2", + rusqlite::params![seen_at as i64, activity_id], + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to mark activity as seen: {}", e), + })?; if rows == 0 { return Err(ActivityError::DataError { @@ -1160,15 +1268,15 @@ impl ActivityDB { /// Deletes an activity and associated data. pub fn delete_activity_by_id(&mut self, activity_id: &str) -> Result { - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; // Delete from activities table (this will cascade to other tables) - let rows = match tx.execute( - "DELETE FROM activities WHERE id = ?1", - [activity_id], - ) { + let rows = match tx.execute("DELETE FROM activities WHERE id = ?1", [activity_id]) { Ok(rows) => rows, Err(e) => { tx.rollback().ok(); @@ -1188,13 +1296,18 @@ impl ActivityDB { /// Add tags to an activity pub fn add_tags(&mut self, activity_id: &str, tags: &[String]) -> Result<(), ActivityError> { // Verify the activity exists - let exists = self.conn.query_row( - "SELECT 1 FROM activities WHERE id = ?1", - [activity_id], - |_| Ok(true) - ).optional().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to check activity existence: {}", e), - 
})?.unwrap_or(false); + let exists = self + .conn + .query_row( + "SELECT 1 FROM activities WHERE id = ?1", + [activity_id], + |_| Ok(true), + ) + .optional() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to check activity existence: {}", e), + })? + .unwrap_or(false); if !exists { return Err(ActivityError::DataError { @@ -1202,15 +1315,19 @@ impl ActivityDB { }); } - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; for tag in tags { tx.execute( "INSERT OR IGNORE INTO activity_tags (activity_id, tag) VALUES (?1, ?2)", [activity_id, tag], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to insert tag: {}", e), })?; } @@ -1224,15 +1341,19 @@ impl ActivityDB { /// Remove tags from an activity pub fn remove_tags(&mut self, activity_id: &str, tags: &[String]) -> Result<(), ActivityError> { - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; for tag in tags { tx.execute( "DELETE FROM activity_tags WHERE activity_id = ?1 AND tag = ?2", [activity_id, tag], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to remove tag: {}", e), })?; } @@ -1247,25 +1368,32 @@ impl ActivityDB { /// Get all tags for an activity pub fn get_tags(&self, activity_id: &str) -> Result<Vec<String>, ActivityError> { // Verify the activity exists - let exists = self.conn.query_row( - "SELECT 1 FROM activities WHERE id = ?1", - [activity_id], - |_| Ok(true) -
).optional().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to check activity existence: {}", e), - })?.unwrap_or(false); + let exists = self + .conn + .query_row( + "SELECT 1 FROM activities WHERE id = ?1", + [activity_id], + |_| Ok(true), + ) + .optional() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to check activity existence: {}", e), + })? + .unwrap_or(false); if !exists { return Ok(Vec::new()); } - let mut stmt = self.conn.prepare( - "SELECT tag FROM activity_tags WHERE activity_id = ?1", - ).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare("SELECT tag FROM activity_tags WHERE activity_id = ?1") + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; - let tags = stmt.query_map([activity_id], |row| row.get(0)) + let tags = stmt + .query_map([activity_id], |row| row.get(0)) .map_err(|e| ActivityError::RetrievalError { error_details: format!("Failed to execute query: {}", e), })? 
@@ -1278,7 +1406,12 @@ impl ActivityDB { } /// Get activities by tag with optional limit - pub fn get_activities_by_tag(&self, tag: &str, limit: Option<u32>, sort_direction: Option<SortDirection>) -> Result<Vec<Activity>, ActivityError> { + pub fn get_activities_by_tag( + &self, + tag: &str, + limit: Option<u32>, + sort_direction: Option<SortDirection>, + ) -> Result<Vec<Activity>, ActivityError> { let direction = sort_direction.unwrap_or_default(); let sql = format!( "SELECT a.id, a.activity_type @@ -1286,21 +1419,26 @@ impl ActivityDB { JOIN activity_tags t ON a.id = t.activity_id WHERE t.tag = ?1 ORDER BY a.timestamp {} {}", - Self::sort_direction_to_sql(direction), - limit.map_or(String::new(), |n| format!("LIMIT {}", n)) + Self::sort_direction_to_sql(direction), + limit.map_or(String::new(), |n| format!("LIMIT {}", n)) ); - let mut stmt = self.conn.prepare(&sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare(&sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; let rows = match stmt.query_map([tag], |row| { Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?)) }) { Ok(rows) => rows, - Err(e) => return Err(ActivityError::RetrievalError { - error_details: format!("Failed to execute query: {}", e), - }) + Err(e) => { + return Err(ActivityError::RetrievalError { + error_details: format!("Failed to execute query: {}", e), + }) + } }; let mut activities = Vec::new(); @@ -1319,13 +1457,15 @@ impl ActivityDB { /// Returns all unique tags stored in the database pub fn get_all_unique_tags(&self) -> Result<Vec<String>, ActivityError> { - let mut stmt = self.conn.prepare( - "SELECT DISTINCT tag FROM activity_tags ORDER BY tag ASC" - ).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare("SELECT DISTINCT tag FROM activity_tags ORDER BY tag ASC") + .map_err(|e|
ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; - let tags = stmt.query_map([], |row| row.get(0)) + let tags = stmt + .query_map([], |row| row.get(0)) .map_err(|e| ActivityError::RetrievalError { error_details: format!("Failed to execute query: {}", e), })? @@ -1339,41 +1479,37 @@ impl ActivityDB { /// Get all activity tags for backup pub fn get_all_activities_tags(&self) -> Result<Vec<ActivityTags>, ActivityError> { - let mut stmt = self.conn.prepare( - "SELECT activity_id, tag FROM activity_tags ORDER BY activity_id, tag" - ).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare("SELECT activity_id, tag FROM activity_tags ORDER BY activity_id, tag") + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; - let rows: Vec<(String, String)> = stmt.query_map([], |row| { - Ok(( - row.get(0)?, - row.get(1)?, - )) - }).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to execute query: {}", e), - })? - .collect::<Result<Vec<_>, _>>() - .map_err(|e| ActivityError::DataError { - error_details: format!("Failed to process rows: {}", e), - })?; + let rows: Vec<(String, String)> = stmt + .query_map([], |row| Ok((row.get(0)?, row.get(1)?))) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to execute query: {}", e), + })?
+            .collect::<Result<Vec<_>, _>>()
+            .map_err(|e| ActivityError::DataError {
+                error_details: format!("Failed to process rows: {}", e),
+            })?;

         // Group by activity_id
-        let mut grouped: std::collections::HashMap<String, Vec<String>> = std::collections::HashMap::new();
+        let mut grouped: std::collections::HashMap<String, Vec<String>> =
+            std::collections::HashMap::new();
         for (activity_id, tag) in rows {
-            grouped.entry(activity_id)
+            grouped
+                .entry(activity_id)
                 .or_insert_with(Vec::new)
                 .push(tag);
         }

-        let mut result: Vec<ActivityTags> = grouped.into_iter()
-            .map(|(activity_id, tags)| {
-                ActivityTags {
-                    activity_id,
-                    tags,
-                }
-            })
+        let mut result: Vec<ActivityTags> = grouped
+            .into_iter()
+            .map(|(activity_id, tags)| ActivityTags { activity_id, tags })
             .collect();

         // Sort for consistent output
@@ -1388,16 +1524,19 @@ impl ActivityDB {
             return Ok(());
         }

-        let tx = self.conn.transaction().map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to start transaction: {}", e),
-        })?;
+        let tx = self
+            .conn
+            .transaction()
+            .map_err(|e| ActivityError::DataError {
+                error_details: format!("Failed to start transaction: {}", e),
+            })?;

         {
-            let mut stmt = tx.prepare(
-                "INSERT OR IGNORE INTO activity_tags (activity_id, tag) VALUES (?1, ?2)"
-            ).map_err(|e| ActivityError::DataError {
-                error_details: format!("Failed to prepare statement: {}", e),
-            })?;
+            let mut stmt = tx
+                .prepare("INSERT OR IGNORE INTO activity_tags (activity_id, tag) VALUES (?1, ?2)")
+                .map_err(|e| ActivityError::DataError {
+                    error_details: format!("Failed to prepare statement: {}", e),
+                })?;

             for activity_tag in activity_tags {
                 if activity_tag.activity_id.is_empty() {
@@ -1410,9 +1549,10 @@ impl ActivityDB {
                     if tag.is_empty() {
                         continue; // Skip empty tags
                     }
-                    stmt.execute([&activity_tag.activity_id, tag]).map_err(|e| ActivityError::DataError {
-                        error_details: format!("Failed to insert tag: {}", e),
-                    })?;
+                    stmt.execute([&activity_tag.activity_id, tag])
+                        .map_err(|e| ActivityError::DataError {
+                            error_details: format!("Failed to insert tag: {}", e),
+
})?;
                 }
             }
         }
@@ -1426,28 +1566,40 @@ impl ActivityDB {

     /// Add pre-activity metadata for an onchain address or lightning invoice
     /// If the metadata has an address, any existing metadata with the same address will be removed first
-    pub fn add_pre_activity_metadata(&mut self, pre_activity_metadata: &PreActivityMetadata) -> Result<(), ActivityError> {
+    pub fn add_pre_activity_metadata(
+        &mut self,
+        pre_activity_metadata: &PreActivityMetadata,
+    ) -> Result<(), ActivityError> {
         if pre_activity_metadata.payment_id.is_empty() {
             return Err(ActivityError::DataError {
                 error_details: "Payment ID cannot be empty".to_string(),
             });
         }

-        let tags_json = serde_json::to_string(&pre_activity_metadata.tags).map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to serialize tags: {}", e),
+        let tags_json = serde_json::to_string(&pre_activity_metadata.tags).map_err(|e| {
+            ActivityError::DataError {
+                error_details: format!("Failed to serialize tags: {}", e),
+            }
         })?;

-        let tx = self.conn.transaction().map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to start transaction: {}", e),
-        })?;
+        let tx = self
+            .conn
+            .transaction()
+            .map_err(|e| ActivityError::DataError {
+                error_details: format!("Failed to start transaction: {}", e),
+            })?;

         if let Some(ref address) = pre_activity_metadata.address {
             if !address.is_empty() {
                 tx.execute(
                     "DELETE FROM pre_activity_metadata WHERE address = ?1",
                     [address],
-                ).map_err(|e| ActivityError::DataError {
-                    error_details: format!("Failed to delete existing metadata with address: {}", e),
+                )
+                .map_err(|e| ActivityError::DataError {
+                    error_details: format!(
+                        "Failed to delete existing metadata with address: {}",
+                        e
+                    ),
                 })?;
             }
         }
@@ -1479,15 +1631,23 @@ impl ActivityDB {

     /// Add tags to existing pre-activity metadata for an onchain address or lightning invoice
     /// Returns an error if the metadata doesn't exist
-    pub fn add_pre_activity_metadata_tags(&mut self, payment_id: &str, tags_to_add: &[String]) ->
Result<(), ActivityError> {
+    pub fn add_pre_activity_metadata_tags(
+        &mut self,
+        payment_id: &str,
+        tags_to_add: &[String],
+    ) -> Result<(), ActivityError> {
         // Get current metadata
-        let current_tags_json: Option<String> = self.conn.query_row(
-            "SELECT tags FROM pre_activity_metadata WHERE payment_id = ?1",
-            [payment_id],
-            |row| row.get(0)
-        ).optional().map_err(|e| ActivityError::RetrievalError {
-            error_details: format!("Failed to get current tags: {}", e),
-        })?;
+        let current_tags_json: Option<String> = self
+            .conn
+            .query_row(
+                "SELECT tags FROM pre_activity_metadata WHERE payment_id = ?1",
+                [payment_id],
+                |row| row.get(0),
+            )
+            .optional()
+            .map_err(|e| ActivityError::RetrievalError {
+                error_details: format!("Failed to get current tags: {}", e),
+            })?;

         let mut current_tags: Vec<String> = if let Some(tags_json) = current_tags_json {
             serde_json::from_str(&tags_json).map_err(|e| ActivityError::DataError {
@@ -1495,7 +1655,10 @@ impl ActivityDB {
             })?
         } else {
             return Err(ActivityError::DataError {
-                error_details: format!("Pre-activity metadata not found for payment_id: {}", payment_id),
+                error_details: format!(
+                    "Pre-activity metadata not found for payment_id: {}",
+                    payment_id
+                ),
             });
         };
@@ -1507,106 +1670,138 @@ impl ActivityDB {
         }

         // Update with merged tags
-        let updated_tags_json = serde_json::to_string(&current_tags).map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to serialize tags: {}", e),
-        })?;
+        let updated_tags_json =
+            serde_json::to_string(&current_tags).map_err(|e| ActivityError::DataError {
+                error_details: format!("Failed to serialize tags: {}", e),
+            })?;

-        self.conn.execute(
-            "UPDATE pre_activity_metadata SET tags = ?1 WHERE payment_id = ?2",
-            [&updated_tags_json, payment_id],
-        ).map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to update tags: {}", e),
-        })?;
+        self.conn
+            .execute(
+                "UPDATE pre_activity_metadata SET tags = ?1 WHERE payment_id = ?2",
+                [&updated_tags_json, payment_id],
+            )
+            .map_err(|e|
ActivityError::DataError {
+                error_details: format!("Failed to update tags: {}", e),
+            })?;

         Ok(())
     }

     /// Remove specific tags from pre-activity metadata for an onchain address or lightning invoice
-    pub fn remove_pre_activity_metadata_tags(&mut self, payment_id: &str, tags_to_remove: &[String]) -> Result<(), ActivityError> {
+    pub fn remove_pre_activity_metadata_tags(
+        &mut self,
+        payment_id: &str,
+        tags_to_remove: &[String],
+    ) -> Result<(), ActivityError> {
         // Get current metadata
-        let current_tags_json: Option<String> = self.conn.query_row(
-            "SELECT tags FROM pre_activity_metadata WHERE payment_id = ?1",
-            [payment_id],
-            |row| row.get(0)
-        ).optional().map_err(|e| ActivityError::RetrievalError {
-            error_details: format!("Failed to get current tags: {}", e),
-        })?;
+        let current_tags_json: Option<String> = self
+            .conn
+            .query_row(
+                "SELECT tags FROM pre_activity_metadata WHERE payment_id = ?1",
+                [payment_id],
+                |row| row.get(0),
+            )
+            .optional()
+            .map_err(|e| ActivityError::RetrievalError {
+                error_details: format!("Failed to get current tags: {}", e),
+            })?;

         if let Some(tags_json) = current_tags_json {
-            let mut current_tags: Vec<String> = serde_json::from_str(&tags_json).map_err(|e| ActivityError::DataError {
-                error_details: format!("Failed to deserialize tags: {}", e),
-            })?;
+            let mut current_tags: Vec<String> =
+                serde_json::from_str(&tags_json).map_err(|e| ActivityError::DataError {
+                    error_details: format!("Failed to deserialize tags: {}", e),
+                })?;

             // Remove tags
             current_tags.retain(|tag| !tags_to_remove.contains(tag));

             // Update with new tags
-            let updated_tags_json = serde_json::to_string(&current_tags).map_err(|e| ActivityError::DataError {
-                error_details: format!("Failed to serialize tags: {}", e),
-            })?;
+            let updated_tags_json =
+                serde_json::to_string(&current_tags).map_err(|e| ActivityError::DataError {
+                    error_details: format!("Failed to serialize tags: {}", e),
+                })?;

-            self.conn.execute(
-                "UPDATE pre_activity_metadata SET tags = ?1 WHERE payment_id = ?2",
-
[&updated_tags_json, payment_id],
-            ).map_err(|e| ActivityError::DataError {
-                error_details: format!("Failed to update tags: {}", e),
-            })?;
+            self.conn
+                .execute(
+                    "UPDATE pre_activity_metadata SET tags = ?1 WHERE payment_id = ?2",
+                    [&updated_tags_json, payment_id],
+                )
+                .map_err(|e| ActivityError::DataError {
+                    error_details: format!("Failed to update tags: {}", e),
+                })?;
         }

         Ok(())
     }

     /// Reset (clear all tags) from pre-activity metadata for an onchain address or lightning invoice
-    pub fn reset_pre_activity_metadata_tags(&mut self, payment_id: &str) -> Result<(), ActivityError> {
+    pub fn reset_pre_activity_metadata_tags(
+        &mut self,
+        payment_id: &str,
+    ) -> Result<(), ActivityError> {
         // Check if row exists first
-        let exists: bool = self.conn.query_row(
-            "SELECT EXISTS(SELECT 1 FROM pre_activity_metadata WHERE payment_id = ?1)",
-            [payment_id],
-            |row| row.get(0)
-        ).map_err(|e| ActivityError::RetrievalError {
-            error_details: format!("Failed to check if metadata exists: {}", e),
-        })?;
+        let exists: bool = self
+            .conn
+            .query_row(
+                "SELECT EXISTS(SELECT 1 FROM pre_activity_metadata WHERE payment_id = ?1)",
+                [payment_id],
+                |row| row.get(0),
+            )
+            .map_err(|e| ActivityError::RetrievalError {
+                error_details: format!("Failed to check if metadata exists: {}", e),
+            })?;

         if !exists {
             // Row doesn't exist, nothing to reset
             return Ok(());
         }

-        let empty_tags_json = serde_json::to_string(&Vec::<String>::new()).map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to serialize empty tags: {}", e),
-        })?;
+        let empty_tags_json =
+            serde_json::to_string(&Vec::<String>::new()).map_err(|e| ActivityError::DataError {
+                error_details: format!("Failed to serialize empty tags: {}", e),
+            })?;

-        self.conn.execute(
-            "UPDATE pre_activity_metadata SET tags = ?1 WHERE payment_id = ?2",
-            [&empty_tags_json, payment_id],
-        ).map_err(|e| ActivityError::DataError {
-            error_details: format!("Failed to reset pre-activity metadata tags: {}", e),
-        })?;
+        self.conn
.execute( + "UPDATE pre_activity_metadata SET tags = ?1 WHERE payment_id = ?2", + [&empty_tags_json, payment_id], + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to reset pre-activity metadata tags: {}", e), + })?; Ok(()) } /// Delete all pre-activity metadata for an onchain address or lightning invoice pub fn delete_pre_activity_metadata(&mut self, payment_id: &str) -> Result<(), ActivityError> { - self.conn.execute( - "DELETE FROM pre_activity_metadata WHERE payment_id = ?1", - [payment_id], - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to delete pre-activity metadata: {}", e), - })?; + self.conn + .execute( + "DELETE FROM pre_activity_metadata WHERE payment_id = ?1", + [payment_id], + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to delete pre-activity metadata: {}", e), + })?; Ok(()) } /// Bulk upsert pre-activity metadata for backup/restore - pub fn upsert_pre_activity_metadata(&mut self, pre_activity_metadata: &[PreActivityMetadata]) -> Result<(), ActivityError> { + pub fn upsert_pre_activity_metadata( + &mut self, + pre_activity_metadata: &[PreActivityMetadata], + ) -> Result<(), ActivityError> { if pre_activity_metadata.is_empty() { return Ok(()); } - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; { let mut stmt = tx.prepare( @@ -1616,8 +1811,10 @@ impl ActivityDB { })?; for metadata in pre_activity_metadata { - let tags_json = serde_json::to_string(&metadata.tags).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to serialize tags: {}", e), + let tags_json = serde_json::to_string(&metadata.tags).map_err(|e| { + ActivityError::DataError { + error_details: format!("Failed to serialize tags: {}", e), + 
} })?; stmt.execute(rusqlite::params![ @@ -1631,7 +1828,8 @@ impl ActivityDB { metadata.is_transfer, &metadata.channel_id, metadata.created_at as i64, - ]).map_err(|e| ActivityError::DataError { + ]) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to insert pre-activity metadata: {}", e), })?; } @@ -1645,7 +1843,11 @@ impl ActivityDB { } /// Get pre-activity metadata for a specific payment_id or address - pub fn get_pre_activity_metadata(&self, search_key: &str, search_by_address: bool) -> Result, ActivityError> { + pub fn get_pre_activity_metadata( + &self, + search_key: &str, + search_by_address: bool, + ) -> Result, ActivityError> { let sql = if search_by_address { " SELECT @@ -1660,9 +1862,12 @@ impl ActivityDB { WHERE payment_id = ?1" }; - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; match stmt.query_row([search_key], |row| { let payment_id_val: String = row.get(0)?; @@ -1676,11 +1881,14 @@ impl ActivityDB { let channel_id: Option = row.get(8)?; let created_at: i64 = row.get(9)?; - let tags: Vec = serde_json::from_str(&tags_json).map_err(|_e: serde_json::Error| rusqlite::Error::InvalidColumnType( - 1, - "tags".to_string(), - rusqlite::types::Type::Text, - ))?; + let tags: Vec = + serde_json::from_str(&tags_json).map_err(|_e: serde_json::Error| { + rusqlite::Error::InvalidColumnType( + 1, + "tags".to_string(), + rusqlite::types::Type::Text, + ) + })?; let created_at_u64 = created_at as u64; Ok(PreActivityMetadata { @@ -1712,33 +1920,59 @@ impl ActivityDB { error_details: format!("Failed to prepare statement: {}", e), })?; - let rows: Vec<(String, String, Option, Option, Option, bool, i64, bool, Option, i64)> = stmt.query_map([], |row| { - Ok(( - row.get(0)?, - 
row.get(1)?, - row.get(2)?, - row.get(3)?, - row.get(4)?, - row.get(5)?, - row.get(6)?, - row.get(7)?, - row.get(8)?, - row.get(9)?, - )) - }).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to execute query: {}", e), - })? - .collect::, _>>() - .map_err(|e| ActivityError::DataError { - error_details: format!("Failed to process rows: {}", e), - })?; + let rows: Vec<( + String, + String, + Option, + Option, + Option, + bool, + i64, + bool, + Option, + i64, + )> = stmt + .query_map([], |row| { + Ok(( + row.get(0)?, + row.get(1)?, + row.get(2)?, + row.get(3)?, + row.get(4)?, + row.get(5)?, + row.get(6)?, + row.get(7)?, + row.get(8)?, + row.get(9)?, + )) + }) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to execute query: {}", e), + })? + .collect::, _>>() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to process rows: {}", e), + })?; let mut result: Vec = Vec::new(); - for (payment_id, tags_json, payment_hash, tx_id, address, is_receive, fee_rate, is_transfer, channel_id, created_at) in rows { - let tags: Vec = serde_json::from_str(&tags_json).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to deserialize tags: {}", e), - })?; + for ( + payment_id, + tags_json, + payment_hash, + tx_id, + address, + is_receive, + fee_rate, + is_transfer, + channel_id, + created_at, + ) in rows + { + let tags: Vec = + serde_json::from_str(&tags_json).map_err(|e| ActivityError::DataError { + error_details: format!("Failed to deserialize tags: {}", e), + })?; let created_at_u64 = created_at as u64; result.push(PreActivityMetadata { @@ -1761,7 +1995,12 @@ impl ActivityDB { Ok(result) } - fn transfer_pre_activity_metadata_to_activity(&mut self, search_key: &str, activity_id: &str, search_by_address: bool) -> Result, ActivityError> { + fn transfer_pre_activity_metadata_to_activity( + &mut self, + search_key: &str, + activity_id: &str, + search_by_address: bool, + ) -> Result, 
ActivityError> { let metadata = match self.get_pre_activity_metadata(search_key, search_by_address)? { Some(m) => m, None => return Ok(Vec::new()), @@ -1769,16 +2008,20 @@ impl ActivityDB { let tags = metadata.tags; - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; if let Some(address) = &metadata.address { if !address.is_empty() { tx.execute( "UPDATE onchain_activity SET address = ?1 WHERE id = ?2", [address, activity_id], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to update address: {}", e), })?; } @@ -1788,7 +2031,8 @@ impl ActivityDB { tx.execute( "UPDATE onchain_activity SET fee_rate = ?1 WHERE id = ?2", rusqlite::params![metadata.fee_rate as i64, activity_id], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to update fee_rate: {}", e), })?; } @@ -1797,7 +2041,8 @@ impl ActivityDB { tx.execute( "UPDATE onchain_activity SET is_transfer = ?1 WHERE id = ?2", rusqlite::params![metadata.is_transfer, activity_id], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to update is_transfer: {}", e), })?; } @@ -1807,7 +2052,8 @@ impl ActivityDB { tx.execute( "UPDATE onchain_activity SET channel_id = ?1 WHERE id = ?2", [channel_id, activity_id], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to update channel_id: {}", e), })?; } @@ -1817,7 +2063,8 @@ impl ActivityDB { tx.execute( "INSERT OR IGNORE INTO activity_tags (activity_id, tag) VALUES (?1, ?2)", [activity_id, tag], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| 
ActivityError::DataError { error_details: format!("Failed to insert tag: {}", e), })?; } @@ -1826,14 +2073,16 @@ impl ActivityDB { tx.execute( "DELETE FROM pre_activity_metadata WHERE address = ?1 AND is_receive = 1", [search_key], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to delete pre-activity metadata: {}", e), })?; } else { tx.execute( "DELETE FROM pre_activity_metadata WHERE payment_id = ?1", [search_key], - ).map_err(|e| ActivityError::DataError { + ) + .map_err(|e| ActivityError::DataError { error_details: format!("Failed to delete pre-activity metadata: {}", e), })?; } @@ -1845,51 +2094,64 @@ impl ActivityDB { Ok(tags) } - pub fn upsert_closed_channel(&mut self, channel: &ClosedChannelDetails) -> Result<(), ActivityError> { + pub fn upsert_closed_channel( + &mut self, + channel: &ClosedChannelDetails, + ) -> Result<(), ActivityError> { if channel.channel_id.is_empty() { return Err(ActivityError::DataError { error_details: "Channel ID cannot be empty".to_string(), }); } - self.conn.execute( - UPSERT_CLOSED_CHANNEL_SQL, - rusqlite::params![ - &channel.channel_id, - &channel.counterparty_node_id, - &channel.funding_txo_txid, - channel.funding_txo_index as i64, - channel.channel_value_sats as i64, - channel.closed_at as i64, - channel.outbound_capacity_msat as i64, - channel.inbound_capacity_msat as i64, - channel.counterparty_unspendable_punishment_reserve as i64, - channel.unspendable_punishment_reserve as i64, - channel.forwarding_fee_proportional_millionths as i64, - channel.forwarding_fee_base_msat as i64, - &channel.channel_name, - &channel.channel_closure_reason, - ], - ).map_err(|e| ActivityError::InsertError { - error_details: format!("Failed to insert closed channel: {}", e), - })?; + self.conn + .execute( + UPSERT_CLOSED_CHANNEL_SQL, + rusqlite::params![ + &channel.channel_id, + &channel.counterparty_node_id, + &channel.funding_txo_txid, + channel.funding_txo_index as i64, 
+ channel.channel_value_sats as i64, + channel.closed_at as i64, + channel.outbound_capacity_msat as i64, + channel.inbound_capacity_msat as i64, + channel.counterparty_unspendable_punishment_reserve as i64, + channel.unspendable_punishment_reserve as i64, + channel.forwarding_fee_proportional_millionths as i64, + channel.forwarding_fee_base_msat as i64, + &channel.channel_name, + &channel.channel_closure_reason, + ], + ) + .map_err(|e| ActivityError::InsertError { + error_details: format!("Failed to insert closed channel: {}", e), + })?; Ok(()) } - pub fn upsert_closed_channels(&mut self, channels: &[ClosedChannelDetails]) -> Result<(), ActivityError> { + pub fn upsert_closed_channels( + &mut self, + channels: &[ClosedChannelDetails], + ) -> Result<(), ActivityError> { if channels.is_empty() { return Ok(()); } - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; { - let mut stmt = tx.prepare(UPSERT_CLOSED_CHANNEL_SQL).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = + tx.prepare(UPSERT_CLOSED_CHANNEL_SQL) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to prepare statement: {}", e), + })?; for channel in channels { if channel.channel_id.is_empty() { @@ -1913,8 +2175,12 @@ impl ActivityDB { channel.forwarding_fee_base_msat as i64, &channel.channel_name, &channel.channel_closure_reason, - ]).map_err(|e| ActivityError::InsertError { - error_details: format!("Failed to insert closed channel {}: {}", channel.channel_id, e), + ]) + .map_err(|e| ActivityError::InsertError { + error_details: format!( + "Failed to insert closed channel {}: {}", + channel.channel_id, e + ), })?; } } @@ -1926,7 +2192,10 @@ impl ActivityDB { Ok(()) 
} - pub fn get_closed_channel_by_id(&self, channel_id: &str) -> Result, ActivityError> { + pub fn get_closed_channel_by_id( + &self, + channel_id: &str, + ) -> Result, ActivityError> { let sql = " SELECT channel_id, counterparty_node_id, funding_txo_txid, funding_txo_index, @@ -1937,9 +2206,12 @@ impl ActivityDB { FROM closed_channels WHERE channel_id = ?1"; - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = self + .conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; match stmt.query_row([channel_id], |row| { let channel_value_sats: i64 = row.get(4)?; @@ -1956,7 +2228,8 @@ impl ActivityDB { closed_at: row.get::<_, i64>(5)? as u64, outbound_capacity_msat: outbound_capacity_msat as u64, inbound_capacity_msat: inbound_capacity_msat as u64, - counterparty_unspendable_punishment_reserve: counterparty_unspendable_punishment_reserve as u64, + counterparty_unspendable_punishment_reserve: + counterparty_unspendable_punishment_reserve as u64, unspendable_punishment_reserve: row.get::<_, i64>(9)? as u64, forwarding_fee_proportional_millionths: row.get::<_, i64>(10)? as u32, forwarding_fee_base_msat: row.get::<_, i64>(11)? 
as u32, @@ -1972,7 +2245,10 @@ impl ActivityDB { } } - pub fn get_all_closed_channels(&self, sort_direction: Option) -> Result, ActivityError> { + pub fn get_all_closed_channels( + &self, + sort_direction: Option, + ) -> Result, ActivityError> { let direction = sort_direction.unwrap_or_default(); let sql = format!( " @@ -1988,39 +2264,45 @@ impl ActivityDB { Self::sort_direction_to_sql(direction) ); - let mut stmt = self.conn.prepare(&sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; - - let channels = stmt.query_map([], |row| { - let channel_value_sats: i64 = row.get(4)?; - let outbound_capacity_msat: i64 = row.get(6)?; - let inbound_capacity_msat: i64 = row.get(7)?; - let counterparty_unspendable_punishment_reserve: i64 = row.get(8)?; + let mut stmt = self + .conn + .prepare(&sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; - Ok(ClosedChannelDetails { - channel_id: row.get(0)?, - counterparty_node_id: row.get(1)?, - funding_txo_txid: row.get(2)?, - funding_txo_index: row.get::<_, i64>(3)? as u32, - channel_value_sats: channel_value_sats as u64, - closed_at: row.get::<_, i64>(5)? as u64, - outbound_capacity_msat: outbound_capacity_msat as u64, - inbound_capacity_msat: inbound_capacity_msat as u64, - counterparty_unspendable_punishment_reserve: counterparty_unspendable_punishment_reserve as u64, - unspendable_punishment_reserve: row.get::<_, i64>(9)? as u64, - forwarding_fee_proportional_millionths: row.get::<_, i64>(10)? as u32, - forwarding_fee_base_msat: row.get::<_, i64>(11)? 
as u32, - channel_name: row.get(12)?, - channel_closure_reason: row.get(13)?, + let channels = stmt + .query_map([], |row| { + let channel_value_sats: i64 = row.get(4)?; + let outbound_capacity_msat: i64 = row.get(6)?; + let inbound_capacity_msat: i64 = row.get(7)?; + let counterparty_unspendable_punishment_reserve: i64 = row.get(8)?; + + Ok(ClosedChannelDetails { + channel_id: row.get(0)?, + counterparty_node_id: row.get(1)?, + funding_txo_txid: row.get(2)?, + funding_txo_index: row.get::<_, i64>(3)? as u32, + channel_value_sats: channel_value_sats as u64, + closed_at: row.get::<_, i64>(5)? as u64, + outbound_capacity_msat: outbound_capacity_msat as u64, + inbound_capacity_msat: inbound_capacity_msat as u64, + counterparty_unspendable_punishment_reserve: + counterparty_unspendable_punishment_reserve as u64, + unspendable_punishment_reserve: row.get::<_, i64>(9)? as u64, + forwarding_fee_proportional_millionths: row.get::<_, i64>(10)? as u32, + forwarding_fee_base_msat: row.get::<_, i64>(11)? as u32, + channel_name: row.get(12)?, + channel_closure_reason: row.get(13)?, + }) }) - }).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to execute query: {}", e), - })? - .collect::, _>>() - .map_err(|e| ActivityError::DataError { - error_details: format!("Failed to process rows: {}", e), - })?; + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to execute query: {}", e), + })? 
+ .collect::, _>>() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to process rows: {}", e), + })?; Ok(channels) } @@ -2073,13 +2355,14 @@ impl ActivityDB { fn sort_direction_to_sql(direction: SortDirection) -> &'static str { match direction { SortDirection::Asc => "ASC", - SortDirection::Desc => "DESC" + SortDirection::Desc => "DESC", } } /// Wipes all closed channels from the database pub fn wipe_all_closed_channels(&mut self) -> Result<(), ActivityError> { - self.conn.execute("DELETE FROM closed_channels", []) + self.conn + .execute("DELETE FROM closed_channels", []) .map_err(|e| ActivityError::DataError { error_details: format!("Failed to delete all closed channels: {}", e), })?; @@ -2088,25 +2371,34 @@ impl ActivityDB { } pub fn remove_closed_channel_by_id(&mut self, channel_id: &str) -> Result { - let rows = self.conn.execute( - "DELETE FROM closed_channels WHERE channel_id = ?1", - [channel_id], - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to delete closed channel: {}", e), - })?; + let rows = self + .conn + .execute( + "DELETE FROM closed_channels WHERE channel_id = ?1", + [channel_id], + ) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to delete closed channel: {}", e), + })?; Ok(rows > 0) } /// Upserts transaction details for one or more onchain transactions. 
- pub fn upsert_transaction_details(&mut self, details_list: &[TransactionDetails]) -> Result<(), ActivityError> { + pub fn upsert_transaction_details( + &mut self, + details_list: &[TransactionDetails], + ) -> Result<(), ActivityError> { if details_list.is_empty() { return Ok(()); } - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; { let mut stmt = tx.prepare( @@ -2122,12 +2414,16 @@ impl ActivityDB { }); } - let inputs_json = serde_json::to_string(&details.inputs).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to serialize inputs: {}", e), + let inputs_json = serde_json::to_string(&details.inputs).map_err(|e| { + ActivityError::DataError { + error_details: format!("Failed to serialize inputs: {}", e), + } })?; - let outputs_json = serde_json::to_string(&details.outputs).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to serialize outputs: {}", e), + let outputs_json = serde_json::to_string(&details.outputs).map_err(|e| { + ActivityError::DataError { + error_details: format!("Failed to serialize outputs: {}", e), + } })?; stmt.execute(rusqlite::params![ @@ -2135,7 +2431,8 @@ impl ActivityDB { details.amount_sats, &inputs_json, &outputs_json, - ]).map_err(|e| ActivityError::InsertError { + ]) + .map_err(|e| ActivityError::InsertError { error_details: format!("Failed to upsert transaction details: {}", e), })?; } @@ -2149,12 +2446,19 @@ impl ActivityDB { } /// Retrieves transaction details by transaction ID. 
- pub fn get_transaction_details(&self, tx_id: &str) -> Result, ActivityError> { - let sql = "SELECT tx_id, amount_sats, inputs, outputs FROM transaction_details WHERE tx_id = ?1"; - - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + pub fn get_transaction_details( + &self, + tx_id: &str, + ) -> Result, ActivityError> { + let sql = + "SELECT tx_id, amount_sats, inputs, outputs FROM transaction_details WHERE tx_id = ?1"; + + let mut stmt = self + .conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), + })?; match stmt.query_row([tx_id], |row| { let tx_id: String = row.get(0)?; @@ -2163,11 +2467,19 @@ impl ActivityDB { let outputs_json: String = row.get(3)?; let inputs: Vec = serde_json::from_str(&inputs_json).map_err(|_| { - rusqlite::Error::InvalidColumnType(2, "inputs".to_string(), rusqlite::types::Type::Text) + rusqlite::Error::InvalidColumnType( + 2, + "inputs".to_string(), + rusqlite::types::Type::Text, + ) })?; let outputs: Vec = serde_json::from_str(&outputs_json).map_err(|_| { - rusqlite::Error::InvalidColumnType(3, "outputs".to_string(), rusqlite::types::Type::Text) + rusqlite::Error::InvalidColumnType( + 3, + "outputs".to_string(), + rusqlite::types::Type::Text, + ) })?; Ok(TransactionDetails { @@ -2187,35 +2499,49 @@ impl ActivityDB { /// Retrieves all transaction details. 
pub fn get_all_transaction_details(&self) -> Result, ActivityError> { - let sql = "SELECT tx_id, amount_sats, inputs, outputs FROM transaction_details ORDER BY tx_id"; - - let mut stmt = self.conn.prepare(sql).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to prepare statement: {}", e), - })?; - - let rows = stmt.query_map([], |row| { - let tx_id: String = row.get(0)?; - let amount_sats: i64 = row.get(1)?; - let inputs_json: String = row.get(2)?; - let outputs_json: String = row.get(3)?; + let sql = + "SELECT tx_id, amount_sats, inputs, outputs FROM transaction_details ORDER BY tx_id"; - let inputs: Vec = serde_json::from_str(&inputs_json).map_err(|_| { - rusqlite::Error::InvalidColumnType(2, "inputs".to_string(), rusqlite::types::Type::Text) + let mut stmt = self + .conn + .prepare(sql) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to prepare statement: {}", e), })?; - let outputs: Vec = serde_json::from_str(&outputs_json).map_err(|_| { - rusqlite::Error::InvalidColumnType(3, "outputs".to_string(), rusqlite::types::Type::Text) - })?; + let rows = stmt + .query_map([], |row| { + let tx_id: String = row.get(0)?; + let amount_sats: i64 = row.get(1)?; + let inputs_json: String = row.get(2)?; + let outputs_json: String = row.get(3)?; + + let inputs: Vec = serde_json::from_str(&inputs_json).map_err(|_| { + rusqlite::Error::InvalidColumnType( + 2, + "inputs".to_string(), + rusqlite::types::Type::Text, + ) + })?; - Ok(TransactionDetails { - tx_id, - amount_sats, - inputs, - outputs, + let outputs: Vec = serde_json::from_str(&outputs_json).map_err(|_| { + rusqlite::Error::InvalidColumnType( + 3, + "outputs".to_string(), + rusqlite::types::Type::Text, + ) + })?; + + Ok(TransactionDetails { + tx_id, + amount_sats, + inputs, + outputs, + }) }) - }).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to execute query: {}", e), - })?; + .map_err(|e| ActivityError::RetrievalError { + 
error_details: format!("Failed to execute query: {}", e), + })?; let mut results = Vec::new(); for row in rows { @@ -2229,19 +2555,20 @@ impl ActivityDB { /// Deletes transaction details by transaction ID. pub fn delete_transaction_details(&mut self, tx_id: &str) -> Result<bool, ActivityError> { - let rows = self.conn.execute( - "DELETE FROM transaction_details WHERE tx_id = ?1", - [tx_id], - ).map_err(|e| ActivityError::DataError { - error_details: format!("Failed to delete transaction details: {}", e), - })?; + let rows = self + .conn + .execute("DELETE FROM transaction_details WHERE tx_id = ?1", [tx_id]) + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to delete transaction details: {}", e), + })?; Ok(rows > 0) } /// Wipes all transaction details from the database. pub fn wipe_all_transaction_details(&mut self) -> Result<(), ActivityError> { - self.conn.execute("DELETE FROM transaction_details", []) + self.conn + .execute("DELETE FROM transaction_details", []) .map_err(|e| ActivityError::DataError { error_details: format!("Failed to delete all transaction details: {}", e), })?; @@ -2253,9 +2580,12 @@ impl ActivityDB { /// This deletes all activities, which cascades to delete all activity_tags due to foreign key constraints. /// Also deletes all pre_activity_metadata and closed_channels. pub fn wipe_all(&mut self) -> Result<(), ActivityError> { - let tx = self.conn.transaction().map_err(|e| ActivityError::DataError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = self + .conn + .transaction() + .map_err(|e| ActivityError::DataError { + error_details: format!("Failed to start transaction: {}", e), + })?; // Delete from activities table (this will cascade to delete activity_tags due to foreign key constraints) tx.execute("DELETE FROM activities", []) @@ -2292,19 +2622,22 @@ impl ActivityDB { /// This checks for any onchain activities where the address appears, regardless /// of whether it's a sent or received transaction. 
pub fn is_address_used(&self, address: &str) -> Result<bool, ActivityError> { - let count: i64 = self.conn.query_row( - " + let count: i64 = self + .conn + .query_row( + " SELECT COUNT(*) FROM activities a JOIN onchain_activity o ON a.id = o.id WHERE o.address = ?1 AND a.activity_type = 'onchain' ", - [address], - |row| row.get(0), - ).map_err(|e| ActivityError::RetrievalError { - error_details: format!("Failed to check address usage: {}", e), - })?; + [address], + |row| row.get(0), + ) + .map_err(|e| ActivityError::RetrievalError { + error_details: format!("Failed to check address usage: {}", e), + })?; Ok(count > 0) } -} \ No newline at end of file +} diff --git a/src/modules/activity/mod.rs b/src/modules/activity/mod.rs index 03a9212..73c051c 100644 --- a/src/modules/activity/mod.rs +++ b/src/modules/activity/mod.rs @@ -1,8 +1,8 @@ -mod implementation; -mod types; mod errors; +mod implementation; mod tests; +mod types; +pub use errors::*; pub use implementation::*; pub use types::*; -pub use errors::*; \ No newline at end of file diff --git a/src/modules/activity/tests.rs b/src/modules/activity/tests.rs index c443c6a..9903e87 100644 --- a/src/modules/activity/tests.rs +++ b/src/modules/activity/tests.rs @@ -1,8 +1,12 @@ #[cfg(test)] mod tests { - use crate::activity::{ActivityDB, OnchainActivity, LightningActivity, PaymentType, PaymentState, Activity, ActivityFilter, ActivityType, SortDirection, ClosedChannelDetails, ActivityTags, PreActivityMetadata, TransactionDetails, TxInput, TxOutput}; - use std::fs; + use crate::activity::{ + Activity, ActivityDB, ActivityFilter, ActivityTags, ActivityType, ClosedChannelDetails, + LightningActivity, OnchainActivity, PaymentState, PaymentType, PreActivityMetadata, + SortDirection, TransactionDetails, TxInput, TxOutput, + }; use rand::random; + use std::fs; fn setup() -> (ActivityDB, String) { let db_path = format!("test_db_{}.sqlite", random::()); @@ -74,7 +78,11 @@ mod tests { } } - fn create_test_pre_activity_metadata(payment_id: String, 
_payment_type: ActivityType, tags: Vec<String>) -> PreActivityMetadata { + fn create_test_pre_activity_metadata( + payment_id: String, + _payment_type: ActivityType, + tags: Vec<String>, + ) -> PreActivityMetadata { PreActivityMetadata { payment_id, tags, @@ -92,7 +100,10 @@ mod tests { #[test] fn test_db_initialization() { let (db, db_path) = setup(); - assert!(db.conn.is_autocommit(), "Database should be in autocommit mode"); + assert!( + db.conn.is_autocommit(), + "Database should be in autocommit mode" + ); cleanup(&db_path); } @@ -102,7 +113,18 @@ let activity = create_test_onchain_activity(); assert!(db.insert_onchain_activity(&activity).is_ok()); - let activities = db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, None, None).unwrap(); + let activities = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(activities.len(), 1); if let Activity::Onchain(retrieved) = &activities[0] { assert_eq!(retrieved.id, activity.id); @@ -123,7 +145,18 @@ let activity = create_test_lightning_activity(); assert!(db.insert_lightning_activity(&activity).is_ok()); - let activities = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, None, None).unwrap(); + let activities = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(activities.len(), 1); if let Activity::Lightning(retrieved) = &activities[0] { assert_eq!(retrieved.id, activity.id); @@ -147,7 +180,18 @@ db.insert_onchain_activity(&onchain).unwrap(); db.insert_lightning_activity(&lightning).unwrap(); - let all_activities = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, None).unwrap(); + let all_activities = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); 
assert_eq!(all_activities.len(), 2); // Check ordering by timestamp descending (they have the same timestamp in this test) @@ -164,7 +208,18 @@ mod tests { let activity = create_test_onchain_activity(); db.insert_onchain_activity(&activity).unwrap(); - let retrieved = db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, None, None).unwrap(); + let retrieved = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); if let Activity::Onchain(activity) = &retrieved[0] { assert!(activity.created_at.is_some()); assert!(activity.updated_at.is_some()); @@ -187,7 +242,18 @@ mod tests { db.insert_onchain_activity(&activity1).unwrap(); db_clone.insert_lightning_activity(&activity2).unwrap(); - let all_activities = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, None).unwrap(); + let all_activities = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(all_activities.len(), 2); cleanup(&db_path); @@ -208,7 +274,18 @@ mod tests { db.insert_onchain_activity(&onchain2).unwrap(); db.insert_lightning_activity(&lightning).unwrap(); - let activities = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, None).unwrap(); + let activities = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); let timestamps: Vec<u64> = activities.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(timestamps, vec![2000, 1500, 1000]); @@ -233,17 +310,61 @@ mod tests { } // Test limits with different filters - let all = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, Some(3), None).unwrap(); + let all = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + Some(3), + None, + ) + .unwrap(); assert_eq!(all.len(), 3); - let onchain = 
db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, Some(2), None).unwrap(); + let onchain = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + Some(2), + None, + ) + .unwrap(); assert_eq!(onchain.len(), 2); - let lightning = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, Some(4), None).unwrap(); + let lightning = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + Some(4), + None, + ) + .unwrap(); assert_eq!(lightning.len(), 4); // Test without limits - let all = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, None).unwrap(); + let all = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(all.len(), 10); cleanup(&db_path); @@ -252,16 +373,51 @@ mod tests { #[test] fn test_zero_limit() { let (mut db, db_path) = setup(); - db.insert_onchain_activity(&create_test_onchain_activity()).unwrap(); - db.insert_lightning_activity(&create_test_lightning_activity()).unwrap(); - - let all = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, Some(0), None).unwrap(); + db.insert_onchain_activity(&create_test_onchain_activity()) + .unwrap(); + db.insert_lightning_activity(&create_test_lightning_activity()) + .unwrap(); + + let all = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + Some(0), + None, + ) + .unwrap(); assert_eq!(all.len(), 0); - let onchain = db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, Some(0), None).unwrap(); + let onchain = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + Some(0), + None, + ) + .unwrap(); assert_eq!(onchain.len(), 0); - let lightning = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, Some(0), 
None).unwrap(); + let lightning = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + Some(0), + None, + ) + .unwrap(); assert_eq!(lightning.len(), 0); cleanup(&db_path); @@ -292,7 +448,8 @@ mod tests { let tags = vec!["payment".to_string(), "coffee".to_string()]; db.add_tags(&activity.id, &tags).unwrap(); - db.remove_tags(&activity.id, &vec!["payment".to_string()]).unwrap(); + db.remove_tags(&activity.id, &vec!["payment".to_string()]) + .unwrap(); let remaining_tags = db.get_tags(&activity.id).unwrap(); assert_eq!(remaining_tags.len(), 1); assert_eq!(remaining_tags[0], "coffee"); @@ -311,7 +468,8 @@ mod tests { db.insert_lightning_activity(&lightning).unwrap(); db.add_tags(&onchain.id, &["payment".to_string()]).unwrap(); - db.add_tags(&lightning.id, &["payment".to_string()]).unwrap(); + db.add_tags(&lightning.id, &["payment".to_string()]) + .unwrap(); let activities = db.get_activities_by_tag("payment", None, None).unwrap(); assert_eq!(activities.len(), 2); @@ -367,7 +525,10 @@ mod tests { db.delete_activity_by_id(&activity.id).unwrap(); let tags = db.get_tags(&activity.id).unwrap(); - assert!(tags.is_empty(), "Tags should be removed after activity deletion"); + assert!( + tags.is_empty(), + "Tags should be removed after activity deletion" + ); cleanup(&db_path); } @@ -390,7 +551,9 @@ mod tests { // These operations should fail or return empty results after deletion assert!(db.get_activity_by_id(&activity.id).unwrap().is_none()); - assert!(db.update_onchain_activity_by_id(&activity.id, &activity).is_err()); + assert!(db + .update_onchain_activity_by_id(&activity.id, &activity) + .is_err()); assert!(db.add_tags(&activity.id, &["test".to_string()]).is_err()); cleanup(&db_path); @@ -410,7 +573,11 @@ mod tests { activity.confirm_timestamp = Some(safe_max - 1); let result = db.insert_onchain_activity(&activity); - assert!(result.is_ok(), "Failed to insert activity: {:?}", result.err()); + assert!( + result.is_ok(), + 
"Failed to insert activity: {:?}", + result.err() + ); let retrieved = db.get_activity_by_id(&activity.id).unwrap().unwrap(); if let Activity::Onchain(retrieved) = retrieved { @@ -469,7 +636,18 @@ mod tests { activity.fee = Some(i64::MAX as u64); assert!(db.insert_lightning_activity(&activity).is_ok()); - let activities = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, None, None).unwrap(); + let activities = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(activities.len(), 3); for act in activities { @@ -531,7 +709,9 @@ mod tests { // Use a large but safe value activity.value = 1_000_000_000_000; - assert!(db.update_onchain_activity_by_id(&activity.id, &activity).is_ok()); + assert!(db + .update_onchain_activity_by_id(&activity.id, &activity) + .is_ok()); let retrieved = db.get_activity_by_id(&activity.id).unwrap().unwrap(); if let Activity::Onchain(retrieved) = retrieved { @@ -673,12 +853,34 @@ mod tests { } // Test ascending order - let asc_results = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, Some(SortDirection::Asc)).unwrap(); + let asc_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + Some(SortDirection::Asc), + ) + .unwrap(); let asc_timestamps: Vec<u64> = asc_results.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(asc_timestamps, vec![1000, 1001, 1002]); // Test descending order - let desc_results = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, Some(SortDirection::Desc)).unwrap(); + let desc_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + Some(SortDirection::Desc), + ) + .unwrap(); let desc_timestamps: Vec<u64> = desc_results.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(desc_timestamps, vec![1002, 1001, 1000]); @@ -705,12 +907,16 @@ 
mod tests { db.add_tags(&onchain2.id, &[tag.clone()]).unwrap(); // Test ascending order - let asc_activities = db.get_activities_by_tag(&tag, None, Some(SortDirection::Asc)).unwrap(); + let asc_activities = db + .get_activities_by_tag(&tag, None, Some(SortDirection::Asc)) + .unwrap(); let asc_timestamps: Vec<u64> = asc_activities.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(asc_timestamps, vec![1000, 2000]); // Test descending order - let desc_activities = db.get_activities_by_tag(&tag, None, Some(SortDirection::Desc)).unwrap(); + let desc_activities = db + .get_activities_by_tag(&tag, None, Some(SortDirection::Desc)) + .unwrap(); let desc_timestamps: Vec<u64> = desc_activities.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(desc_timestamps, vec![2000, 1000]); @@ -730,12 +936,34 @@ mod tests { } // Test ascending order with limit - let asc_limited = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, Some(3), Some(SortDirection::Asc)).unwrap(); + let asc_limited = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + Some(3), + Some(SortDirection::Asc), + ) + .unwrap(); let asc_timestamps: Vec<u64> = asc_limited.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(asc_timestamps, vec![1000, 1001, 1002]); // Test descending order with limit - let desc_limited = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, Some(3), Some(SortDirection::Desc)).unwrap(); + let desc_limited = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + Some(3), + Some(SortDirection::Desc), + ) + .unwrap(); let desc_timestamps: Vec<u64> = desc_limited.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(desc_timestamps, vec![1004, 1003, 1002]); @@ -762,7 +990,18 @@ mod tests { db.insert_onchain_activity(&onchain2).unwrap(); // Test ascending order - let asc_results = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, 
Some(SortDirection::Asc)).unwrap(); + let asc_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + Some(SortDirection::Asc), + ) + .unwrap(); let asc_timestamps: Vec<u64> = asc_results.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(asc_timestamps, vec![1000, 2000, 3000]); @@ -789,7 +1028,18 @@ mod tests { db.insert_onchain_activity(&onchain2).unwrap(); // Test with None sort direction (should default to Desc) - let default_results = db.get_activities(Some(ActivityFilter::All), None, None, None, None, None, None, None).unwrap(); + let default_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); let timestamps: Vec<u64> = default_results.iter().map(|a| a.get_timestamp()).collect(); assert_eq!(timestamps, vec![2000, 1000]); @@ -812,32 +1062,40 @@ mod tests { db.insert_onchain_activity(&received_activity).unwrap(); // Test filtering by sent - let sent_activities = db.get_activities( - Some(ActivityFilter::All), - Some(PaymentType::Sent), - None, - None, - None, - None, - None, - None - ).unwrap(); + let sent_activities = db + .get_activities( + Some(ActivityFilter::All), + Some(PaymentType::Sent), + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(sent_activities.len(), 1); - assert!(matches!(sent_activities[0], Activity::Onchain(ref a) if a.tx_type == PaymentType::Sent)); + assert!( + matches!(sent_activities[0], Activity::Onchain(ref a) if a.tx_type == PaymentType::Sent) + ); // Test filtering by received - let received_activities = db.get_activities( - Some(ActivityFilter::All), - Some(PaymentType::Received), - None, - None, - None, - None, - None, - None - ).unwrap(); + let received_activities = db + .get_activities( + Some(ActivityFilter::All), + Some(PaymentType::Received), + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(received_activities.len(), 1); - 
assert!(matches!(received_activities[0], Activity::Onchain(ref a) if a.tx_type == PaymentType::Received)); + assert!( + matches!(received_activities[0], Activity::Onchain(ref a) if a.tx_type == PaymentType::Received) + ); cleanup(&db_path); } @@ -857,30 +1115,34 @@ mod tests { db.insert_lightning_activity(&lightning).unwrap(); // Test address search - let address_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("xyz123".to_string()), - None, - None, - None, - None - ).unwrap(); + let address_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("xyz123".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(address_results.len(), 1); assert!(matches!(address_results[0], Activity::Onchain(_))); // Test message search - let message_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("Coffee".to_string()), - None, - None, - None, - None - ).unwrap(); + let message_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("Coffee".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(message_results.len(), 1); assert!(matches!(message_results[0], Activity::Lightning(_))); @@ -907,42 +1169,48 @@ mod tests { db.insert_onchain_activity(&activity3).unwrap(); // Test min date - let min_date_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - None, - Some(1500), - None, - None, - None - ).unwrap(); + let min_date_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + Some(1500), + None, + None, + None, + ) + .unwrap(); assert_eq!(min_date_results.len(), 2); // Test max date - let max_date_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - None, - None, - Some(2500), - None, - None - ).unwrap(); + let max_date_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + None, + Some(2500), + None, + None, + ) + 
.unwrap(); assert_eq!(max_date_results.len(), 2); // Test date range - let range_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - None, - Some(1500), - Some(2500), - None, - None - ).unwrap(); + let range_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + Some(1500), + Some(2500), + None, + None, + ) + .unwrap(); assert_eq!(range_results.len(), 1); assert_eq!(range_results[0].get_timestamp(), 2000); @@ -969,19 +1237,25 @@ mod tests { // Add tags db.add_tags(&onchain1.id, &["payment".to_string()]).unwrap(); - db.add_tags(&onchain2.id, &["payment".to_string(), "important".to_string()]).unwrap(); + db.add_tags( + &onchain2.id, + &["payment".to_string(), "important".to_string()], + ) + .unwrap(); // Test combined filters - let results = db.get_activities( - Some(ActivityFilter::Onchain), - Some(PaymentType::Received), - Some(vec!["payment".to_string()]), - Some("abc".to_string()), - Some(1500), - Some(2500), - Some(1), - Some(SortDirection::Desc) - ).unwrap(); + let results = db + .get_activities( + Some(ActivityFilter::Onchain), + Some(PaymentType::Received), + Some(vec!["payment".to_string()]), + Some("abc".to_string()), + Some(1500), + Some(2500), + Some(1), + Some(SortDirection::Desc), + ) + .unwrap(); assert_eq!(results.len(), 1); if let Activity::Onchain(activity) = &results[0] { @@ -1004,29 +1278,33 @@ mod tests { db.insert_onchain_activity(&activity).unwrap(); // Test empty search string - should return all results, same as if no search was provided - let empty_search = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("".to_string()), - None, - None, - None, - None - ).unwrap(); + let empty_search = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(empty_search.len(), 1); // Changed from 0 to 1 // Test empty tags array - let empty_tags = db.get_activities( - 
Some(ActivityFilter::All), - None, - Some(vec![]), - None, - None, - None, - None, - None - ).unwrap(); + let empty_tags = db + .get_activities( + Some(ActivityFilter::All), + None, + Some(vec![]), + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(empty_tags.len(), 1); cleanup(&db_path); @@ -1048,34 +1326,41 @@ mod tests { db.insert_onchain_activity(&activity3).unwrap(); // Add different tag combinations - db.add_tags(&activity1.id, &["tag1".to_string(), "tag2".to_string()]).unwrap(); - db.add_tags(&activity2.id, &["tag2".to_string(), "tag3".to_string()]).unwrap(); - db.add_tags(&activity3.id, &["tag1".to_string(), "tag3".to_string()]).unwrap(); + db.add_tags(&activity1.id, &["tag1".to_string(), "tag2".to_string()]) + .unwrap(); + db.add_tags(&activity2.id, &["tag2".to_string(), "tag3".to_string()]) + .unwrap(); + db.add_tags(&activity3.id, &["tag1".to_string(), "tag3".to_string()]) + .unwrap(); // Test filtering with multiple tags (OR condition) - let results = db.get_activities( - Some(ActivityFilter::All), - None, - Some(vec!["tag1".to_string(), "tag2".to_string()]), - None, - None, - None, - None, - None - ).unwrap(); + let results = db + .get_activities( + Some(ActivityFilter::All), + None, + Some(vec!["tag1".to_string(), "tag2".to_string()]), + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(results.len(), 3); // Test with non-existent tag mixed with existing tags - let mixed_results = db.get_activities( - Some(ActivityFilter::All), - None, - Some(vec!["tag1".to_string(), "nonexistent".to_string()]), - None, - None, - None, - None, - None - ).unwrap(); + let mixed_results = db + .get_activities( + Some(ActivityFilter::All), + None, + Some(vec!["tag1".to_string(), "nonexistent".to_string()]), + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(mixed_results.len(), 2); cleanup(&db_path); @@ -1089,29 +1374,33 @@ mod tests { db.insert_onchain_activity(&activity).unwrap(); // Test max date before min date - let 
invalid_range = db.get_activities( - Some(ActivityFilter::All), - None, - None, - None, - Some(2000), - Some(1000), - None, - None - ).unwrap(); + let invalid_range = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + Some(2000), + Some(1000), + None, + None, + ) + .unwrap(); assert_eq!(invalid_range.len(), 0); // Test dates way in the future - let future_date = db.get_activities( - Some(ActivityFilter::All), - None, - None, - None, - Some(u64::MAX - 1000), - None, - None, - None - ).unwrap(); + let future_date = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + None, + Some(u64::MAX - 1000), + None, + None, + None, + ) + .unwrap(); assert_eq!(future_date.len(), 0); cleanup(&db_path); @@ -1126,42 +1415,48 @@ mod tests { db.insert_lightning_activity(&lightning).unwrap(); // Test lowercase search - let lower_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("coffee".to_string()), - None, - None, - None, - None - ).unwrap(); + let lower_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("coffee".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(lower_results.len(), 1); // Test uppercase search - let upper_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("COFFEE".to_string()), - None, - None, - None, - None - ).unwrap(); + let upper_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("COFFEE".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(upper_results.len(), 1); // Test mixed case search - let mixed_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("CoFfEe".to_string()), - None, - None, - None, - None - ).unwrap(); + let mixed_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("CoFfEe".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(mixed_results.len(), 1); 
cleanup(&db_path); @@ -1177,19 +1472,23 @@ mod tests { // Add tags from both connections db.add_tags(&activity.id, &["tag1".to_string()]).unwrap(); - db_clone.add_tags(&activity.id, &["tag2".to_string()]).unwrap(); + db_clone + .add_tags(&activity.id, &["tag2".to_string()]) + .unwrap(); // Verify tags from both connections - let results = db.get_activities( - Some(ActivityFilter::All), - None, - Some(vec!["tag1".to_string(), "tag2".to_string()]), - None, - None, - None, - None, - None - ).unwrap(); + let results = db + .get_activities( + Some(ActivityFilter::All), + None, + Some(vec!["tag1".to_string(), "tag2".to_string()]), + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(results.len(), 1); cleanup(&db_path); @@ -1209,29 +1508,33 @@ mod tests { db.insert_lightning_activity(&lightning).unwrap(); // Search with special characters - let special_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("%chars".to_string()), - None, - None, - None, - None - ).unwrap(); + let special_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("%chars".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(special_results.len(), 1); // Search with underscore - let underscore_results = db.get_activities( - Some(ActivityFilter::All), - None, - None, - Some("_special".to_string()), - None, - None, - None, - None - ).unwrap(); + let underscore_results = db + .get_activities( + Some(ActivityFilter::All), + None, + None, + Some("_special".to_string()), + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(underscore_results.len(), 1); cleanup(&db_path); @@ -1256,30 +1559,34 @@ mod tests { } // Test pagination with combined filters - let page1 = db.get_activities( - Some(ActivityFilter::All), - None, - Some(vec!["even".to_string()]), - Some("address".to_string()), - Some(1000), - None, - Some(2), - Some(SortDirection::Asc) - ).unwrap(); + let page1 = db + .get_activities( + 
Some(ActivityFilter::All), + None, + Some(vec!["even".to_string()]), + Some("address".to_string()), + Some(1000), + None, + Some(2), + Some(SortDirection::Asc), + ) + .unwrap(); assert_eq!(page1.len(), 2); // Get next page let min_date = page1.last().unwrap().get_timestamp(); - let page2 = db.get_activities( - Some(ActivityFilter::All), - None, - Some(vec!["even".to_string()]), - Some("address".to_string()), - Some(min_date + 1), - None, - Some(2), - Some(SortDirection::Asc) - ).unwrap(); + let page2 = db + .get_activities( + Some(ActivityFilter::All), + None, + Some(vec!["even".to_string()]), + Some("address".to_string()), + Some(min_date + 1), + None, + Some(2), + Some(SortDirection::Asc), + ) + .unwrap(); assert_eq!(page2.len(), 1); assert!(page2[0].get_timestamp() > page1[1].get_timestamp()); @@ -1300,8 +1607,13 @@ mod tests { db.insert_onchain_activity(&activity2).unwrap(); // Add various tags - db.add_tags(&activity1.id, &["payment".to_string(), "coffee".to_string()]).unwrap(); - db.add_tags(&activity2.id, &["payment".to_string(), "food".to_string()]).unwrap(); + db.add_tags( + &activity1.id, + &["payment".to_string(), "coffee".to_string()], + ) + .unwrap(); + db.add_tags(&activity2.id, &["payment".to_string(), "food".to_string()]) + .unwrap(); // Get all unique tags let all_tags = db.get_all_unique_tags().unwrap(); @@ -1378,12 +1690,10 @@ mod tests { db.insert_onchain_activity(&activity).unwrap(); // First upsert - let activity_tags = vec![ - ActivityTags { - activity_id: activity.id.clone(), - tags: vec!["payment".to_string(), "coffee".to_string()], - }, - ]; + let activity_tags = vec![ActivityTags { + activity_id: activity.id.clone(), + tags: vec!["payment".to_string(), "coffee".to_string()], + }]; assert!(db.upsert_tags(&activity_tags).is_ok()); // Second upsert with same tags (should be idempotent) @@ -1408,12 +1718,14 @@ mod tests { db.add_tags(&activity.id, &["payment".to_string()]).unwrap(); // Upsert with additional tags (adds new tags, keeps 
existing) - let activity_tags = vec![ - ActivityTags { - activity_id: activity.id.clone(), - tags: vec!["payment".to_string(), "coffee".to_string(), "food".to_string()], - }, - ]; + let activity_tags = vec![ActivityTags { + activity_id: activity.id.clone(), + tags: vec![ + "payment".to_string(), + "coffee".to_string(), + "food".to_string(), + ], + }]; assert!(db.upsert_tags(&activity_tags).is_ok()); // Verify all tags are present (payment was already there, coffee and food are new) @@ -1435,12 +1747,10 @@ mod tests { db.insert_onchain_activity(&activity).unwrap(); // Upsert with empty tags mixed in - let activity_tags = vec![ - ActivityTags { - activity_id: activity.id.clone(), - tags: vec!["payment".to_string(), "".to_string(), "coffee".to_string()], - }, - ]; + let activity_tags = vec![ActivityTags { + activity_id: activity.id.clone(), + tags: vec!["payment".to_string(), "".to_string(), "coffee".to_string()], + }]; assert!(db.upsert_tags(&activity_tags).is_ok()); // Verify only non-empty tags were added @@ -1522,8 +1832,10 @@ mod tests { db.insert_lightning_activity(&lightning).unwrap(); // Add tags - db.add_tags(&onchain.id, &["payment".to_string(), "coffee".to_string()]).unwrap(); - db.add_tags(&lightning.id, &["payment".to_string()]).unwrap(); + db.add_tags(&onchain.id, &["payment".to_string(), "coffee".to_string()]) + .unwrap(); + db.add_tags(&lightning.id, &["payment".to_string()]) + .unwrap(); // Get all activity tags let activity_tags = db.get_all_activities_tags().unwrap(); @@ -1531,14 +1843,20 @@ mod tests { assert_eq!(activity_tags.len(), 2); // Find onchain tags - let onchain_tags = activity_tags.iter().find(|at| at.activity_id == onchain.id).unwrap(); + let onchain_tags = activity_tags + .iter() + .find(|at| at.activity_id == onchain.id) + .unwrap(); assert_eq!(onchain_tags.activity_id, onchain.id); assert_eq!(onchain_tags.tags.len(), 2); assert!(onchain_tags.tags.contains(&"payment".to_string())); 
assert!(onchain_tags.tags.contains(&"coffee".to_string())); // Find lightning tags - let lightning_tags = activity_tags.iter().find(|at| at.activity_id == lightning.id).unwrap(); + let lightning_tags = activity_tags + .iter() + .find(|at| at.activity_id == lightning.id) + .unwrap(); assert_eq!(lightning_tags.activity_id, lightning.id); assert_eq!(lightning_tags.tags.len(), 1); assert!(lightning_tags.tags.contains(&"payment".to_string())); @@ -1566,12 +1884,10 @@ mod tests { db.add_tags(&activity.id, &["old_tag".to_string()]).unwrap(); // Upsert with empty tags (with INSERT OR IGNORE, won't clear existing tags) - let activity_tags = vec![ - ActivityTags { - activity_id: activity.id.clone(), - tags: vec![], - }, - ]; + let activity_tags = vec![ActivityTags { + activity_id: activity.id.clone(), + tags: vec![], + }]; assert!(db.upsert_tags(&activity_tags).is_ok()); @@ -1597,12 +1913,10 @@ mod tests { let (mut db, db_path) = setup(); // Test with empty activity_id - let activity_tags = vec![ - ActivityTags { - activity_id: "".to_string(), - tags: vec!["payment".to_string()], - }, - ]; + let activity_tags = vec![ActivityTags { + activity_id: "".to_string(), + tags: vec!["payment".to_string()], + }]; assert!(db.upsert_tags(&activity_tags).is_err()); @@ -1628,10 +1942,17 @@ mod tests { db.insert_lightning_activity(&activity4).unwrap(); // Add tags - db.add_tags(&activity1.id, &["payment".to_string()]).unwrap(); - db.add_tags(&activity2.id, &["invoice".to_string()]).unwrap(); - db.add_tags(&activity3.id, &["transfer".to_string()]).unwrap(); - db.add_tags(&activity4.id, &["payment".to_string(), "invoice".to_string()]).unwrap(); + db.add_tags(&activity1.id, &["payment".to_string()]) + .unwrap(); + db.add_tags(&activity2.id, &["invoice".to_string()]) + .unwrap(); + db.add_tags(&activity3.id, &["transfer".to_string()]) + .unwrap(); + db.add_tags( + &activity4.id, + &["payment".to_string(), "invoice".to_string()], + ) + .unwrap(); // Insert closed channels let mut channel1 = 
create_test_closed_channel(); @@ -1642,7 +1963,9 @@ mod tests { db.upsert_closed_channel(&channel2).unwrap(); // Verify data exists - let activities = db.get_activities(None, None, None, None, None, None, None, None).unwrap(); + let activities = db + .get_activities(None, None, None, None, None, None, None, None) + .unwrap(); assert_eq!(activities.len(), 4); let tags = db.get_all_unique_tags().unwrap(); assert_eq!(tags.len(), 3); @@ -1653,7 +1976,9 @@ mod tests { db.wipe_all().unwrap(); // Verify everything is deleted - let activities_after = db.get_activities(None, None, None, None, None, None, None, None).unwrap(); + let activities_after = db + .get_activities(None, None, None, None, None, None, None, None) + .unwrap(); assert_eq!(activities_after.len(), 0); let tags_after = db.get_all_unique_tags().unwrap(); assert_eq!(tags_after.len(), 0); @@ -1663,7 +1988,9 @@ mod tests { // Verify we can still insert new data after wipe let new_activity = create_test_onchain_activity(); db.insert_onchain_activity(&new_activity).unwrap(); - let activities_new = db.get_activities(None, None, None, None, None, None, None, None).unwrap(); + let activities_new = db + .get_activities(None, None, None, None, None, None, None, None) + .unwrap(); assert_eq!(activities_new.len(), 1); cleanup(&db_path); @@ -1673,7 +2000,7 @@ mod tests { fn test_insert_and_retrieve_closed_channel() { let (mut db, db_path) = setup(); let channel = create_test_closed_channel(); - + // Insert closed channel assert!(db.upsert_closed_channel(&channel).is_ok()); @@ -1681,21 +2008,51 @@ mod tests { let retrieved = db.get_closed_channel_by_id(&channel.channel_id).unwrap(); assert!(retrieved.is_some()); let retrieved_channel = retrieved.unwrap(); - + assert_eq!(retrieved_channel.channel_id, channel.channel_id); - assert_eq!(retrieved_channel.counterparty_node_id, channel.counterparty_node_id); + assert_eq!( + retrieved_channel.counterparty_node_id, + channel.counterparty_node_id + ); 
assert_eq!(retrieved_channel.funding_txo_txid, channel.funding_txo_txid); - assert_eq!(retrieved_channel.funding_txo_index, channel.funding_txo_index); - assert_eq!(retrieved_channel.channel_value_sats, channel.channel_value_sats); + assert_eq!( + retrieved_channel.funding_txo_index, + channel.funding_txo_index + ); + assert_eq!( + retrieved_channel.channel_value_sats, + channel.channel_value_sats + ); assert_eq!(retrieved_channel.closed_at, channel.closed_at); - assert_eq!(retrieved_channel.outbound_capacity_msat, channel.outbound_capacity_msat); - assert_eq!(retrieved_channel.inbound_capacity_msat, channel.inbound_capacity_msat); - assert_eq!(retrieved_channel.counterparty_unspendable_punishment_reserve, channel.counterparty_unspendable_punishment_reserve); - assert_eq!(retrieved_channel.unspendable_punishment_reserve, channel.unspendable_punishment_reserve); - assert_eq!(retrieved_channel.forwarding_fee_proportional_millionths, channel.forwarding_fee_proportional_millionths); - assert_eq!(retrieved_channel.forwarding_fee_base_msat, channel.forwarding_fee_base_msat); + assert_eq!( + retrieved_channel.outbound_capacity_msat, + channel.outbound_capacity_msat + ); + assert_eq!( + retrieved_channel.inbound_capacity_msat, + channel.inbound_capacity_msat + ); + assert_eq!( + retrieved_channel.counterparty_unspendable_punishment_reserve, + channel.counterparty_unspendable_punishment_reserve + ); + assert_eq!( + retrieved_channel.unspendable_punishment_reserve, + channel.unspendable_punishment_reserve + ); + assert_eq!( + retrieved_channel.forwarding_fee_proportional_millionths, + channel.forwarding_fee_proportional_millionths + ); + assert_eq!( + retrieved_channel.forwarding_fee_base_msat, + channel.forwarding_fee_base_msat + ); assert_eq!(retrieved_channel.channel_name, channel.channel_name); - assert_eq!(retrieved_channel.channel_closure_reason, channel.channel_closure_reason); + assert_eq!( + retrieved_channel.channel_closure_reason, + channel.channel_closure_reason 
+ ); cleanup(&db_path); } @@ -1729,7 +2086,9 @@ mod tests { assert_eq!(all_channels[2].channel_id, "channel1"); // Oldest (1000) // Get all channels, ascending sort - let all_channels_asc = db.get_all_closed_channels(Some(SortDirection::Asc)).unwrap(); + let all_channels_asc = db + .get_all_closed_channels(Some(SortDirection::Asc)) + .unwrap(); assert_eq!(all_channels_asc.len(), 3); assert_eq!(all_channels_asc[0].channel_id, "channel1"); // Oldest first assert_eq!(all_channels_asc[1].channel_id, "channel3"); @@ -1764,9 +2123,9 @@ mod tests { fn test_remove_closed_channel_by_id() { let (mut db, db_path) = setup(); let channel = create_test_closed_channel(); - + db.upsert_closed_channel(&channel).unwrap(); - + // Verify it exists let retrieved = db.get_closed_channel_by_id(&channel.channel_id).unwrap(); assert!(retrieved.is_some()); @@ -1844,7 +2203,10 @@ mod tests { assert_eq!(all.len(), 5); for i in 1..=5 { let id = format!("bulk_channel_{}", i); - let ch = all.iter().find(|c| c.channel_id == id).expect("missing channel"); + let ch = all + .iter() + .find(|c| c.channel_id == id) + .expect("missing channel"); assert_eq!(ch.channel_value_sats, 1_000_000 * i as u64); } @@ -1857,11 +2219,20 @@ mod tests { // Verify updates applied let after = db.get_all_closed_channels(None).unwrap(); - let c1 = after.iter().find(|c| c.channel_id == "bulk_channel_1").unwrap(); + let c1 = after + .iter() + .find(|c| c.channel_id == "bulk_channel_1") + .unwrap(); assert_eq!(c1.channel_value_sats, 9_999_999); - let c2 = after.iter().find(|c| c.channel_id == "bulk_channel_2").unwrap(); + let c2 = after + .iter() + .find(|c| c.channel_id == "bulk_channel_2") + .unwrap(); assert_eq!(c2.channel_name, "Updated Name"); - let c3 = after.iter().find(|c| c.channel_id == "bulk_channel_3").unwrap(); + let c3 = after + .iter() + .find(|c| c.channel_id == "bulk_channel_3") + .unwrap(); assert_eq!(c3.forwarding_fee_base_msat, 777); cleanup(&db_path); @@ -1892,7 +2263,18 @@ mod tests { 
assert!(db.upsert_onchain_activities(&acts).is_ok()); - let all = db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, None, None).unwrap(); + let all = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert_eq!(all.len(), 5); let mut updated = acts.clone(); @@ -1902,11 +2284,25 @@ updated[3].is_boosted = true; assert!(db.upsert_onchain_activities(&updated).is_ok()); - let after = db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, None, None).unwrap(); - let map: std::collections::HashMap<String, OnchainActivity> = after.into_iter().map(|a| match a { - Activity::Onchain(o) => (o.id.clone(), o), - _ => unreachable!(), - }).collect(); + let after = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); + let map: std::collections::HashMap<String, OnchainActivity> = after + .into_iter() + .map(|a| match a { + Activity::Onchain(o) => (o.id.clone(), o), + _ => unreachable!(), + }) + .collect(); assert_eq!(map["onchain_bulk_0"].value, 999_999); assert_eq!(map["onchain_bulk_1"].fee, 42); assert_eq!(map["onchain_bulk_2"].fee_rate, 7); @@ -1919,7 +2315,18 @@ fn test_upsert_onchain_activities_empty() { let (mut db, db_path) = setup(); assert!(db.upsert_onchain_activities(&[]).is_ok()); - let all = db.get_activities(Some(ActivityFilter::Onchain), None, None, None, None, None, None, None).unwrap(); + let all = db + .get_activities( + Some(ActivityFilter::Onchain), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert!(all.is_empty()); cleanup(&db_path); } @@ -1940,7 +2347,18 @@ assert!(db.upsert_lightning_activities(&acts).is_ok()); - let all = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, None, None).unwrap(); + let all = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + None, +
None, + ) + .unwrap(); assert_eq!(all.len(), 5); let mut updated = acts.clone(); @@ -1950,11 +2368,25 @@ updated[3].message = "updated".to_string(); assert!(db.upsert_lightning_activities(&updated).is_ok()); - let after = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, None, None).unwrap(); - let map: std::collections::HashMap<String, LightningActivity> = after.into_iter().map(|a| match a { - Activity::Lightning(l) => (l.id.clone(), l), - _ => unreachable!(), - }).collect(); + let after = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); + let map: std::collections::HashMap<String, LightningActivity> = after + .into_iter() + .map(|a| match a { + Activity::Lightning(l) => (l.id.clone(), l), + _ => unreachable!(), + }) + .collect(); assert_eq!(map["lightning_bulk_0"].value, 55); assert_eq!(map["lightning_bulk_1"].status, PaymentState::Failed); assert_eq!(map["lightning_bulk_2"].fee, Some(0)); @@ -1967,7 +2399,18 @@ fn test_upsert_lightning_activities_empty() { let (mut db, db_path) = setup(); assert!(db.upsert_lightning_activities(&[]).is_ok()); - let all = db.get_activities(Some(ActivityFilter::Lightning), None, None, None, None, None, None, None).unwrap(); + let all = db + .get_activities( + Some(ActivityFilter::Lightning), + None, + None, + None, + None, + None, + None, + None, + ) + .unwrap(); assert!(all.is_empty()); cleanup(&db_path); } @@ -1980,7 +2423,8 @@ let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string(), "coffee".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2004,7 +2448,13 @@ let payment_hash =
"test_lightning_1".to_string(); let tags = vec!["invoice".to_string(), "payment".to_string()]; - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(payment_hash.clone(), ActivityType::Lightning, tags.clone())).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + payment_hash.clone(), + ActivityType::Lightning, + tags.clone() + )) + .is_ok()); // Verify tags are transferred when activity is received let mut activity = create_test_lightning_activity(); @@ -2025,7 +2475,11 @@ mod tests { let (mut db, db_path) = setup(); let tags = vec!["payment".to_string()]; - let result = db.add_pre_activity_metadata(&create_test_pre_activity_metadata("".to_string(), ActivityType::Onchain, tags)); + let result = db.add_pre_activity_metadata(&create_test_pre_activity_metadata( + "".to_string(), + ActivityType::Onchain, + tags, + )); assert!(result.is_err()); cleanup(&db_path); @@ -2037,10 +2491,12 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata1 = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata1 = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata1.address = Some(address.clone()); metadata1.is_receive = true; - let mut metadata2 = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata2 = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata2.address = Some(address.clone()); metadata2.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata1).is_ok()); @@ -2066,7 +2522,11 @@ mod tests { let payment_id2 = "payment_id_2".to_string(); // Add metadata with payment_id1 and address - let mut metadata1 = create_test_pre_activity_metadata(payment_id1.clone(), ActivityType::Onchain, vec!["tag1".to_string()]); + let mut metadata1 = 
create_test_pre_activity_metadata( + payment_id1.clone(), + ActivityType::Onchain, + vec!["tag1".to_string()], + ); metadata1.address = Some(address.clone()); metadata1.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata1).is_ok()); @@ -2079,7 +2539,11 @@ mod tests { assert_eq!(result_by_address1.unwrap().payment_id, payment_id1); // Add metadata with payment_id2 and same address (should replace metadata1) - let mut metadata2 = create_test_pre_activity_metadata(payment_id2.clone(), ActivityType::Onchain, vec!["tag2".to_string()]); + let mut metadata2 = create_test_pre_activity_metadata( + payment_id2.clone(), + ActivityType::Onchain, + vec!["tag2".to_string()], + ); metadata2.address = Some(address.clone()); metadata2.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata2).is_ok()); @@ -2105,10 +2569,18 @@ mod tests { let (mut db, db_path) = setup(); let address = "bc1qtest123".to_string(); - let mut metadata1 = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, vec!["tag1".to_string()]); + let mut metadata1 = create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + vec!["tag1".to_string()], + ); metadata1.address = Some(address.clone()); metadata1.is_receive = true; - let mut metadata2 = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, vec!["tag2".to_string(), "tag3".to_string()]); + let mut metadata2 = create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + vec!["tag2".to_string(), "tag3".to_string()], + ); metadata2.address = Some(address.clone()); metadata2.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata1).is_ok()); @@ -2133,10 +2605,18 @@ mod tests { let address = "bc1qtest123".to_string(); // Add initial metadata with one tag - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, vec!["tag1".to_string()])).is_ok()); + assert!(db + 
.add_pre_activity_metadata(&create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + vec!["tag1".to_string()] + )) + .is_ok()); // Add more tags to existing metadata - assert!(db.add_pre_activity_metadata_tags(&address, &["tag2".to_string(), "tag3".to_string()]).is_ok()); + assert!(db + .add_pre_activity_metadata_tags(&address, &["tag2".to_string(), "tag3".to_string()]) + .is_ok()); // Verify all tags are present let all_metadata = db.get_all_pre_activity_metadata().unwrap(); @@ -2148,14 +2628,19 @@ mod tests { assert!(metadata.tags.contains(&"tag3".to_string())); // Add duplicate tag (should not add duplicate) - assert!(db.add_pre_activity_metadata_tags(&address, &["tag2".to_string()]).is_ok()); + assert!(db + .add_pre_activity_metadata_tags(&address, &["tag2".to_string()]) + .is_ok()); // Verify no duplicate was added let all_metadata_after = db.get_all_pre_activity_metadata().unwrap(); assert_eq!(all_metadata_after.len(), 1); let metadata_after = &all_metadata_after[0]; assert_eq!(metadata_after.tags.len(), 3); - assert_eq!(metadata_after.tags.iter().filter(|t| *t == "tag2").count(), 1); + assert_eq!( + metadata_after.tags.iter().filter(|t| *t == "tag2").count(), + 1 + ); cleanup(&db_path); } @@ -2178,11 +2663,14 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["tag1".to_string(), "tag2".to_string(), "tag3".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); assert!(db.add_pre_activity_metadata(&metadata).is_ok()); - assert!(db.remove_pre_activity_metadata_tags(&address, &["tag2".to_string()]).is_ok()); + assert!(db + .remove_pre_activity_metadata_tags(&address, &["tag2".to_string()]) + .is_ok()); let mut activity = create_test_onchain_activity(); activity.address = address.clone(); @@ -2202,13 
+2690,21 @@ mod tests { fn test_remove_pre_activity_metadata_multiple() { let (mut db, db_path) = setup(); let address = "bc1qtest123".to_string(); - let tags = vec!["tag1".to_string(), "tag2".to_string(), "tag3".to_string(), "tag4".to_string()]; + let tags = vec![ + "tag1".to_string(), + "tag2".to_string(), + "tag3".to_string(), + "tag4".to_string(), + ]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); assert!(db.add_pre_activity_metadata(&metadata).is_ok()); - assert!(db.remove_pre_activity_metadata_tags(&address, &["tag1".to_string(), "tag3".to_string()]).is_ok()); + assert!(db + .remove_pre_activity_metadata_tags(&address, &["tag1".to_string(), "tag3".to_string()]) + .is_ok()); let mut activity = create_test_onchain_activity(); activity.address = address.clone(); @@ -2229,7 +2725,9 @@ mod tests { let address = "bc1qtest123".to_string(); // Try to remove tags that don't exist (should not error) - assert!(db.remove_pre_activity_metadata_tags(&address, &["nonexistent".to_string()]).is_ok()); + assert!(db + .remove_pre_activity_metadata_tags(&address, &["nonexistent".to_string()]) + .is_ok()); cleanup(&db_path); } @@ -2241,7 +2739,13 @@ mod tests { let tags = vec!["tag1".to_string(), "tag2".to_string(), "tag3".to_string()]; // Add tags - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone())).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + tags.clone() + )) + .is_ok()); // Reset all tags assert!(db.reset_pre_activity_metadata_tags(&address).is_ok()); @@ -2276,7 +2780,13 @@ mod tests { let tags = vec!["tag1".to_string(), "tag2".to_string(), "tag3".to_string()]; // Add tags - 
assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone())).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + tags.clone() + )) + .is_ok()); // Verify metadata exists let all_metadata = db.get_all_pre_activity_metadata().unwrap(); @@ -2307,7 +2817,8 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2331,7 +2842,8 @@ mod tests { let tx_id = "txid123".to_string(); let tags = vec!["sent_payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(tx_id.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(tx_id.clone(), ActivityType::Onchain, tags.clone()); metadata.tx_id = Some(tx_id.clone()); assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2354,7 +2866,8 @@ mod tests { let metadata_address = "bc1qmetadata456".to_string(); let tags = vec!["sent_payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(tx_id.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(tx_id.clone(), ActivityType::Onchain, tags.clone()); metadata.tx_id = Some(tx_id.clone()); metadata.address = Some(metadata_address.clone()); assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2385,7 +2898,8 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + 
let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; metadata.fee_rate = 10; @@ -2413,7 +2927,8 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; metadata.is_transfer = true; @@ -2442,7 +2957,8 @@ mod tests { let channel_id = "channel_abc123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; metadata.channel_id = Some(channel_id.clone()); @@ -2471,7 +2987,8 @@ mod tests { let channel_id = "channel_xyz789".to_string(); let tags = vec!["payment".to_string(), "transfer".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; metadata.fee_rate = 15; @@ -2510,7 +3027,8 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; metadata.fee_rate = 0; @@ 
-2538,7 +3056,8 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["payment".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; metadata.is_transfer = false; @@ -2567,7 +3086,13 @@ mod tests { let tags = vec!["sent_invoice".to_string()]; // Add pre-activity metadata using payment hash - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(payment_hash.clone(), ActivityType::Lightning, tags.clone())).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + payment_hash.clone(), + ActivityType::Lightning, + tags.clone() + )) + .is_ok()); // Insert sent lightning activity (should transfer tags based on payment hash) let mut sent_activity = create_test_lightning_activity(); @@ -2588,7 +3113,8 @@ mod tests { let address = "bc1qtest123".to_string(); let tags = vec!["tag1".to_string(), "tag2".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); metadata.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2619,7 +3145,13 @@ mod tests { let payment_hash = "test_lightning_received_1".to_string(); let tags = vec!["invoice".to_string(), "payment".to_string()]; - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(payment_hash.clone(), ActivityType::Lightning, tags.clone())).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + payment_hash.clone(), + ActivityType::Lightning, + tags.clone() + )) + .is_ok()); let mut activity = 
create_test_lightning_activity(); activity.id = payment_hash.clone(); @@ -2641,7 +3173,8 @@ mod tests { let ln_payment_hash = "ln_payment_hash_abc123".to_string(); let tags = vec!["payment".to_string(), "coffee".to_string()]; - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = + create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone()); metadata.address = Some(address.clone()); assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2665,10 +3198,18 @@ mod tests { let address1 = "bc1qtest123".to_string(); let address2 = "bc1qtest456".to_string(); - let mut metadata1 = create_test_pre_activity_metadata(address1.clone(), ActivityType::Onchain, vec!["tag1".to_string()]); + let mut metadata1 = create_test_pre_activity_metadata( + address1.clone(), + ActivityType::Onchain, + vec!["tag1".to_string()], + ); metadata1.address = Some(address1.clone()); metadata1.is_receive = true; - let mut metadata2 = create_test_pre_activity_metadata(address2.clone(), ActivityType::Onchain, vec!["tag2".to_string()]); + let mut metadata2 = create_test_pre_activity_metadata( + address2.clone(), + ActivityType::Onchain, + vec!["tag2".to_string()], + ); metadata2.address = Some(address2.clone()); metadata2.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata1).is_ok()); @@ -2704,10 +3245,18 @@ mod tests { let address = "bc1qtest123".to_string(); let payment_hash = "test_lightning_separate_1".to_string(); - let mut metadata1 = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, vec!["onchain_tag".to_string()]); + let mut metadata1 = create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + vec!["onchain_tag".to_string()], + ); metadata1.address = Some(address.clone()); metadata1.is_receive = true; - let metadata2 = create_test_pre_activity_metadata(payment_hash.clone(), ActivityType::Lightning, 
vec!["lightning_tag".to_string()]); + let metadata2 = create_test_pre_activity_metadata( + payment_hash.clone(), + ActivityType::Lightning, + vec!["lightning_tag".to_string()], + ); assert!(db.add_pre_activity_metadata(&metadata1).is_ok()); assert!(db.add_pre_activity_metadata(&metadata2).is_ok()); @@ -2741,7 +3290,13 @@ mod tests { let address = "bc1qtest123".to_string(); // Add empty tags (should be allowed, but won't transfer anything meaningful) - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, vec![])).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + vec![] + )) + .is_ok()); // Insert received activity let mut activity = create_test_onchain_activity(); @@ -2761,7 +3316,11 @@ mod tests { let (mut db, db_path) = setup(); let address = "bc1qtest123".to_string(); - let mut metadata = create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, vec!["receiving_tag".to_string()]); + let mut metadata = create_test_pre_activity_metadata( + address.clone(), + ActivityType::Onchain, + vec!["receiving_tag".to_string()], + ); metadata.address = Some(address.clone()); metadata.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2772,7 +3331,8 @@ mod tests { db.insert_onchain_activity(&activity).unwrap(); // Add regular tags to the same activity - db.add_tags(&activity.id, &["regular_tag".to_string()]).unwrap(); + db.add_tags(&activity.id, &["regular_tag".to_string()]) + .unwrap(); // Verify both types of tags are present let activity_tags = db.get_tags(&activity.id).unwrap(); @@ -2794,7 +3354,13 @@ mod tests { assert!(result.is_none()); // Add pre-activity metadata - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address.clone(), ActivityType::Onchain, tags.clone())).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + 
address.clone(), + ActivityType::Onchain, + tags.clone() + )) + .is_ok()); // Get existing metadata let metadata = db.get_pre_activity_metadata(&address, false).unwrap(); @@ -2816,7 +3382,11 @@ mod tests { let tags = vec!["tag1".to_string(), "tag2".to_string()]; // Add pre-activity metadata with address - let mut metadata = create_test_pre_activity_metadata(payment_id.clone(), ActivityType::Onchain, tags.clone()); + let mut metadata = create_test_pre_activity_metadata( + payment_id.clone(), + ActivityType::Onchain, + tags.clone(), + ); metadata.address = Some(address.clone()); metadata.is_receive = true; assert!(db.add_pre_activity_metadata(&metadata).is_ok()); @@ -2849,22 +3419,46 @@ mod tests { let invoice = "lightning:invoice123".to_string(); // Add pre-activity metadata for multiple identifiers - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address1.clone(), ActivityType::Onchain, vec!["tag1".to_string(), "tag2".to_string()])).is_ok()); - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(address2.clone(), ActivityType::Onchain, vec!["tag3".to_string()])).is_ok()); - assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata(invoice.clone(), ActivityType::Lightning, vec!["tag4".to_string(), "tag5".to_string()])).is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + address1.clone(), + ActivityType::Onchain, + vec!["tag1".to_string(), "tag2".to_string()] + )) + .is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + address2.clone(), + ActivityType::Onchain, + vec!["tag3".to_string()] + )) + .is_ok()); + assert!(db + .add_pre_activity_metadata(&create_test_pre_activity_metadata( + invoice.clone(), + ActivityType::Lightning, + vec!["tag4".to_string(), "tag5".to_string()] + )) + .is_ok()); // Get all pre-activity metadata let all_tags = db.get_all_pre_activity_metadata().unwrap(); assert_eq!(all_tags.len(), 3); // Find tags for 
address1 - let addr1_tags = all_tags.iter().find(|rt| rt.payment_id == address1).unwrap(); + let addr1_tags = all_tags + .iter() + .find(|rt| rt.payment_id == address1) + .unwrap(); assert_eq!(addr1_tags.tags.len(), 2); assert!(addr1_tags.tags.contains(&"tag1".to_string())); assert!(addr1_tags.tags.contains(&"tag2".to_string())); // Find tags for address2 - let addr2_tags = all_tags.iter().find(|rt| rt.payment_id == address2).unwrap(); + let addr2_tags = all_tags + .iter() + .find(|rt| rt.payment_id == address2) + .unwrap(); assert_eq!(addr2_tags.tags.len(), 1); assert!(addr2_tags.tags.contains(&"tag3".to_string())); @@ -2932,7 +3526,9 @@ mod tests { ]; // Upsert pre-activity metadata - assert!(db.upsert_pre_activity_metadata(&pre_activity_metadata).is_ok()); + assert!(db + .upsert_pre_activity_metadata(&pre_activity_metadata) + .is_ok()); // Verify tags were added let all_tags = db.get_all_pre_activity_metadata().unwrap(); @@ -2956,24 +3552,26 @@ mod tests { fn test_upsert_pre_activity_metadata_idempotent() { let (mut db, db_path) = setup(); - let pre_activity_metadata = vec![ - PreActivityMetadata { - payment_id: "bc1qtest123".to_string(), - tags: vec!["tag1".to_string(), "tag2".to_string()], - payment_hash: None, - tx_id: None, - address: None, - is_receive: false, - fee_rate: 0, - is_transfer: false, - channel_id: None, - created_at: 0, - }, - ]; + let pre_activity_metadata = vec![PreActivityMetadata { + payment_id: "bc1qtest123".to_string(), + tags: vec!["tag1".to_string(), "tag2".to_string()], + payment_hash: None, + tx_id: None, + address: None, + is_receive: false, + fee_rate: 0, + is_transfer: false, + channel_id: None, + created_at: 0, + }]; // Upsert twice (should be idempotent) - assert!(db.upsert_pre_activity_metadata(&pre_activity_metadata).is_ok()); - assert!(db.upsert_pre_activity_metadata(&pre_activity_metadata).is_ok()); + assert!(db + .upsert_pre_activity_metadata(&pre_activity_metadata) + .is_ok()); + assert!(db + 
.upsert_pre_activity_metadata(&pre_activity_metadata)
+            .is_ok());
 
         // Verify tags are still there
         let all_tags = db.get_all_pre_activity_metadata().unwrap();
@@ -2990,27 +3588,31 @@ mod tests {
     fn test_upsert_pre_activity_metadata_updates_existing() {
         let (mut db, db_path) = setup();
 
-        let mut initial_metadata = create_test_pre_activity_metadata("bc1qtest123".to_string(), ActivityType::Onchain, vec!["tag1".to_string()]);
+        let mut initial_metadata = create_test_pre_activity_metadata(
+            "bc1qtest123".to_string(),
+            ActivityType::Onchain,
+            vec!["tag1".to_string()],
+        );
         initial_metadata.address = Some("bc1qtest123".to_string());
         initial_metadata.is_receive = true;
         assert!(db.add_pre_activity_metadata(&initial_metadata).is_ok());
 
-        let pre_activity_metadata = vec![
-            PreActivityMetadata {
-                payment_id: "bc1qtest123".to_string(),
-                tags: vec!["tag1".to_string(), "tag2".to_string(), "tag3".to_string()],
-                payment_hash: None,
-                tx_id: None,
-                address: Some("bc1qtest123".to_string()),
-                is_receive: true,
-                fee_rate: 0,
-                is_transfer: false,
-                channel_id: None,
-                created_at: 0,
-            },
-        ];
+        let pre_activity_metadata = vec![PreActivityMetadata {
+            payment_id: "bc1qtest123".to_string(),
+            tags: vec!["tag1".to_string(), "tag2".to_string(), "tag3".to_string()],
+            payment_hash: None,
+            tx_id: None,
+            address: Some("bc1qtest123".to_string()),
+            is_receive: true,
+            fee_rate: 0,
+            is_transfer: false,
+            channel_id: None,
+            created_at: 0,
+        }];
 
-        assert!(db.upsert_pre_activity_metadata(&pre_activity_metadata).is_ok());
+        assert!(db
+            .upsert_pre_activity_metadata(&pre_activity_metadata)
+            .is_ok());
 
         // Verify all tags are present
         let all_tags = db.get_all_pre_activity_metadata().unwrap();
@@ -3041,23 +3643,23 @@ mod tests {
     fn test_upsert_pre_activity_metadata_empty_identifier() {
         let (mut db, db_path) = setup();
 
-        let pre_activity_metadata = vec![
-            PreActivityMetadata {
-                payment_id: "".to_string(),
-                tags: vec!["tag1".to_string()],
-                payment_hash: None,
-                tx_id: None,
-                address: None,
-                is_receive: false,
-                fee_rate: 0,
-                is_transfer: false,
-                channel_id: None,
-                created_at: 0,
-            },
-        ];
+        let pre_activity_metadata = vec![PreActivityMetadata {
+            payment_id: "".to_string(),
+            tags: vec!["tag1".to_string()],
+            payment_hash: None,
+            tx_id: None,
+            address: None,
+            is_receive: false,
+            fee_rate: 0,
+            is_transfer: false,
+            channel_id: None,
+            created_at: 0,
+        }];
 
         // Empty identifier is allowed for backup/restore (restores exactly what was backed up)
-        assert!(db.upsert_pre_activity_metadata(&pre_activity_metadata).is_ok());
+        assert!(db
+            .upsert_pre_activity_metadata(&pre_activity_metadata)
+            .is_ok());
 
         cleanup(&db_path);
     }
@@ -3066,9 +3668,17 @@ mod tests {
     fn test_backup_restore_pre_activity_metadata() {
         let (mut db, db_path) = setup();
 
-        let mut metadata1 = create_test_pre_activity_metadata("bc1qtest123".to_string(), ActivityType::Onchain, vec!["tag1".to_string(), "tag2".to_string()]);
+        let mut metadata1 = create_test_pre_activity_metadata(
+            "bc1qtest123".to_string(),
+            ActivityType::Onchain,
+            vec!["tag1".to_string(), "tag2".to_string()],
+        );
         metadata1.address = Some("bc1qtest123".to_string());
-        let metadata2 = create_test_pre_activity_metadata("lightning:invoice123".to_string(), ActivityType::Lightning, vec!["tag3".to_string()]);
+        let metadata2 = create_test_pre_activity_metadata(
+            "lightning:invoice123".to_string(),
+            ActivityType::Lightning,
+            vec!["tag3".to_string()],
+        );
 
         assert!(db.add_pre_activity_metadata(&metadata1).is_ok());
         assert!(db.add_pre_activity_metadata(&metadata2).is_ok());
@@ -3077,8 +3687,12 @@ mod tests {
         assert_eq!(backup.len(), 2);
 
         // Simulate restore: Delete and restore
-        assert!(db.delete_pre_activity_metadata(&"bc1qtest123".to_string()).is_ok());
-        assert!(db.delete_pre_activity_metadata(&"lightning:invoice123".to_string()).is_ok());
+        assert!(db
+            .delete_pre_activity_metadata(&"bc1qtest123".to_string())
+            .is_ok());
+        assert!(db
+            .delete_pre_activity_metadata(&"lightning:invoice123".to_string())
+            .is_ok());
 
         // Verify cleared
         let after_clear = db.get_all_pre_activity_metadata().unwrap();
@@ -3137,7 +3751,9 @@ mod tests {
             },
         ];
 
-        assert!(db.upsert_pre_activity_metadata(&pre_activity_metadata).is_ok());
+        assert!(db
+            .upsert_pre_activity_metadata(&pre_activity_metadata)
+            .is_ok());
 
         // Verify only the last one is stored (second replaces first)
         let all_tags = db.get_all_pre_activity_metadata().unwrap();
@@ -3155,9 +3771,27 @@ mod tests {
         let (mut db, db_path) = setup();
 
         // Add tags in non-alphabetical order
-        assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata("z_address".to_string(), ActivityType::Onchain, vec!["tag1".to_string()])).is_ok());
-        assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata("a_address".to_string(), ActivityType::Onchain, vec!["tag2".to_string()])).is_ok());
-        assert!(db.add_pre_activity_metadata(&create_test_pre_activity_metadata("m_address".to_string(), ActivityType::Onchain, vec!["tag3".to_string()])).is_ok());
+        assert!(db
+            .add_pre_activity_metadata(&create_test_pre_activity_metadata(
+                "z_address".to_string(),
+                ActivityType::Onchain,
+                vec!["tag1".to_string()]
+            ))
+            .is_ok());
+        assert!(db
+            .add_pre_activity_metadata(&create_test_pre_activity_metadata(
+                "a_address".to_string(),
+                ActivityType::Onchain,
+                vec!["tag2".to_string()]
+            ))
+            .is_ok());
+        assert!(db
+            .add_pre_activity_metadata(&create_test_pre_activity_metadata(
+                "m_address".to_string(),
+                ActivityType::Onchain,
+                vec!["tag3".to_string()]
+            ))
+            .is_ok());
 
         // Get all tags - should be sorted by payment_id
         let all_tags = db.get_all_pre_activity_metadata().unwrap();
@@ -3173,13 +3807,25 @@ mod tests {
     fn test_upsert_pre_activity_metadata_partial_update() {
         let (mut db, db_path) = setup();
 
-        let mut metadata1 = create_test_pre_activity_metadata("address1".to_string(), ActivityType::Onchain, vec!["tag1".to_string()]);
+        let mut metadata1 = create_test_pre_activity_metadata(
+            "address1".to_string(),
+            ActivityType::Onchain,
+            vec!["tag1".to_string()],
+        );
         metadata1.address = Some("address1".to_string());
         metadata1.is_receive = true;
-        let mut metadata2 = create_test_pre_activity_metadata("address2".to_string(), ActivityType::Onchain, vec!["tag2".to_string()]);
+        let mut metadata2 = create_test_pre_activity_metadata(
+            "address2".to_string(),
+            ActivityType::Onchain,
+            vec!["tag2".to_string()],
+        );
         metadata2.address = Some("address2".to_string());
         metadata2.is_receive = true;
-        let mut metadata3 = create_test_pre_activity_metadata("address3".to_string(), ActivityType::Onchain, vec!["tag3".to_string()]);
+        let mut metadata3 = create_test_pre_activity_metadata(
+            "address3".to_string(),
+            ActivityType::Onchain,
+            vec!["tag3".to_string()],
+        );
         metadata3.address = Some("address3".to_string());
         metadata3.is_receive = true;
         assert!(db.add_pre_activity_metadata(&metadata1).is_ok());
@@ -3191,20 +3837,18 @@ mod tests {
         assert_eq!(all.len(), 3);
 
         // Upsert with new tags for address2 (replaces existing tags)
-        let updated = vec![
-            PreActivityMetadata {
-                payment_id: "address2".to_string(),
-                tags: vec!["tag2_updated".to_string(), "tag2_new".to_string()],
-                payment_hash: None,
-                tx_id: None,
-                address: None,
-                is_receive: false,
-                fee_rate: 0,
-                is_transfer: false,
-                channel_id: None,
-                created_at: 0,
-            },
-        ];
+        let updated = vec![PreActivityMetadata {
+            payment_id: "address2".to_string(),
+            tags: vec!["tag2_updated".to_string(), "tag2_new".to_string()],
+            payment_hash: None,
+            tx_id: None,
+            address: None,
+            is_receive: false,
+            fee_rate: 0,
+            is_transfer: false,
+            channel_id: None,
+            created_at: 0,
+        }];
 
         assert!(db.upsert_pre_activity_metadata(&updated).is_ok());
 
@@ -3232,10 +3876,18 @@ mod tests {
     fn test_get_all_pre_activity_metadata_after_transfer() {
         let (mut db, db_path) = setup();
 
-        let mut metadata1 = create_test_pre_activity_metadata("bc1qtest123".to_string(), ActivityType::Onchain, vec!["tag1".to_string(), "tag2".to_string()]);
+        let mut metadata1 = create_test_pre_activity_metadata(
+            "bc1qtest123".to_string(),
+            ActivityType::Onchain,
+            vec!["tag1".to_string(), "tag2".to_string()],
+        );
         metadata1.address = Some("bc1qtest123".to_string());
         metadata1.is_receive = true;
-        let mut metadata2 = create_test_pre_activity_metadata("bc1qtest456".to_string(), ActivityType::Onchain, vec!["tag3".to_string()]);
+        let mut metadata2 = create_test_pre_activity_metadata(
+            "bc1qtest456".to_string(),
+            ActivityType::Onchain,
+            vec!["tag3".to_string()],
+        );
         metadata2.address = Some("bc1qtest456".to_string());
         metadata2.is_receive = true;
         assert!(db.add_pre_activity_metadata(&metadata1).is_ok());
@@ -3264,10 +3916,10 @@ mod tests {
     fn test_is_address_used_no_activities() {
         let (db, db_path) = setup();
         let address = "bc1qunused123".to_string();
-        
+
        let is_used = db.is_address_used(&address).unwrap();
        assert!(!is_used, "Address with no activities should return false");
-        
+
        cleanup(&db_path);
    }
 
@@ -3275,17 +3927,17 @@ mod tests {
     fn test_is_address_used_with_received_activity() {
         let (mut db, db_path) = setup();
         let address = "bc1qreceived123".to_string();
-        
+
         let mut activity = create_test_onchain_activity();
         activity.address = address.clone();
         activity.tx_type = PaymentType::Received;
         activity.id = "test_received_1".to_string();
-        
+
         db.insert_onchain_activity(&activity).unwrap();
-        
+
         let is_used = db.is_address_used(&address).unwrap();
         assert!(is_used, "Address with received activity should return true");
-        
+
         cleanup(&db_path);
     }
 
@@ -3293,17 +3945,17 @@ mod tests {
     fn test_is_address_used_with_sent_activity() {
         let (mut db, db_path) = setup();
         let address = "bc1qsent123".to_string();
-        
+
         let mut activity = create_test_onchain_activity();
         activity.address = address.clone();
         activity.tx_type = PaymentType::Sent;
         activity.id = "test_sent_1".to_string();
-        
+
         db.insert_onchain_activity(&activity).unwrap();
-        
+
         let is_used = db.is_address_used(&address).unwrap();
         assert!(is_used, "Address with sent activity should return true");
-        
+
         cleanup(&db_path);
     }
 
@@ -3311,7 +3963,7 @@ mod tests {
     fn test_is_address_used_with_multiple_activities() {
         let (mut db, db_path) = setup();
         let address = "bc1qmultiple123".to_string();
-        
+
         // Add received activity
         let mut received_activity = create_test_onchain_activity();
         received_activity.address = address.clone();
@@ -3319,7 +3971,7 @@ mod tests {
         received_activity.id = "test_received_1".to_string();
         received_activity.confirmed = true;
         db.insert_onchain_activity(&received_activity).unwrap();
-        
+
         // Add sent activity
         let mut sent_activity = create_test_onchain_activity();
         sent_activity.address = address.clone();
@@ -3327,10 +3979,13 @@ mod tests {
         sent_activity.id = "test_sent_1".to_string();
         sent_activity.confirmed = false;
         db.insert_onchain_activity(&sent_activity).unwrap();
-        
+
         let is_used = db.is_address_used(&address).unwrap();
-        assert!(is_used, "Address with multiple activities should return true");
-        
+        assert!(
+            is_used,
+            "Address with multiple activities should return true"
+        );
+
         cleanup(&db_path);
     }
 
@@ -3338,18 +3993,21 @@ mod tests {
     fn test_is_address_used_with_unconfirmed_activity() {
         let (mut db, db_path) = setup();
         let address = "bc1qunconfirmed123".to_string();
-        
+
         let mut activity = create_test_onchain_activity();
         activity.address = address.clone();
         activity.tx_type = PaymentType::Received;
         activity.id = "test_unconfirmed_1".to_string();
         activity.confirmed = false;
-        
+
         db.insert_onchain_activity(&activity).unwrap();
-        
+
         let is_used = db.is_address_used(&address).unwrap();
-        assert!(is_used, "Address with unconfirmed activity should return true");
-        
+        assert!(
+            is_used,
+            "Address with unconfirmed activity should return true"
+        );
+
         cleanup(&db_path);
     }
 
@@ -3358,22 +4016,22 @@ mod tests {
         let (mut db, db_path) = setup();
         let used_address = "bc1qused123".to_string();
         let unused_address = "bc1qunused456".to_string();
-        
+
         // Add activity for one address
         let mut activity = create_test_onchain_activity();
         activity.address = used_address.clone();
         activity.tx_type = PaymentType::Received;
         activity.id = "test_used_1".to_string();
         db.insert_onchain_activity(&activity).unwrap();
-        
+
         // Check used address
         let is_used = db.is_address_used(&used_address).unwrap();
         assert!(is_used, "Address with activity should return true");
-        
+
         // Check unused address
         let is_unused = db.is_address_used(&unused_address).unwrap();
         assert!(!is_unused, "Address without activity should return false");
-        
+
         cleanup(&db_path);
     }
 
@@ -3381,26 +4039,32 @@ mod tests {
     fn test_is_address_used_only_onchain_activities() {
         let (mut db, db_path) = setup();
         let address = "bc1qonchain123".to_string();
-        
+
         // Add lightning activity (should not affect the check)
         let lightning_activity = create_test_lightning_activity();
         db.insert_lightning_activity(&lightning_activity).unwrap();
-        
+
         // Address should still be unused since no onchain activity
         let is_used = db.is_address_used(&address).unwrap();
-        assert!(!is_used, "Address should return false if only lightning activities exist");
-        
+        assert!(
+            !is_used,
+            "Address should return false if only lightning activities exist"
+        );
+
         // Now add onchain activity
         let mut onchain_activity = create_test_onchain_activity();
         onchain_activity.address = address.clone();
         onchain_activity.tx_type = PaymentType::Received;
         onchain_activity.id = "test_onchain_1".to_string();
         db.insert_onchain_activity(&onchain_activity).unwrap();
-        
+
         // Now it should be used
         let is_used_after = db.is_address_used(&address).unwrap();
-        assert!(is_used_after, "Address should return true after onchain activity is added");
-        
+        assert!(
+            is_used_after,
+            "Address should return true after onchain activity is added"
+        );
+
         cleanup(&db_path);
     }
 
@@ -3408,10 +4072,10 @@ mod tests {
     fn test_get_activity_by_tx_id_not_found() {
         let (db, db_path) = setup();
         let tx_id = "nonexistent_tx_id".to_string();
-        
+
         let activity = db.get_activity_by_tx_id(&tx_id).unwrap();
         assert!(activity.is_none(), "Non-existent tx_id should return None");
-        
+
         cleanup(&db_path);
     }
 
@@ -3419,16 +4083,16 @@ mod tests {
     fn test_get_activity_by_tx_id_found() {
         let (mut db, db_path) = setup();
         let tx_id = "test_tx_id_123".to_string();
-        
+
         let mut activity = create_test_onchain_activity();
         activity.tx_id = tx_id.clone();
         activity.id = "test_activity_1".to_string();
-        
+
         db.insert_onchain_activity(&activity).unwrap();
-        
+
         let retrieved = db.get_activity_by_tx_id(&tx_id).unwrap();
         assert!(retrieved.is_some(), "Activity should be found by tx_id");
-        
+
         if let Some(retrieved_activity) = retrieved {
             assert_eq!(retrieved_activity.tx_id, tx_id);
             assert_eq!(retrieved_activity.id, activity.id);
@@ -3436,7 +4100,7 @@ mod tests {
         } else {
             panic!("Expected Onchain activity");
         }
-        
+
         cleanup(&db_path);
     }
 
@@ -3444,25 +4108,25 @@ mod tests {
     fn test_get_activity_by_tx_id_multiple_activities() {
         let (mut db, db_path) = setup();
         let tx_id = "shared_tx_id".to_string();
-        
+
         // Insert first activity
         let mut activity1 = create_test_onchain_activity();
         activity1.tx_id = tx_id.clone();
         activity1.id = "test_activity_1".to_string();
         activity1.value = 10000;
         db.insert_onchain_activity(&activity1).unwrap();
-        
+
         // Insert second activity with same tx_id (shouldn't happen in practice, but test it)
         let mut activity2 = create_test_onchain_activity();
         activity2.tx_id = tx_id.clone();
         activity2.id = "test_activity_2".to_string();
         activity2.value = 20000;
         db.insert_onchain_activity(&activity2).unwrap();
-        
+
         // Should return the first one found
         let retrieved = db.get_activity_by_tx_id(&tx_id).unwrap();
         assert!(retrieved.is_some(), "Activity should be found by tx_id");
-        
+
         if let Some(retrieved_activity) = retrieved {
             assert_eq!(retrieved_activity.tx_id, tx_id);
             // Should return one of them (implementation dependent which one)
@@ -3470,7 +4134,7 @@ mod tests {
         } else {
             panic!("Expected Onchain activity");
         }
-        
+
         cleanup(&db_path);
     }
 
@@ -3479,17 +4143,17 @@ mod tests {
         let (mut db, db_path) = setup();
         let tx_id1 = "tx_id_1".to_string();
         let tx_id2 = "tx_id_2".to_string();
-        
+
         let mut activity1 = create_test_onchain_activity();
         activity1.tx_id = tx_id1.clone();
         activity1.id = "test_activity_1".to_string();
         db.insert_onchain_activity(&activity1).unwrap();
-        
+
         let mut activity2 = create_test_onchain_activity();
         activity2.tx_id = tx_id2.clone();
         activity2.id = "test_activity_2".to_string();
         db.insert_onchain_activity(&activity2).unwrap();
-        
+
         // Get first activity
         let retrieved1 = db.get_activity_by_tx_id(&tx_id1).unwrap();
         assert!(retrieved1.is_some(), "First activity should be found");
@@ -3499,7 +4163,7 @@ mod tests {
         } else {
             panic!("Expected Onchain activity");
         }
-        
+
         // Get second activity
         let retrieved2 = db.get_activity_by_tx_id(&tx_id2).unwrap();
         assert!(retrieved2.is_some(), "Second activity should be found");
@@ -3509,7 +4173,7 @@ mod tests {
         } else {
             panic!("Expected Onchain activity");
         }
-        
+
         cleanup(&db_path);
     }
 
@@ -3517,25 +4181,31 @@ mod tests {
     fn test_get_activity_by_tx_id_only_onchain() {
         let (mut db, db_path) = setup();
         let tx_id = "onchain_tx_id".to_string();
-        
+
         // Add lightning activity (should not be found by tx_id)
         let lightning_activity = create_test_lightning_activity();
         db.insert_lightning_activity(&lightning_activity).unwrap();
-        
+
         // Try to get by tx_id - should return None since lightning doesn't have tx_id
         let retrieved = db.get_activity_by_tx_id(&tx_id).unwrap();
-        assert!(retrieved.is_none(), "Lightning activities should not be found by tx_id");
-        
+        assert!(
+            retrieved.is_none(),
+            "Lightning activities should not be found by tx_id"
+        );
+
         // Add onchain activity
         let mut onchain_activity = create_test_onchain_activity();
         onchain_activity.tx_id = tx_id.clone();
         onchain_activity.id = "test_onchain_1".to_string();
         db.insert_onchain_activity(&onchain_activity).unwrap();
-        
+
         // Now should find it
         let retrieved = db.get_activity_by_tx_id(&tx_id).unwrap();
-        assert!(retrieved.is_some(), "Onchain activity should be found by tx_id");
-        
+        assert!(
+            retrieved.is_some(),
+            "Onchain activity should be found by tx_id"
+        );
+
         cleanup(&db_path);
     }
 
@@ -3547,15 +4217,23 @@ mod tests {
 
         // Verify initial state - seen_at should be None
         let retrieved = db.get_activity_by_id(&activity.id).unwrap().unwrap();
-        assert!(retrieved.get_seen_at().is_none(), "seen_at should be None initially");
+        assert!(
+            retrieved.get_seen_at().is_none(),
+            "seen_at should be None initially"
+        );
 
         // Mark as seen
         let seen_timestamp = 1234567900u64;
-        db.mark_activity_as_seen(&activity.id, seen_timestamp).unwrap();
+        db.mark_activity_as_seen(&activity.id, seen_timestamp)
+            .unwrap();
 
         // Verify seen_at is now set
         let retrieved = db.get_activity_by_id(&activity.id).unwrap().unwrap();
-        assert_eq!(retrieved.get_seen_at(), Some(seen_timestamp), "seen_at should be set");
+        assert_eq!(
+            retrieved.get_seen_at(),
+            Some(seen_timestamp),
+            "seen_at should be set"
+        );
 
         cleanup(&db_path);
     }
@@ -3568,15 +4246,23 @@ mod tests {
 
         // Verify initial state - seen_at should be None
         let retrieved = db.get_activity_by_id(&activity.id).unwrap().unwrap();
-        assert!(retrieved.get_seen_at().is_none(), "seen_at should be None initially");
+        assert!(
+            retrieved.get_seen_at().is_none(),
+            "seen_at should be None initially"
+        );
 
         // Mark as seen
         let seen_timestamp = 1234567900u64;
-        db.mark_activity_as_seen(&activity.id, seen_timestamp).unwrap();
+        db.mark_activity_as_seen(&activity.id, seen_timestamp)
+            .unwrap();
 
         // Verify seen_at is now set
         let retrieved = db.get_activity_by_id(&activity.id).unwrap().unwrap();
-        assert_eq!(retrieved.get_seen_at(), Some(seen_timestamp), "seen_at should be set");
+        assert_eq!(
+            retrieved.get_seen_at(),
+            Some(seen_timestamp),
+            "seen_at should be set"
+        );
 
         cleanup(&db_path);
     }
@@ -3595,35 +4281,42 @@ mod tests {
     #[test]
     fn test_seen_at_preserved_in_get_activities() {
         let (mut db, db_path) = setup();
-        
+
         // Insert two activities
         let mut onchain = create_test_onchain_activity();
         onchain.timestamp = 1000;
         let mut lightning = create_test_lightning_activity();
         lightning.timestamp = 2000;
-        
+
         db.insert_onchain_activity(&onchain).unwrap();
         db.insert_lightning_activity(&lightning).unwrap();
-        
+
         // Mark only onchain as seen
         let seen_timestamp = 3000u64;
-        db.mark_activity_as_seen(&onchain.id, seen_timestamp).unwrap();
-        
+        db.mark_activity_as_seen(&onchain.id, seen_timestamp)
+            .unwrap();
+
         // Get all activities
-        let activities = db.get_activities(None, None, None, None, None, None, None, None).unwrap();
+        let activities = db
+            .get_activities(None, None, None, None, None, None, None, None)
+            .unwrap();
         assert_eq!(activities.len(), 2);
-        
+
         for activity in activities {
             match activity {
                 Activity::Onchain(o) => {
-                    assert_eq!(o.seen_at, Some(seen_timestamp), "Onchain should have seen_at set");
+                    assert_eq!(
+                        o.seen_at,
+                        Some(seen_timestamp),
+                        "Onchain should have seen_at set"
+                    );
                 }
                 Activity::Lightning(l) => {
                     assert!(l.seen_at.is_none(), "Lightning should not have seen_at set");
                 }
            }
        }
-        
+
        cleanup(&db_path);
    }
 
@@ -3635,11 +4328,16 @@ mod tests {
 
         // Mark as seen
         let seen_timestamp = 1234567900u64;
-        db.mark_activity_as_seen(&activity.id, seen_timestamp).unwrap();
+        db.mark_activity_as_seen(&activity.id, seen_timestamp)
+            .unwrap();
 
         // Retrieve by tx_id and verify seen_at
         let retrieved = db.get_activity_by_tx_id(&activity.tx_id).unwrap().unwrap();
-        assert_eq!(retrieved.seen_at, Some(seen_timestamp), "seen_at should be preserved when getting by tx_id");
+        assert_eq!(
+            retrieved.seen_at,
+            Some(seen_timestamp),
+            "seen_at should be preserved when getting by tx_id"
+        );
 
         cleanup(&db_path);
     }
@@ -3648,15 +4346,13 @@ mod tests {
         TransactionDetails {
             tx_id: "tx123abc".to_string(),
             amount_sats: 50000,
-            inputs: vec![
-                TxInput {
-                    txid: "prev_tx_abc".to_string(),
-                    vout: 0,
-                    scriptsig: "00".to_string(),
-                    witness: vec!["witness1".to_string(), "witness2".to_string()],
-                    sequence: 0xffffffff,
-                },
-            ],
+            inputs: vec![TxInput {
+                txid: "prev_tx_abc".to_string(),
+                vout: 0,
+                scriptsig: "00".to_string(),
+                witness: vec!["witness1".to_string(), "witness2".to_string()],
+                sequence: 0xffffffff,
+            }],
             outputs: vec![
                 TxOutput {
                     scriptpubkey: "0014abc123".to_string(),
@@ -3680,10 +4376,10 @@ mod tests {
     fn test_upsert_and_get_transaction_details() {
         let (mut db, db_path) = setup();
         let details = create_test_transaction_details();
-        
+
         // Upsert
         db.upsert_transaction_details(&[details.clone()]).unwrap();
-        
+
         // Retrieve
         let retrieved = db.get_transaction_details(&details.tx_id).unwrap().unwrap();
         assert_eq!(retrieved.tx_id, details.tx_id);
@@ -3692,17 +4388,17 @@ mod tests {
         assert_eq!(retrieved.outputs.len(), 2);
         assert_eq!(retrieved.inputs[0].txid, "prev_tx_abc");
         assert_eq!(retrieved.outputs[0].value, 45000);
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_transaction_details_not_found() {
         let (db, db_path) = setup();
-        
+
         let retrieved = db.get_transaction_details("nonexistent_tx").unwrap();
         assert!(retrieved.is_none());
-        
+
         cleanup(&db_path);
     }
 
@@ -3710,18 +4406,18 @@ mod tests {
     fn test_upsert_transaction_details_updates_existing() {
         let (mut db, db_path) = setup();
         let mut details = create_test_transaction_details();
-        
+
         // Initial insert
         db.upsert_transaction_details(&[details.clone()]).unwrap();
-        
+
         // Update with new amount
         details.amount_sats = 100000;
         db.upsert_transaction_details(&[details.clone()]).unwrap();
-        
+
         // Verify update
         let retrieved = db.get_transaction_details(&details.tx_id).unwrap().unwrap();
         assert_eq!(retrieved.amount_sats, 100000);
-        
+
         cleanup(&db_path);
     }
 
@@ -3729,165 +4425,173 @@ mod tests {
     fn test_delete_transaction_details() {
         let (mut db, db_path) = setup();
         let details = create_test_transaction_details();
-        
+
         db.upsert_transaction_details(&[details.clone()]).unwrap();
-        
+
         // Delete
         let deleted = db.delete_transaction_details(&details.tx_id).unwrap();
         assert!(deleted);
-        
+
         // Verify deletion
         let retrieved = db.get_transaction_details(&details.tx_id).unwrap();
         assert!(retrieved.is_none());
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_delete_nonexistent_transaction_details() {
         let (mut db, db_path) = setup();
-        
+
         let deleted = db.delete_transaction_details("nonexistent_tx").unwrap();
         assert!(!deleted);
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_upsert_transaction_details_multiple() {
         let (mut db, db_path) = setup();
-        
+
         let details1 = create_test_transaction_details();
         let mut details2 = create_test_transaction_details();
         details2.tx_id = "tx456def".to_string();
         details2.amount_sats = -25000; // Outgoing
-        
-        db.upsert_transaction_details(&[details1.clone(), details2.clone()]).unwrap();
-        
+
+        db.upsert_transaction_details(&[details1.clone(), details2.clone()])
+            .unwrap();
+
         // Verify both were inserted
         let all = db.get_all_transaction_details().unwrap();
         assert_eq!(all.len(), 2);
-        
-        let retrieved1 = db.get_transaction_details(&details1.tx_id).unwrap().unwrap();
+
+        let retrieved1 = db
+            .get_transaction_details(&details1.tx_id)
+            .unwrap()
+            .unwrap();
         assert_eq!(retrieved1.amount_sats, 50000);
-        
-        let retrieved2 = db.get_transaction_details(&details2.tx_id).unwrap().unwrap();
+
+        let retrieved2 = db
+            .get_transaction_details(&details2.tx_id)
+            .unwrap()
+            .unwrap();
         assert_eq!(retrieved2.amount_sats, -25000);
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_get_all_transaction_details() {
         let (mut db, db_path) = setup();
-        
+
         // Initially empty
         let all = db.get_all_transaction_details().unwrap();
         assert!(all.is_empty());
-        
+
         // Add some
         let details1 = create_test_transaction_details();
         let mut details2 = create_test_transaction_details();
         details2.tx_id = "tx789ghi".to_string();
-        
-        db.upsert_transaction_details(&[details1, details2]).unwrap();
-        
+
+        db.upsert_transaction_details(&[details1, details2])
+            .unwrap();
+
         let all = db.get_all_transaction_details().unwrap();
         assert_eq!(all.len(), 2);
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_wipe_all_transaction_details() {
         let (mut db, db_path) = setup();
-        
+
         let details1 = create_test_transaction_details();
         let mut details2 = create_test_transaction_details();
         details2.tx_id = "tx999xyz".to_string();
-        
-        db.upsert_transaction_details(&[details1, details2]).unwrap();
-        
+
+        db.upsert_transaction_details(&[details1, details2])
+            .unwrap();
+
         // Wipe all
         db.wipe_all_transaction_details().unwrap();
-        
+
         let all = db.get_all_transaction_details().unwrap();
         assert!(all.is_empty());
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_transaction_details_empty_tx_id_fails() {
         let (mut db, db_path) = setup();
-        
+
         let mut details = create_test_transaction_details();
         details.tx_id = "".to_string();
-        
+
         let result = db.upsert_transaction_details(&[details]);
         assert!(result.is_err());
-        
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_transaction_details_complex_witness() {
         let (mut db, db_path) = setup();
-        
+
         let details = TransactionDetails {
             tx_id: "tx_with_complex_witness".to_string(),
             amount_sats: 10000,
-            inputs: vec![
-                TxInput {
-                    txid: "prev_tx".to_string(),
-                    vout: 1,
-                    scriptsig: "".to_string(),
-                    witness: vec![
-                        "304402...".to_string(),
-                        "02abc...".to_string(),
-                        "c0...".to_string(),
-                    ],
-                    sequence: 0xfffffffd,
-                },
-            ],
-            outputs: vec![
-                TxOutput {
-                    scriptpubkey: "5120...".to_string(),
-                    scriptpubkey_type: Some("p2tr".to_string()),
-                    scriptpubkey_address: Some("bc1p...".to_string()),
-                    value: 9500,
-                    n: 0,
-                },
-            ],
+            inputs: vec![TxInput {
+                txid: "prev_tx".to_string(),
+                vout: 1,
+                scriptsig: "".to_string(),
+                witness: vec![
+                    "304402...".to_string(),
+                    "02abc...".to_string(),
+                    "c0...".to_string(),
+                ],
+                sequence: 0xfffffffd,
+            }],
+            outputs: vec![TxOutput {
+                scriptpubkey: "5120...".to_string(),
+                scriptpubkey_type: Some("p2tr".to_string()),
+                scriptpubkey_address: Some("bc1p...".to_string()),
+                value: 9500,
+                n: 0,
+            }],
         };
-        
+
         db.upsert_transaction_details(&[details.clone()]).unwrap();
-        
+
         let retrieved = db.get_transaction_details(&details.tx_id).unwrap().unwrap();
         assert_eq!(retrieved.inputs[0].witness.len(), 3);
-        assert_eq!(retrieved.outputs[0].scriptpubkey_type, Some("p2tr".to_string()));
-        
+        assert_eq!(
+            retrieved.outputs[0].scriptpubkey_type,
+            Some("p2tr".to_string())
+        );
+
         cleanup(&db_path);
     }
 
     #[test]
     fn test_wipe_all_includes_transaction_details() {
         let (mut db, db_path) = setup();
-        
+
         // Add activity and transaction details
         let activity = create_test_onchain_activity();
         db.insert_onchain_activity(&activity).unwrap();
-        
+
         let details = create_test_transaction_details();
         db.upsert_transaction_details(&[details]).unwrap();
-        
+
         // Wipe all
         db.wipe_all().unwrap();
-        
+
         // Verify transaction details are also wiped
         let all = db.get_all_transaction_details().unwrap();
         assert!(all.is_empty());
-        
+
         cleanup(&db_path);
     }
 }
diff --git a/src/modules/activity/types.rs b/src/modules/activity/types.rs
index 5bba0da..a1cc9ef 100644
--- a/src/modules/activity/types.rs
+++ b/src/modules/activity/types.rs
@@ -1,7 +1,7 @@
-use serde::{Deserialize, Serialize};
-use thiserror::Error;
 use crate::activity::ActivityError;
 use crate::modules::blocktank::BlocktankError;
+use serde::{Deserialize, Serialize};
+use thiserror::Error;
 
 #[derive(Debug, uniffi::Enum)]
 pub enum Activity {
@@ -232,25 +232,19 @@ pub enum SortDirection {
 #[non_exhaustive]
 pub enum DbError {
     #[error("DB Activity Error: {error_details}")]
-    DbActivityError {
-        error_details: ActivityError
-    },
+    DbActivityError { error_details: ActivityError },
     #[error("DB Blocktank Error: {error_details}")]
-    DbBlocktankError {
-        error_details: BlocktankError
-    },
+    DbBlocktankError { error_details: BlocktankError },
     #[error("Initialization Error: {error_details}")]
-    InitializationError {
-        error_details: String
-    },
+    InitializationError { error_details: String },
 }
 
 impl From<ActivityError> for DbError {
     fn from(error: ActivityError) -> Self {
         DbError::DbActivityError {
-            error_details: error
+            error_details: error,
         }
     }
 }
@@ -258,7 +252,7 @@ impl From<ActivityError> for DbError {
 impl From<BlocktankError> for DbError {
     fn from(error: BlocktankError) -> Self {
         DbError::DbBlocktankError {
-            error_details: error
+            error_details: error,
         }
     }
-}
\ No newline at end of file
+}
diff --git a/src/modules/blocktank/api.rs b/src/modules/blocktank/api.rs
index 8445605..bb05e40 100644
--- a/src/modules/blocktank/api.rs
+++ b/src/modules/blocktank/api.rs
@@ -1,23 +1,20 @@
+use crate::modules::blocktank::{BlocktankDB, BlocktankError, IGift};
 use rust_blocktank_client::{
-    CreateCjitOptions,
-    CreateOrderOptions,
-    IBt0ConfMinTxFeeWindow,
-    IBtBolt11Invoice,
-    IBtEstimateFeeResponse,
-    IBtEstimateFeeResponse2,
-    IBtInfo,
-    IBtOrder,
-    ICJitEntry
+    CreateCjitOptions, CreateOrderOptions, IBt0ConfMinTxFeeWindow, IBtBolt11Invoice,
+    IBtEstimateFeeResponse, IBtEstimateFeeResponse2, IBtInfo, IBtOrder, ICJitEntry,
 };
-use crate::modules::blocktank::{BlocktankDB, BlocktankError, IGift};
 
 impl BlocktankDB {
     /// Fetches service information from Blocktank and stores it in the database.
     /// Returns the fetched information if successful.
     pub async fn fetch_and_store_info(&self) -> Result<IBtInfo, BlocktankError> {
-        let info = self.client.get_info().await.map_err(|e| BlocktankError::DataError {
-            error_details: format!("Failed to fetch info from Blocktank: {}", e)
-        })?;
+        let info = self
+            .client
+            .get_info()
+            .await
+            .map_err(|e| BlocktankError::DataError {
+                error_details: format!("Failed to fetch info from Blocktank: {}", e),
+            })?;
 
         self.upsert_info(&info).await?;
         Ok(info)
@@ -30,16 +27,15 @@ impl BlocktankDB {
         channel_expiry_weeks: u32,
         options: Option<CreateOrderOptions>,
     ) -> Result<IBtOrder, BlocktankError> {
-        let response = self.client.create_order(
-            lsp_balance_sat,
-            channel_expiry_weeks,
-            options
-        ).await;
+        let response = self
+            .client
+            .create_order(lsp_balance_sat, channel_expiry_weeks, options)
+            .await;
 
         println!("Raw API response: {:#?}", response);
 
         let order = response.map_err(|e| BlocktankError::DataError {
-            error_details: format!("Failed to create order with Blocktank client: {}", e)
+            error_details: format!("Failed to create order with Blocktank client: {}", e),
         })?;
 
         self.upsert_order(&order).await?;
@@ -49,26 +45,32 @@ impl BlocktankDB {
     pub async fn open_channel(
         &self,
         order_id: String,
-        connection_string: String
+        connection_string: String,
     ) -> Result<IBtOrder, BlocktankError> {
-        let response = self.client.open_channel(
-            &order_id,
-            &connection_string,
-        ).await.map_err(|e| BlocktankError::DataError {
-            error_details: format!("Failed to open channel with Blocktank client: {}", e)
-        })?;
+        let response = self
+            .client
+            .open_channel(&order_id, &connection_string)
+            .await
+            .map_err(|e| BlocktankError::DataError {
+                error_details: format!("Failed to open channel with Blocktank client: {}", e),
+            })?;
 
         self.upsert_order(&response).await?;
         Ok(response)
     }
 
     /// Fetches and updates multiple orders in the database
-    pub async fn refresh_orders(&self, order_ids: &[String]) -> Result<Vec<IBtOrder>, BlocktankError> {
-        let orders = self.client.get_orders(order_ids)
-            .await
-            .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to fetch orders: {}", e)
-            })?;
+    pub async fn refresh_orders(
+        &self,
+        order_ids: &[String],
+    ) -> Result<Vec<IBtOrder>, BlocktankError> {
+        let orders =
+            self.client
+                .get_orders(order_ids)
+                .await
+                .map_err(|e| BlocktankError::DataError {
+                    error_details: format!("Failed to fetch orders: {}", e),
+                })?;
 
         for order in &orders {
             self.upsert_order(order).await?;
@@ -85,10 +87,7 @@ impl BlocktankDB {
             return Ok(Vec::new());
         }
 
-        let order_ids: Vec<String> = active_orders
-            .iter()
-            .map(|order| order.id.clone())
-            .collect();
+        let order_ids: Vec<String> = active_orders.iter().map(|order| order.id.clone()).collect();
 
         self.refresh_orders(&order_ids).await
     }
@@ -97,10 +96,12 @@ impl BlocktankDB {
         &self,
         order_id: String,
     ) -> Result<IBt0ConfMinTxFeeWindow, BlocktankError> {
-        let response = self.client.get_min_zero_conf_tx_fee(&order_id)
+        let response = self
+            .client
+            .get_min_zero_conf_tx_fee(&order_id)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to get minimum zero-conf transaction fee: {}", e)
+                error_details: format!("Failed to get minimum zero-conf transaction fee: {}", e),
             })?;
 
         Ok(response)
@@ -112,13 +113,13 @@ impl BlocktankDB {
         channel_expiry_weeks: u32,
         options: Option<CreateOrderOptions>,
     ) -> Result<IBtEstimateFeeResponse, BlocktankError> {
-        let response = self.client.estimate_order_fee(
-            lsp_balance_sat,
-            channel_expiry_weeks,
-            options
-        ).await.map_err(|e| BlocktankError::DataError {
-            error_details: format!("Failed to estimate order fee: {}", e)
-        })?;
+        let response = self
+            .client
+            .estimate_order_fee(lsp_balance_sat, channel_expiry_weeks, options)
+            .await
+            .map_err(|e| BlocktankError::DataError {
+                error_details: format!("Failed to estimate order fee: {}", e),
+            })?;
 
         Ok(response)
     }
@@ -129,13 +130,13 @@ impl BlocktankDB {
         channel_expiry_weeks: u32,
         options: Option<CreateOrderOptions>,
     ) -> Result<IBtEstimateFeeResponse2, BlocktankError> {
-        let response = self.client.estimate_order_fee_full(
-            lsp_balance_sat,
-            channel_expiry_weeks,
-            options
-        ).await.map_err(|e| BlocktankError::DataError {
-            error_details: format!("Failed to estimate full order fee: {}", e)
-        })?;
+        let response = self
+            .client
+            .estimate_order_fee_full(lsp_balance_sat, channel_expiry_weeks, options)
+            .await
+            .map_err(|e| BlocktankError::DataError {
+                error_details: format!("Failed to estimate full order fee: {}", e),
+            })?;
 
         Ok(response)
     }
@@ -149,16 +150,20 @@ impl BlocktankDB {
         channel_expiry_weeks: u32,
         options: Option<CreateCjitOptions>,
     ) -> Result<ICJitEntry, BlocktankError> {
-        let response = self.client.create_cjit_entry(
-            channel_size_sat,
-            invoice_sat,
-            invoice_description,
-            node_id,
-            channel_expiry_weeks,
-            options
-        ).await.map_err(|e| BlocktankError::DataError {
-            error_details: format!("Failed to create CJIT entry: {}", e)
-        })?;
+        let response = self
+            .client
+            .create_cjit_entry(
+                channel_size_sat,
+                invoice_sat,
+                invoice_description,
+                node_id,
+                channel_expiry_weeks,
+                options,
+            )
+            .await
+            .map_err(|e| BlocktankError::DataError {
+                error_details: format!("Failed to create CJIT entry: {}", e),
+            })?;
 
         self.upsert_cjit_entry(&response).await?;
         Ok(response)
@@ -167,11 +172,13 @@ impl BlocktankDB {
     /// Fetches a CJIT entry by ID from Blocktank and stores it in the database.
     /// Returns the fetched CJIT entry if successful.
     pub async fn refresh_cjit_entry(&self, entry_id: &str) -> Result<ICJitEntry, BlocktankError> {
-        let response = self.client.get_cjit_entry(entry_id)
-            .await
-            .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to fetch CJIT entry from Blocktank: {}", e)
-            })?;
+        let response =
+            self.client
+                .get_cjit_entry(entry_id)
+                .await
+                .map_err(|e| BlocktankError::DataError {
+                    error_details: format!("Failed to fetch CJIT entry from Blocktank: {}", e),
+                })?;
 
         self.upsert_cjit_entry(&response).await?;
         Ok(response)
@@ -208,10 +215,11 @@ impl BlocktankDB {
 
     /// Mines blocks on the regtest network
     pub async fn regtest_mine(&self, count: Option) -> Result<(), BlocktankError> {
-        self.client.regtest_mine(count)
+        self.client
+            .regtest_mine(count)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to mine blocks: {}", e)
+                error_details: format!("Failed to mine blocks: {}", e),
             })
     }
 
@@ -221,10 +229,11 @@ impl BlocktankDB {
         address: &str,
         amount_sat: Option,
     ) -> Result {
-        self.client.regtest_deposit(address, amount_sat)
+        self.client
+            .regtest_deposit(address, amount_sat)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to deposit to address: {}", e)
+                error_details: format!("Failed to deposit to address: {}", e),
            })
    }
 
@@ -234,19 +243,24 @@ impl BlocktankDB {
         invoice: &str,
         amount_sat: Option,
     ) -> Result {
-        self.client.regtest_pay(invoice, amount_sat)
+        self.client
+            .regtest_pay(invoice, amount_sat)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to pay invoice: {}", e)
+                error_details: format!("Failed to pay invoice: {}", e),
             })
     }
 
     /// Gets paid invoice on the regtest network
-    pub async fn regtest_get_payment(&self, payment_id: &str) -> Result {
-        self.client.regtest_get_payment(payment_id)
+    pub async fn regtest_get_payment(
+        &self,
+        payment_id: &str,
+    ) -> Result {
+        self.client
+            .regtest_get_payment(payment_id)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to get payment: {}", e)
+                error_details: format!("Failed to get payment: {}", e),
             })
     }
 
@@ -257,10 +271,11 @@ impl BlocktankDB {
         vout: u32,
         force_close_after_s: Option,
     ) -> Result {
-        self.client.regtest_close_channel(funding_tx_id, vout, force_close_after_s)
+        self.client
+            .regtest_close_channel(funding_tx_id, vout, force_close_after_s)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to close channel: {}", e)
+                error_details: format!("Failed to close channel: {}", e),
             })
     }
 
@@ -276,19 +291,20 @@ impl BlocktankDB {
         is_production: Option,
         custom_url: Option<&str>,
     ) -> Result {
-        self.client.register_device(
-            device_token,
-            public_key,
-            features,
-            node_id,
-            iso_timestamp,
-            signature,
-            is_production,
-            custom_url
-        )
+        self.client
+            .register_device(
+                device_token,
+                public_key,
+                features,
+                node_id,
+                iso_timestamp,
+                signature,
+                is_production,
+                custom_url,
+            )
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to register device: {}", e)
+                error_details: format!("Failed to register device: {}", e),
             })
     }
 
@@ -300,54 +316,58 @@ impl BlocktankDB {
         notification_type: Option<&str>,
         custom_url: Option<&str>,
     ) -> Result {
-        self.client.test_notification(
-            device_token,
-            secret_message,
-            notification_type,
-            custom_url
-        )
+        self.client
+            .test_notification(device_token, secret_message, notification_type, custom_url)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to send test notification: {}", e)
+                error_details: format!("Failed to send test notification: {}", e),
             })
     }
 
     /// Makes a payment for a gift invoice
     pub async fn gift_pay(&self, invoice: &str) -> Result {
-        self.client.gift_pay(invoice)
+        self.client
+            .gift_pay(invoice)
             .await
             .map_err(|e| BlocktankError::DataError {
-                error_details: format!("Failed to pay gift invoice: {}", e)
+                error_details: format!("Failed to pay gift invoice: {}", e),
             })
             .map(|gift| gift.into())
     }
 
     /// Creates a gift
order - pub async fn gift_order(&self, client_node_id: &str, code: &str) -> Result { - self.client.gift_order(client_node_id, code) + pub async fn gift_order( + &self, + client_node_id: &str, + code: &str, + ) -> Result { + self.client + .gift_order(client_node_id, code) .await .map_err(|e| BlocktankError::DataError { - error_details: format!("Failed to create gift order: {}", e) + error_details: format!("Failed to create gift order: {}", e), }) .map(|gift| gift.into()) } /// Gets a paid gift payment pub async fn get_gift(&self, gift_id: &str) -> Result { - self.client.get_gift(gift_id) + self.client + .get_gift(gift_id) .await .map_err(|e| BlocktankError::DataError { - error_details: format!("Failed to get gift: {}", e) + error_details: format!("Failed to get gift: {}", e), }) .map(|gift| gift.into()) } /// Gets a paid payment pub async fn get_payment(&self, payment_id: &str) -> Result { - self.client.get_payment(payment_id) + self.client + .get_payment(payment_id) .await .map_err(|e| BlocktankError::DataError { - error_details: format!("Failed to get payment: {}", e) + error_details: format!("Failed to get payment: {}", e), }) } -} \ No newline at end of file +} diff --git a/src/modules/blocktank/db.rs b/src/modules/blocktank/db.rs index a54659e..41d78c5 100644 --- a/src/modules/blocktank/db.rs +++ b/src/modules/blocktank/db.rs @@ -1,20 +1,23 @@ +use crate::modules::blocktank::models::*; +use crate::modules::blocktank::{BlocktankDB, BlocktankError}; use rusqlite::{Connection, OptionalExtension}; use rust_blocktank_client::*; -use tokio::sync::Mutex; use std::result::Result; -use crate::modules::blocktank::{BlocktankDB, BlocktankError}; -use crate::modules::blocktank::models::*; +use tokio::sync::Mutex; pub const DEFAULT_BLOCKTANK_URL: &str = "https://api1.blocktank.to/api"; impl BlocktankDB { - pub async fn new(db_path: &str, blocktank_url: Option<&str>) -> Result { + pub async fn new( + db_path: &str, + blocktank_url: Option<&str>, + ) -> Result { let conn = 
Connection::open(db_path).map_err(|e| BlocktankError::InitializationError { error_details: format!("Error opening database: {}", e), })?; let url = blocktank_url.unwrap_or(DEFAULT_BLOCKTANK_URL); - let client = BlocktankClient::new(Some(url)) - .map_err(|e| BlocktankError::InitializationError { + let client = + BlocktankClient::new(Some(url)).map_err(|e| BlocktankError::InitializationError { error_details: format!("Failed to initialize Blocktank client: {}", e), })?; @@ -32,22 +35,27 @@ impl BlocktankDB { // Create enum tables for create_stmt in CREATE_ENUM_TABLES { - conn.execute(create_stmt, []).map_err(|e| BlocktankError::InitializationError { - error_details: format!("Failed to create enum table: {}", e), - })?; + conn.execute(create_stmt, []) + .map_err(|e| BlocktankError::InitializationError { + error_details: format!("Failed to create enum table: {}", e), + })?; } // Create main tables - conn.execute(CREATE_ORDERS_TABLE, []).map_err(|e| BlocktankError::InitializationError { - error_details: format!("Failed to create orders table: {}", e), - })?; + conn.execute(CREATE_ORDERS_TABLE, []) + .map_err(|e| BlocktankError::InitializationError { + error_details: format!("Failed to create orders table: {}", e), + })?; - conn.execute(CREATE_INFO_TABLE, []).map_err(|e| BlocktankError::InitializationError { - error_details: format!("Failed to create info table: {}", e), - })?; + conn.execute(CREATE_INFO_TABLE, []) + .map_err(|e| BlocktankError::InitializationError { + error_details: format!("Failed to create info table: {}", e), + })?; - conn.execute(CREATE_CJIT_ENTRIES_TABLE, []).map_err(|e| BlocktankError::InitializationError { - error_details: format!("Failed to create CJIT entries table: {}", e), + conn.execute(CREATE_CJIT_ENTRIES_TABLE, []).map_err(|e| { + BlocktankError::InitializationError { + error_details: format!("Failed to create CJIT entries table: {}", e), + } })?; // Populate enum tables @@ -56,17 +64,25 @@ impl BlocktankDB { conn.execute( "INSERT OR IGNORE 
INTO order_states (state, description) VALUES (?1, ?1)", [state], - ).map_err(|e| BlocktankError::InitializationError { + ) + .map_err(|e| BlocktankError::InitializationError { error_details: format!("Failed to insert order state {}: {}", state, e), })?; } // Payment states - for state in ["Created", "PartiallyPaid", "Paid", "Refunded", "RefundAvailable"] { + for state in [ + "Created", + "PartiallyPaid", + "Paid", + "Refunded", + "RefundAvailable", + ] { conn.execute( "INSERT OR IGNORE INTO payment_states (state, description) VALUES (?1, ?1)", [state], - ).map_err(|e| BlocktankError::InitializationError { + ) + .map_err(|e| BlocktankError::InitializationError { error_details: format!("Failed to insert payment state {}: {}", state, e), })?; } @@ -76,23 +92,26 @@ impl BlocktankDB { conn.execute( "INSERT OR IGNORE INTO cjit_states (state, description) VALUES (?1, ?1)", [state], - ).map_err(|e| BlocktankError::InitializationError { + ) + .map_err(|e| BlocktankError::InitializationError { error_details: format!("Failed to insert cjit state {}: {}", state, e), })?; } // Create triggers for trigger_stmt in TRIGGER_STATEMENTS { - conn.execute(trigger_stmt, []).map_err(|e| BlocktankError::InitializationError { - error_details: format!("Failed to create trigger: {}", e), - })?; + conn.execute(trigger_stmt, []) + .map_err(|e| BlocktankError::InitializationError { + error_details: format!("Failed to create trigger: {}", e), + })?; } // Create indexes for index_stmt in INDEX_STATEMENTS { - conn.execute(index_stmt, []).map_err(|e| BlocktankError::InitializationError { - error_details: format!("Failed to create index: {}", e), - })?; + conn.execute(index_stmt, []) + .map_err(|e| BlocktankError::InitializationError { + error_details: format!("Failed to create index: {}", e), + })?; } Ok(()) @@ -108,8 +127,13 @@ impl BlocktankDB { } // Attempt to create a new BlocktankClient with the new URL - let new_client = BlocktankClient::new(Some(new_url)).map_err(|e| 
BlocktankError::InitializationError { - error_details: format!("Failed to initialize Blocktank client with the new URL: {}", e), + let new_client = BlocktankClient::new(Some(new_url)).map_err(|e| { + BlocktankError::InitializationError { + error_details: format!( + "Failed to initialize Blocktank client with the new URL: {}", + e + ), + } })?; // Update both the client and URL @@ -122,28 +146,33 @@ impl BlocktankDB { pub async fn upsert_info(&self, info: &IBtInfo) -> Result<(), BlocktankError> { let conn = self.conn.lock().await; - let nodes_json = serde_json::to_string(&info.nodes).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize nodes: {}", e), - })?; + let nodes_json = + serde_json::to_string(&info.nodes).map_err(|e| BlocktankError::SerializationError { + error_details: format!("Failed to serialize nodes: {}", e), + })?; - let options_json = serde_json::to_string(&info.options).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize options: {}", e), + let options_json = serde_json::to_string(&info.options).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize options: {}", e), + } })?; - let versions_json = serde_json::to_string(&info.versions).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize versions: {}", e), + let versions_json = serde_json::to_string(&info.versions).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize versions: {}", e), + } })?; - let onchain_json = serde_json::to_string(&info.onchain).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize onchain: {}", e), + let onchain_json = serde_json::to_string(&info.onchain).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize onchain: {}", e), + } })?; - conn.execute( - "UPDATE info SET is_current 
= 0 WHERE is_current = 1", - [], - ).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to update existing info records: {}", e), - })?; + conn.execute("UPDATE info SET is_current = 0 WHERE is_current = 1", []) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to update existing info records: {}", e), + })?; conn.execute( "INSERT OR REPLACE INTO info ( @@ -158,7 +187,8 @@ impl BlocktankDB { &versions_json, &onchain_json, ), - ).map_err(|e| BlocktankError::InsertError { + ) + .map_err(|e| BlocktankError::InsertError { error_details: format!("Failed to insert info: {}", e), })?; @@ -169,57 +199,67 @@ impl BlocktankDB { pub async fn get_info(&self) -> Result, BlocktankError> { let conn = self.conn.lock().await; - let result = conn.query_row( - "SELECT version, nodes, options, versions, onchain + let result = conn + .query_row( + "SELECT version, nodes, options, versions, onchain FROM info WHERE is_current = 1", - [], - |row| { - let version: u32 = row.get(0)?; - let nodes_json: String = row.get(1)?; - let options_json: String = row.get(2)?; - let versions_json: String = row.get(3)?; - let onchain_json: String = row.get(4)?; - - let nodes: Vec = serde_json::from_str(&nodes_json) - .map_err(|e| rusqlite::Error::FromSqlConversionFailure( - 0, - rusqlite::types::Type::Text, - Box::new(e), - ))?; - - let options: IBtInfoOptions = serde_json::from_str(&options_json) - .map_err(|e| rusqlite::Error::FromSqlConversionFailure( - 0, - rusqlite::types::Type::Text, - Box::new(e), - ))?; - - let versions: IBtInfoVersions = serde_json::from_str(&versions_json) - .map_err(|e| rusqlite::Error::FromSqlConversionFailure( - 0, - rusqlite::types::Type::Text, - Box::new(e), - ))?; - - let onchain: IBtInfoOnchain = serde_json::from_str(&onchain_json) - .map_err(|e| rusqlite::Error::FromSqlConversionFailure( - 0, - rusqlite::types::Type::Text, - Box::new(e), - ))?; - - Ok(IBtInfo { - version, - nodes, - options, - versions, - onchain, - 
}) - } - ).optional().map_err(|e| BlocktankError::DataError { - error_details: format!("Failed to fetch info from database: {}", e), - })?; + [], + |row| { + let version: u32 = row.get(0)?; + let nodes_json: String = row.get(1)?; + let options_json: String = row.get(2)?; + let versions_json: String = row.get(3)?; + let onchain_json: String = row.get(4)?; + + let nodes: Vec = serde_json::from_str(&nodes_json).map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 0, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?; + + let options: IBtInfoOptions = + serde_json::from_str(&options_json).map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 0, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?; + + let versions: IBtInfoVersions = + serde_json::from_str(&versions_json).map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 0, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?; + + let onchain: IBtInfoOnchain = + serde_json::from_str(&onchain_json).map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 0, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?; + + Ok(IBtInfo { + version, + nodes, + options, + versions, + onchain, + }) + }, + ) + .optional() + .map_err(|e| BlocktankError::DataError { + error_details: format!("Failed to fetch info from database: {}", e), + })?; Ok(result) } @@ -229,11 +269,11 @@ impl BlocktankDB { let params = Self::build_order_params(order)?; - let mut stmt = conn.prepare( - INSERT_ORDER_SQL - ).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = + conn.prepare(INSERT_ORDER_SQL) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; stmt.execute(rusqlite::params![ params.id, @@ -259,7 +299,8 @@ impl BlocktankDB { params.discount_json, params.updated_at, params.created_at, - ]).map_err(|e| BlocktankError::InsertError { + ]) + .map_err(|e| 
BlocktankError::InsertError { error_details: format!("Failed to insert order: {}", e), })?; @@ -268,16 +309,18 @@ impl BlocktankDB { pub async fn upsert_orders(&self, orders: &[IBtOrder]) -> Result<(), BlocktankError> { let mut conn = self.conn.lock().await; - let tx = conn.transaction().map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = conn + .transaction() + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to start transaction: {}", e), + })?; { - let mut stmt = tx.prepare( - INSERT_ORDER_SQL - ).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = + tx.prepare(INSERT_ORDER_SQL) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; for order in orders { let params = Self::build_order_params(order)?; @@ -306,7 +349,8 @@ impl BlocktankDB { params.discount_json, params.updated_at, params.created_at, - ]).map_err(|e| BlocktankError::InsertError { + ]) + .map_err(|e| BlocktankError::InsertError { error_details: format!("Failed to insert order {}: {}", params.id, e), })?; } @@ -321,24 +365,32 @@ impl BlocktankDB { fn build_order_params(order: &IBtOrder) -> Result { let channel_json = if let Some(channel) = &order.channel { - Some(serde_json::to_string(channel).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize channel: {}", e), + Some(serde_json::to_string(channel).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize channel: {}", e), + } })?) 
} else { None }; - let lsp_node_json = serde_json::to_string(&order.lsp_node).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize lsp_node: {}", e), + let lsp_node_json = serde_json::to_string(&order.lsp_node).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize lsp_node: {}", e), + } })?; - let payment_json = serde_json::to_string(&order.payment).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize payment: {}", e), + let payment_json = serde_json::to_string(&order.payment).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize payment: {}", e), + } })?; let discount_json = if let Some(discount) = &order.discount { - Some(serde_json::to_string(discount).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize discount: {}", e), + Some(serde_json::to_string(discount).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize discount: {}", e), + } })?) 
} else { None @@ -347,7 +399,11 @@ impl BlocktankDB { Ok(OrderInsertParams { id: order.id.clone(), state: format!("{:?}", order.state), - state2: order.state2.as_ref().map(|s| format!("{:?}", s)).unwrap_or_else(|| "".to_string()), + state2: order + .state2 + .as_ref() + .map(|s| format!("{:?}", s)) + .unwrap_or_else(|| "".to_string()), fee_sat: order.fee_sat, network_fee_sat: order.network_fee_sat, service_fee_sat: order.service_fee_sat, @@ -373,24 +429,32 @@ impl BlocktankDB { fn build_cjit_params(entry: &ICJitEntry) -> Result { let channel_json = if let Some(channel) = &entry.channel { - Some(serde_json::to_string(channel).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize channel: {}", e), + Some(serde_json::to_string(channel).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize channel: {}", e), + } })?) } else { None }; - let invoice_json = serde_json::to_string(&entry.invoice).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize invoice: {}", e), + let invoice_json = serde_json::to_string(&entry.invoice).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize invoice: {}", e), + } })?; - let lsp_node_json = serde_json::to_string(&entry.lsp_node).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize lsp_node: {}", e), + let lsp_node_json = serde_json::to_string(&entry.lsp_node).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize lsp_node: {}", e), + } })?; let discount_json = if let Some(discount) = &entry.discount { - Some(serde_json::to_string(discount).map_err(|e| BlocktankError::SerializationError { - error_details: format!("Failed to serialize discount: {}", e), + Some(serde_json::to_string(discount).map_err(|e| { + BlocktankError::SerializationError { + error_details: format!("Failed to serialize 
discount: {}", e), + } })?) } else { None @@ -431,16 +495,24 @@ impl BlocktankDB { client_node_id, channel_expiry_weeks, channel_expires_at, order_expires_at, lnurl, coupon_code, source, channel_data, lsp_node_data, payment_data, discount_data, updated_at, created_at - FROM orders WHERE 1=1" + FROM orders WHERE 1=1", ); let mut params: Vec> = Vec::new(); if let Some(ids) = order_ids { query.push_str(" AND id IN ("); - query.push_str(&std::iter::repeat("?").take(ids.len()).collect::>().join(",")); + query.push_str( + &std::iter::repeat("?") + .take(ids.len()) + .collect::>() + .join(","), + ); query.push(')'); - params.extend(ids.iter().map(|id| Box::new(id.clone()) as Box)); + params.extend( + ids.iter() + .map(|id| Box::new(id.clone()) as Box), + ); } if let Some(state) = filter { @@ -450,9 +522,11 @@ impl BlocktankDB { query.push_str(" ORDER BY created_at DESC"); - let mut stmt = conn.prepare(&query).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e) - })?; + let mut stmt = conn + .prepare(&query) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; let orders = stmt .query_map(rusqlite::params_from_iter(params), |row| { @@ -535,11 +609,11 @@ impl BlocktankDB { }) }) .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to execute query: {}", e) + error_details: format!("Failed to execute query: {}", e), })? 
.collect::, _>>() .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to process results: {}", e) + error_details: format!("Failed to process results: {}", e), })?; Ok(orders) @@ -556,12 +630,14 @@ impl BlocktankDB { lsp_node_data, payment_data, discount_data, updated_at, created_at FROM orders WHERE state2 IN ('Created', 'Paid') - ORDER BY created_at DESC" + ORDER BY created_at DESC", ); - let mut stmt = conn.prepare(&query).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e) - })?; + let mut stmt = conn + .prepare(&query) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; let orders = stmt .query_map([], |row| { @@ -644,11 +720,11 @@ impl BlocktankDB { }) }) .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to execute query: {}", e) + error_details: format!("Failed to execute query: {}", e), })? .collect::, _>>() .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to process results: {}", e) + error_details: format!("Failed to process results: {}", e), })?; Ok(orders) @@ -659,11 +735,11 @@ impl BlocktankDB { let params = Self::build_cjit_params(entry)?; - let mut stmt = conn.prepare( - INSERT_CJIT_SQL - ).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = + conn.prepare(INSERT_CJIT_SQL) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; stmt.execute(rusqlite::params![ params.id, @@ -684,7 +760,8 @@ impl BlocktankDB { params.discount_json, params.updated_at, params.created_at, - ]).map_err(|e| BlocktankError::InsertError { + ]) + .map_err(|e| BlocktankError::InsertError { error_details: format!("Failed to insert CJIT entry: {}", e), })?; @@ -693,16 +770,18 @@ impl BlocktankDB { pub async fn upsert_cjit_entries(&self, 
entries: &[ICJitEntry]) -> Result<(), BlocktankError> { let mut conn = self.conn.lock().await; - let tx = conn.transaction().map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = conn + .transaction() + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to start transaction: {}", e), + })?; { - let mut stmt = tx.prepare( - INSERT_CJIT_SQL - ).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e), - })?; + let mut stmt = + tx.prepare(INSERT_CJIT_SQL) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; for entry in entries { let params = Self::build_cjit_params(entry)?; @@ -726,7 +805,8 @@ impl BlocktankDB { params.discount_json, params.updated_at, params.created_at, - ]).map_err(|e| BlocktankError::InsertError { + ]) + .map_err(|e| BlocktankError::InsertError { error_details: format!("Failed to insert CJIT entry {}: {}", params.id, e), })?; } @@ -752,16 +832,24 @@ impl BlocktankDB { node_id, coupon_code, source, expires_at, invoice_data, channel_data, lsp_node_data, discount_data, updated_at, created_at - FROM cjit_entries WHERE 1=1" + FROM cjit_entries WHERE 1=1", ); let mut params: Vec> = Vec::new(); if let Some(ids) = entry_ids { query.push_str(" AND id IN ("); - query.push_str(&std::iter::repeat("?").take(ids.len()).collect::>().join(",")); + query.push_str( + &std::iter::repeat("?") + .take(ids.len()) + .collect::>() + .join(","), + ); query.push(')'); - params.extend(ids.iter().map(|id| Box::new(id.clone()) as Box)); + params.extend( + ids.iter() + .map(|id| Box::new(id.clone()) as Box), + ); } if let Some(state) = filter { @@ -771,9 +859,11 @@ impl BlocktankDB { query.push_str(" ORDER BY created_at DESC"); - let mut stmt = conn.prepare(&query).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", 
e) - })?; + let mut stmt = conn + .prepare(&query) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; let entries = stmt .query_map(rusqlite::params_from_iter(params), |row| { @@ -782,13 +872,14 @@ impl BlocktankDB { let lsp_node_json: String = row.get(14)?; let discount_json: Option = row.get(15)?; - let invoice: IBtBolt11Invoice = serde_json::from_str(&invoice_json).map_err(|e| { - rusqlite::Error::FromSqlConversionFailure( - 0, - rusqlite::types::Type::Text, - Box::new(e), - ) - })?; + let invoice: IBtBolt11Invoice = + serde_json::from_str(&invoice_json).map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 0, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?; let channel = if let Some(json) = channel_json { Some(serde_json::from_str(&json).map_err(|e| { @@ -824,11 +915,16 @@ impl BlocktankDB { Ok(ICJitEntry { id: row.get(0)?, - state: row.get::<_, String>(1)?.parse::().map_err(|e| rusqlite::Error::FromSqlConversionFailure( - 1, - rusqlite::types::Type::Text, - Box::new(e), - ))?, + state: row + .get::<_, String>(1)? + .parse::() + .map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 1, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?, fee_sat: row.get(2)?, network_fee_sat: row.get(3)?, service_fee_sat: row.get(4)?, @@ -848,11 +944,11 @@ impl BlocktankDB { }) }) .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to execute query: {}", e) + error_details: format!("Failed to execute query: {}", e), })? 
.collect::, _>>() .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to process results: {}", e) + error_details: format!("Failed to process results: {}", e), })?; Ok(entries) @@ -869,12 +965,14 @@ impl BlocktankDB { updated_at, created_at FROM cjit_entries WHERE state IN ('Created', 'Failed') - ORDER BY created_at DESC" + ORDER BY created_at DESC", ); - let mut stmt = conn.prepare(&query).map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to prepare statement: {}", e) - })?; + let mut stmt = conn + .prepare(&query) + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to prepare statement: {}", e), + })?; let entries = stmt .query_map([], |row| { @@ -883,13 +981,14 @@ impl BlocktankDB { let lsp_node_json: String = row.get(14)?; let discount_json: Option = row.get(15)?; - let invoice: IBtBolt11Invoice = serde_json::from_str(&invoice_json).map_err(|e| { - rusqlite::Error::FromSqlConversionFailure( - 0, - rusqlite::types::Type::Text, - Box::new(e), - ) - })?; + let invoice: IBtBolt11Invoice = + serde_json::from_str(&invoice_json).map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 0, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?; let channel = if let Some(json) = channel_json { Some(serde_json::from_str(&json).map_err(|e| { @@ -925,11 +1024,16 @@ impl BlocktankDB { Ok(ICJitEntry { id: row.get(0)?, - state: row.get::<_, String>(1)?.parse::().map_err(|e| rusqlite::Error::FromSqlConversionFailure( - 1, - rusqlite::types::Type::Text, - Box::new(e), - ))?, + state: row + .get::<_, String>(1)? 
+ .parse::() + .map_err(|e| { + rusqlite::Error::FromSqlConversionFailure( + 1, + rusqlite::types::Type::Text, + Box::new(e), + ) + })?, fee_sat: row.get(2)?, network_fee_sat: row.get(3)?, service_fee_sat: row.get(4)?, @@ -949,11 +1053,11 @@ impl BlocktankDB { }) }) .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to execute query: {}", e) + error_details: format!("Failed to execute query: {}", e), })? .collect::, _>>() .map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to process results: {}", e) + error_details: format!("Failed to process results: {}", e), })?; Ok(entries) @@ -975,10 +1079,11 @@ impl BlocktankDB { pub async fn remove_all_cjit_entries(&self) -> Result<(), BlocktankError> { let conn = self.conn.lock().await; - conn.execute("DELETE FROM cjit_entries", []) - .map_err(|e| BlocktankError::DatabaseError { + conn.execute("DELETE FROM cjit_entries", []).map_err(|e| { + BlocktankError::DatabaseError { error_details: format!("Failed to delete all CJIT entries: {}", e), - })?; + } + })?; Ok(()) } @@ -995,9 +1100,11 @@ impl BlocktankDB { pub async fn wipe_all(&self) -> Result<(), BlocktankError> { let mut conn = self.conn.lock().await; - let tx = conn.transaction().map_err(|e| BlocktankError::DatabaseError { - error_details: format!("Failed to start transaction: {}", e), - })?; + let tx = conn + .transaction() + .map_err(|e| BlocktankError::DatabaseError { + error_details: format!("Failed to start transaction: {}", e), + })?; tx.execute("DELETE FROM orders", []) .map_err(|e| BlocktankError::DatabaseError { @@ -1067,4 +1174,4 @@ struct CJitInsertParams { discount_json: Option, updated_at: String, created_at: String, -} \ No newline at end of file +} diff --git a/src/modules/blocktank/errors.rs b/src/modules/blocktank/errors.rs index e12c82d..1c51be6 100644 --- a/src/modules/blocktank/errors.rs +++ b/src/modules/blocktank/errors.rs @@ -1,51 +1,32 @@ -use thiserror::Error; use 
 crate::modules::blocktank::BtChannelOrderErrorType;
+use thiserror::Error;
 
 #[derive(uniffi::Error, Debug, Error)]
 pub enum BlocktankError {
     #[error("HTTP client error: {error_details}")]
-    HttpClient {
-        error_details: String
-    },
+    HttpClient { error_details: String },
 
     #[error("Blocktank error: {error_details}")]
-    BlocktankClient {
-        error_details: String,
-    },
+    BlocktankClient { error_details: String },
 
     #[error("Invalid Blocktank: {error_details}")]
-    InvalidBlocktank {
-        error_details: String,
-    },
+    InvalidBlocktank { error_details: String },
 
     #[error("Database initialization failed: {error_details}")]
-    InitializationError {
-        error_details: String,
-    },
+    InitializationError { error_details: String },
 
     #[error("Failed to insert blocktank: {error_details}")]
-    InsertError {
-        error_details: String,
-    },
+    InsertError { error_details: String },
 
     #[error("Failed to retrieve blocktanks: {error_details}")]
-    RetrievalError {
-        error_details: String,
-    },
+    RetrievalError { error_details: String },
 
     #[error("Invalid data format: {error_details}")]
-    DataError {
-        error_details: String,
-    },
+    DataError { error_details: String },
 
     #[error("Database connection error: {error_details}")]
-    ConnectionError {
-        error_details: String,
-    },
+    ConnectionError { error_details: String },
 
     #[error("Serialization error: {error_details}")]
-    SerializationError {
-        error_details: String,
-    },
-
+    SerializationError { error_details: String },
 
     #[error("Channel open error: {error_type:?} - {error_details}")]
     ChannelOpen {
@@ -54,24 +35,18 @@ pub enum BlocktankError {
     },
 
     #[error("Order state error: {error_details}")]
-    OrderState {
-        error_details: String
-    },
+    OrderState { error_details: String },
 
     #[error("Invalid parameter: {error_details}")]
-    InvalidParameter {
-        error_details: String
-    },
+    InvalidParameter { error_details: String },
 
     #[error("Database error: {error_details}")]
-    DatabaseError {
-        error_details: String,
-    },
+    DatabaseError { error_details: String },
 }
 
 impl From<serde_json::Error> for BlocktankError {
     fn from(err: serde_json::Error) -> Self {
         BlocktankError::SerializationError {
-            error_details: err.to_string()
+            error_details: err.to_string(),
         }
     }
 }
@@ -79,7 +54,7 @@ impl From<serde_json::Error> for BlocktankError {
 impl From<url::ParseError> for BlocktankError {
     fn from(err: url::ParseError) -> Self {
         BlocktankError::ConnectionError {
-            error_details: err.to_string()
+            error_details: err.to_string(),
         }
     }
 }
@@ -87,4 +62,4 @@ impl From<url::ParseError> for BlocktankError {
 #[derive(uniffi::Record, Debug)]
 pub struct ErrorData {
     pub error_details: String,
-}
\ No newline at end of file
+}
diff --git a/src/modules/blocktank/liquidity.rs b/src/modules/blocktank/liquidity.rs
index e34e1a0..39b147f 100644
--- a/src/modules/blocktank/liquidity.rs
+++ b/src/modules/blocktank/liquidity.rs
@@ -35,22 +35,23 @@ pub struct DefaultLspBalanceParams {
 
 /// Calculates channel liquidity options including default, min, and max LSP balance.
 /// Used for normal channel opening UI with existing channel deduction and 2% buffer.
-pub fn calculate_channel_liquidity_options(params: ChannelLiquidityParams) -> ChannelLiquidityOptions {
+pub fn calculate_channel_liquidity_options(
+    params: ChannelLiquidityParams,
+) -> ChannelLiquidityOptions {
     let threshold_1_sat = THRESHOLD_1_EUR * params.sats_per_eur;
     let threshold_2_sat = THRESHOLD_2_EUR * params.sats_per_eur;
     let default_lsp_target_sat = DEFAULT_LSP_TARGET_EUR * params.sats_per_eur;
 
     // Apply 2% buffer to max channel size (LSP limits fluctuate with network fees)
-    let max_channel_size_buffered = (params.max_channel_size_sat as f64 * MAX_CHANNEL_SIZE_BUFFER_PERCENT) as u64;
+    let max_channel_size_buffered =
+        (params.max_channel_size_sat as f64 * MAX_CHANNEL_SIZE_BUFFER_PERCENT) as u64;
 
     // Subtract existing channels from max (users have a total liquidity cap)
-    let max_channel_size = max_channel_size_buffered
-        .saturating_sub(params.existing_channels_total_sat);
+    let max_channel_size =
+        max_channel_size_buffered.saturating_sub(params.existing_channels_total_sat);
 
-    let min_lsp_balance = calc_min_lsp_balance(
-        params.client_balance_sat,
-        params.min_channel_size_sat,
-    );
+    let min_lsp_balance =
+        calc_min_lsp_balance(params.client_balance_sat, params.min_channel_size_sat);
 
     let max_lsp_balance = max_channel_size.saturating_sub(params.client_balance_sat);
 
diff --git a/src/modules/blocktank/mod.rs b/src/modules/blocktank/mod.rs
index 4b1cb5e..a223a91 100644
--- a/src/modules/blocktank/mod.rs
+++ b/src/modules/blocktank/mod.rs
@@ -1,13 +1,13 @@
+mod api;
 mod db;
-mod models;
 mod errors;
-mod types;
-mod api;
 mod liquidity;
+mod models;
 #[cfg(test)]
 mod tests;
+mod types;
 
-pub use models::BlocktankDB;
 pub use errors::BlocktankError;
-pub use types::*;
 pub use liquidity::*;
+pub use models::BlocktankDB;
+pub use types::*;
diff --git a/src/modules/blocktank/models.rs b/src/modules/blocktank/models.rs
index d483287..54b9456 100644
--- a/src/modules/blocktank/models.rs
+++ b/src/modules/blocktank/models.rs
@@ -1,6 +1,6 @@
+use rusqlite::Connection;
 use rust_blocktank_client::BlocktankClient;
 use tokio::sync::Mutex;
-use rusqlite::Connection;
 
 pub struct BlocktankDB {
     pub(crate) conn: Mutex<Connection>,
@@ -117,7 +117,6 @@ pub const TRIGGER_STATEMENTS: &[&str] = &[
     BEGIN
         UPDATE info SET is_current = 0;
     END",
-
     // Ensure single current version trigger - UPDATE
     "CREATE TRIGGER IF NOT EXISTS ensure_single_current_version
     BEFORE UPDATE ON info
@@ -125,7 +124,7 @@ pub const TRIGGER_STATEMENTS: &[&str] = &[
     BEGIN
         UPDATE info SET is_current = 0
         WHERE version != NEW.version;
-    END"
+    END",
 ];
 
 /// Database indexes for optimizing queries
@@ -141,10 +140,9 @@ pub const INDEX_STATEMENTS: &[&str] = &[
     "CREATE INDEX IF NOT EXISTS idx_orders_created_at ON orders(created_at DESC)",
     "CREATE INDEX IF NOT EXISTS idx_orders_updated_at ON orders(updated_at DESC)",
     "CREATE INDEX IF NOT EXISTS idx_orders_expires_at ON orders(order_expires_at DESC)",
-
     // CJIT entries indexes
     "CREATE INDEX IF NOT EXISTS idx_cjit_state ON cjit_entries(state)",
     "CREATE INDEX IF NOT EXISTS idx_cjit_node_state ON cjit_entries(node_id, state)",
     "CREATE INDEX IF NOT EXISTS idx_cjit_expires_at ON cjit_entries(expires_at DESC)",
-    "CREATE INDEX IF NOT EXISTS idx_cjit_created_at ON cjit_entries(created_at DESC)"
-];
\ No newline at end of file
+    "CREATE INDEX IF NOT EXISTS idx_cjit_created_at ON cjit_entries(created_at DESC)",
+];
diff --git a/src/modules/blocktank/tests.rs b/src/modules/blocktank/tests.rs
index c91187a..376e983 100644
--- a/src/modules/blocktank/tests.rs
+++ b/src/modules/blocktank/tests.rs
@@ -4,18 +4,20 @@ const STAGING_SERVER: &str = "https://api.stag.blocktank.to/blocktank/api/v2";
 
 #[cfg(test)]
 mod tests {
-    use rust_blocktank_client::*;
-    use crate::modules::blocktank::{BlocktankDB, BlocktankError};
+    use super::*;
     use crate::modules::blocktank::liquidity::{
-        calculate_channel_liquidity_options, ChannelLiquidityParams,
-        get_default_lsp_balance, DefaultLspBalanceParams,
+        calculate_channel_liquidity_options, get_default_lsp_balance, ChannelLiquidityParams,
+        DefaultLspBalanceParams,
     };
-    use super::*;
+    use crate::modules::blocktank::{BlocktankDB, BlocktankError};
+    use rust_blocktank_client::*;
 
     #[tokio::test]
     async fn test_upsert_info() {
         // Initialize in-memory database
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Create test data
         let test_info = IBtInfo {
@@ -58,11 +60,13 @@ mod tests {
         // Verify the insert
         {
             let conn = db.conn.lock().await;
-            let row = conn.query_row(
-                "SELECT version, is_current FROM info WHERE version = 1",
-                [],
-                |row| Ok((row.get::<_, u32>(0)?, row.get::<_, bool>(1)?))
-            ).unwrap();
+            let row = conn
+                .query_row(
+                    "SELECT version, is_current FROM info WHERE version = 1",
+                    [],
+                    |row| Ok((row.get::<_, u32>(0)?, row.get::<_, bool>(1)?)),
+                )
+                .unwrap();
 
             assert_eq!(row, (1, true));
         } // Lock is dropped here
@@ -80,7 +84,8 @@ mod tests {
         let conn = db.conn.lock().await;
 
         // Check version statuses
-        let rows: Vec<(u32, bool)> = conn.prepare("SELECT version, is_current FROM info ORDER BY version")
+        let rows: Vec<(u32, bool)> = conn
+            .prepare("SELECT version, is_current FROM info ORDER BY version")
             .unwrap()
             .query_map([], |row| Ok((row.get(0)?, row.get(1)?)))
             .unwrap()
@@ -92,11 +97,11 @@ mod tests {
         assert_eq!(rows[1], (2, true), "Version 2 should be current");
 
         // Verify JSON serialization
-        let node_data: String = conn.query_row(
-            "SELECT nodes FROM info WHERE version = 2",
-            [],
-            |row| row.get(0)
-        ).unwrap();
+        let node_data: String = conn
+            .query_row("SELECT nodes FROM info WHERE version = 2", [], |row| {
+                row.get(0)
+            })
+            .unwrap();
 
         let nodes: Vec<ILspNode> = serde_json::from_str(&node_data).unwrap();
         assert_eq!(nodes[0].alias, "updated_node");
@@ -106,11 +111,17 @@ mod tests {
     #[tokio::test]
     async fn test_fetch_and_store_info() {
         // Initialize in-memory database
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Test fetch and store
         let result = db.fetch_and_store_info().await;
-        assert!(result.is_ok(), "Failed to fetch and store info: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to fetch and store info: {:?}",
+            result.err()
+        );
 
         let info = result.unwrap();
 
@@ -119,20 +130,24 @@ mod tests {
         let conn = db.conn.lock().await;
 
         // Verify version is stored
-        let row = conn.query_row(
-            "SELECT version, is_current FROM info WHERE version = ?1",
-            [info.version],
-            |row| Ok((row.get::<_, u32>(0)?, row.get::<_, bool>(1)?))
-        ).unwrap();
+        let row = conn
+            .query_row(
+                "SELECT version, is_current FROM info WHERE version = ?1",
+                [info.version],
+                |row| Ok((row.get::<_, u32>(0)?, row.get::<_, bool>(1)?)),
+            )
+            .unwrap();
 
         assert_eq!(row.0, info.version);
         assert_eq!(row.1, true);
 
         // Verify JSON data
-        let nodes_json: String = conn.query_row(
-            "SELECT nodes FROM info WHERE version = ?1",
-            [info.version],
-            |row| row.get(0)
-        ).unwrap();
+        let nodes_json: String = conn
+            .query_row(
+                "SELECT nodes FROM info WHERE version = ?1",
+                [info.version],
+                |row| row.get(0),
+            )
+            .unwrap();
 
         let stored_nodes: Vec<ILspNode> = serde_json::from_str(&nodes_json).unwrap();
         assert_eq!(stored_nodes.len(), info.nodes.len());
@@ -148,7 +163,9 @@ mod tests {
     #[tokio::test]
     async fn test_fetch_and_store_info_error_handling() {
         // Initialize with invalid URL to test error handling
-        let db = BlocktankDB::new(":memory:", Some("http://invalid-url")).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some("http://invalid-url"))
+            .await
+            .unwrap();
 
         // Test fetch and store with invalid URL
         let result = db.fetch_and_store_info().await;
@@ -164,11 +181,9 @@ mod tests {
         // Verify no data was stored
         {
             let conn = db.conn.lock().await;
-            let count: u32 = conn.query_row(
-                "SELECT COUNT(*) FROM info",
-                [],
-                |row| row.get(0)
-            ).unwrap();
+            let count: u32 = conn
+                .query_row("SELECT COUNT(*) FROM info", [], |row| row.get(0))
+                .unwrap();
             assert_eq!(count, 0, "No data should be stored when fetch fails");
         }
     }
@@ -176,7 +191,9 @@ mod tests {
     #[tokio::test]
     async fn test_get_info() {
         // Initialize in-memory database
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Should return None when no info exists
         let empty_result = db.get_info().await.unwrap();
@@ -224,16 +241,23 @@ mod tests {
         assert_eq!(stored_info.version, test_info.version);
         assert_eq!(stored_info.nodes[0].alias, test_info.nodes[0].alias);
         assert_eq!(stored_info.nodes[0].pubkey, test_info.nodes[0].pubkey);
-        assert_eq!(stored_info.options.min_channel_size_sat, test_info.options.min_channel_size_sat);
+        assert_eq!(
+            stored_info.options.min_channel_size_sat,
+            test_info.options.min_channel_size_sat
+        );
         assert_eq!(stored_info.versions.http, test_info.versions.http);
         assert_eq!(stored_info.onchain.network, test_info.onchain.network);
-        assert_eq!(stored_info.onchain.fee_rates.fast, test_info.onchain.fee_rates.fast);
+        assert_eq!(
+            stored_info.onchain.fee_rates.fast,
+            test_info.onchain.fee_rates.fast
+        );
     }
 
-
     #[tokio::test]
     async fn test_upsert_order() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Get current timestamp in ISO 8601 format
         let now = chrono::Utc::now();
@@ -252,8 +276,8 @@ mod tests {
             zero_reserve: false,
             client_node_id: Some("client123".to_string()),
             channel_expiry_weeks: 2,
-            channel_expires_at: future.to_rfc3339(),  // Changed from integer to ISO string
-            order_expires_at: future.to_rfc3339(),  // Changed from integer to ISO string
+            channel_expires_at: future.to_rfc3339(), // Changed from integer to ISO string
+            order_expires_at: future.to_rfc3339(),   // Changed from integer to ISO string
             channel: None,
             lsp_node: Some(ILspNode {
                 alias: "test_node".to_string(),
@@ -269,8 +293,8 @@ mod tests {
             bolt11_invoice: Some(IBtBolt11Invoice {
                 request: "lnbc...".to_string(),
                 state: BtBolt11InvoiceState::Pending,
-                expires_at: future.to_rfc3339(),  // Changed from integer to ISO string
-                updated_at: now.to_rfc3339(),  // Changed from integer to ISO string
+                expires_at: future.to_rfc3339(), // Changed from integer to ISO string
+                updated_at: now.to_rfc3339(),    // Changed from integer to ISO string
             }),
             onchain: Some(IBtOnchainTransactions {
                 address: "bc1...".to_string(),
@@ -284,8 +308,8 @@ mod tests {
             coupon_code: None,
             source: None,
             discount: None,
-            updated_at: now.to_rfc3339(),  // Changed from integer to ISO string
-            created_at: now.to_rfc3339(),  // Changed from integer to ISO string
+            updated_at: now.to_rfc3339(), // Changed from integer to ISO string
+            created_at: now.to_rfc3339(), // Changed from integer to ISO string
         };
 
         // Test initial insert
@@ -295,33 +319,49 @@ mod tests {
         // Verify the insert
         {
             let conn = db.conn.lock().await;
-            let row = conn.query_row(
-                "SELECT id, state, fee_sat FROM orders WHERE id = ?1",
-                [&test_order.id],
-                |row| Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?, row.get::<_, u64>(2)?))
-            ).unwrap();
+            let row = conn
+                .query_row(
+                    "SELECT id, state, fee_sat FROM orders WHERE id = ?1",
+                    [&test_order.id],
+                    |row| {
+                        Ok((
+                            row.get::<_, String>(0)?,
+                            row.get::<_, String>(1)?,
+                            row.get::<_, u64>(2)?,
+                        ))
+                    },
+                )
+                .unwrap();
 
             assert_eq!(row.0, test_order.id);
             assert_eq!(row.1, format!("{:?}", test_order.state));
             assert_eq!(row.2, test_order.fee_sat);
 
             // Verify JSON serialization
-            let lsp_node_json: String = conn.query_row(
-                "SELECT lsp_node_data FROM orders WHERE id = ?1",
-                [&test_order.id],
-                |row| row.get(0)
-            ).unwrap();
+            let lsp_node_json: String = conn
+                .query_row(
+                    "SELECT lsp_node_data FROM orders WHERE id = ?1",
+                    [&test_order.id],
+                    |row| row.get(0),
+                )
+                .unwrap();
 
             let stored_lsp_node: ILspNode = serde_json::from_str(&lsp_node_json).unwrap();
-            assert_eq!(stored_lsp_node.alias, test_order.lsp_node.as_ref().unwrap().alias.clone());
-            assert_eq!(stored_lsp_node.pubkey, test_order.lsp_node.as_ref().unwrap().pubkey.clone());
+            assert_eq!(
+                stored_lsp_node.alias,
+                test_order.lsp_node.as_ref().unwrap().alias.clone()
+            );
+            assert_eq!(
+                stored_lsp_node.pubkey,
+                test_order.lsp_node.as_ref().unwrap().pubkey.clone()
+            );
         }
 
         // Test update
         let mut updated_order = test_order.clone();
         updated_order.fee_sat = 2000;
         updated_order.state = BtOrderState::Open;
-        updated_order.updated_at = chrono::Utc::now().to_rfc3339();  // Update timestamp
+        updated_order.updated_at = chrono::Utc::now().to_rfc3339(); // Update timestamp
 
         let result = db.upsert_order(&updated_order).await;
         assert!(result.is_ok(), "Failed to update order: {:?}", result.err());
@@ -329,11 +369,13 @@ mod tests {
         // Verify the update
         {
             let conn = db.conn.lock().await;
-            let row = conn.query_row(
-                "SELECT state, fee_sat FROM orders WHERE id = ?1",
-                [&updated_order.id],
-                |row| Ok((row.get::<_, String>(0)?, row.get::<_, u64>(1)?))
-            ).unwrap();
+            let row = conn
+                .query_row(
+                    "SELECT state, fee_sat FROM orders WHERE id = ?1",
+                    [&updated_order.id],
+                    |row| Ok((row.get::<_, String>(0)?, row.get::<_, u64>(1)?)),
+                )
+                .unwrap();
 
             assert_eq!(row.0, format!("{:?}", updated_order.state));
             assert_eq!(row.1, updated_order.fee_sat);
@@ -342,7 +384,9 @@ mod tests {
 
     #[tokio::test]
     async fn test_upsert_orders() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Create multiple test orders
         let mut orders = Vec::new();
@@ -355,7 +399,11 @@ mod tests {
 
         // Test bulk insert
        let result = db.upsert_orders(&orders).await;
-        assert!(result.is_ok(), "Failed to bulk insert orders: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to bulk insert orders: {:?}",
+            result.err()
+        );
 
         // Verify all orders were inserted
         let stored_orders = db.get_orders(None, None).await.unwrap();
@@ -364,7 +412,8 @@ mod tests {
         // Verify each order's data is correct
         for i in 1..=5 {
             let order_id = format!("bulk_order_{}", i);
-            let stored_order = stored_orders.iter()
+            let stored_order = stored_orders
+                .iter()
                 .find(|o| o.id == order_id)
                 .expect(&format!("Order {} not found", order_id));
 
@@ -380,38 +429,46 @@ mod tests {
 
         // Test bulk update - modify some orders
         let mut updated_orders = orders.clone();
-        updated_orders[0].fee_sat = 9999;  // Update first order
-        updated_orders[1].state = BtOrderState::Open;  // Update second order
+        updated_orders[0].fee_sat = 9999; // Update first order
+        updated_orders[1].state = BtOrderState::Open; // Update second order
         updated_orders[1].state2 = Some(BtOrderState2::Executed);
-        updated_orders[2].lsp_balance_sat = 99999;  // Update third order
+        updated_orders[2].lsp_balance_sat = 99999; // Update third order
         updated_orders[2].updated_at = chrono::Utc::now().to_rfc3339();
 
         // Bulk update
         let result = db.upsert_orders(&updated_orders).await;
-        assert!(result.is_ok(), "Failed to bulk update orders: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to bulk update orders: {:?}",
+            result.err()
+        );
 
         // Verify updates were applied
         let stored_orders_after = db.get_orders(None, None).await.unwrap();
         assert_eq!(stored_orders_after.len(), 5, "Should still have 5 orders");
 
-        let updated_order_1 = stored_orders_after.iter()
+        let updated_order_1 = stored_orders_after
+            .iter()
            .find(|o| o.id == "bulk_order_1")
            .expect("Order 1 not found");
         assert_eq!(updated_order_1.fee_sat, 9999);
 
-        let updated_order_2 = stored_orders_after.iter()
+        let updated_order_2 = stored_orders_after
+            .iter()
             .find(|o| o.id == "bulk_order_2")
             .expect("Order 2 not found");
         assert_eq!(updated_order_2.state, BtOrderState::Open);
         assert_eq!(updated_order_2.state2, Some(BtOrderState2::Executed));
 
-        let updated_order_3 = stored_orders_after.iter()
+        let updated_order_3 = stored_orders_after
+            .iter()
             .find(|o| o.id == "bulk_order_3")
             .expect("Order 3 not found");
         assert_eq!(updated_order_3.lsp_balance_sat, 99999);
 
         // Verify other orders weren't affected
-        let updated_order_4 = stored_orders_after.iter()
+        let updated_order_4 = stored_orders_after
+            .iter()
             .find(|o| o.id == "bulk_order_4")
             .expect("Order 4 not found");
         assert_eq!(updated_order_4.fee_sat, 4000);
@@ -420,7 +477,9 @@ mod tests {
 
     #[tokio::test]
     async fn test_upsert_orders_empty() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Test with empty vector
         let result = db.upsert_orders(&[]).await;
@@ -433,7 +492,9 @@ mod tests {
 
     #[tokio::test]
     async fn test_upsert_orders_large_batch() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Create a larger batch of orders to test performance
         let mut orders = Vec::new();
@@ -447,7 +508,11 @@ mod tests {
         let start = std::time::Instant::now();
         let result = db.upsert_orders(&orders).await;
         let bulk_duration = start.elapsed();
-        assert!(result.is_ok(), "Failed to bulk insert orders: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to bulk insert orders: {:?}",
+            result.err()
+        );
 
         // Verify all orders were inserted
         let stored_orders = db.get_orders(None, None).await.unwrap();
@@ -456,7 +521,8 @@ mod tests {
         // Verify a sample of orders
         for i in (1..=50).step_by(10) {
             let order_id = format!("large_batch_order_{}", i);
-            let stored_order = stored_orders.iter()
+            let stored_order = stored_orders
+                .iter()
                 .find(|o| o.id == order_id)
                 .expect(&format!("Order {} not found", order_id));
             assert_eq!(stored_order.fee_sat, 500 * i as u64);
@@ -467,7 +533,9 @@ mod tests {
 
     #[tokio::test]
     async fn test_create_and_store_order() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         let options = CreateOrderOptions {
             coupon_code: "".to_string(),
@@ -478,18 +546,23 @@ mod tests {
         };
 
         let result = db.create_and_store_order(100000, 4, Some(options)).await;
-        assert!(result.is_ok(), "Failed to create and store order: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to create and store order: {:?}",
+            result.err()
+        );
 
         let order = result.unwrap();
         assert_eq!(order.lsp_balance_sat, 100000);
         assert_eq!(order.client_balance_sat, 0);
-
     }
 
     #[tokio::test]
     async fn test_refresh_orders() {
         // Initialize in-memory database
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Create actual orders through the API
         let options = CreateOrderOptions {
@@ -503,7 +576,9 @@ mod tests {
 
         // Create two real orders
         println!("Creating first test order...");
-        let order1 = db.create_and_store_order(100000, 4, Some(options.clone())).await
+        let order1 = db
+            .create_and_store_order(100000, 4, Some(options.clone()))
+            .await
             .expect("Failed to create first order");
         println!("First order created with ID: {}", order1.id);
 
@@ -512,7 +587,9 @@ mod tests {
         tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
 
         println!("Creating second test order...");
-        let order2 = db.create_and_store_order(150000, 4, Some(options.clone())).await
+        let order2 = db
+            .create_and_store_order(150000, 4, Some(options.clone()))
+            .await
             .expect("Failed to create second order");
         println!("Second order created with ID: {}", order2.id);
 
@@ -549,15 +626,19 @@ mod tests {
 
                     // Verify database state
                     let conn = db.conn.lock().await;
-                    let row = conn.query_row(
-                        "SELECT id, state, fee_sat FROM orders WHERE id = ?1",
-                        [&order.id],
-                        |row| Ok((
-                            row.get::<_, String>(0)?,
-                            row.get::<_, String>(1)?,
-                            row.get::<_, u64>(2)?
-                        ))
-                    ).unwrap();
+                    let row = conn
+                        .query_row(
+                            "SELECT id, state, fee_sat FROM orders WHERE id = ?1",
+                            [&order.id],
+                            |row| {
+                                Ok((
+                                    row.get::<_, String>(0)?,
+                                    row.get::<_, String>(1)?,
+                                    row.get::<_, u64>(2)?,
+                                ))
+                            },
+                        )
+                        .unwrap();
 
                     assert_eq!(row.0, order.id);
                     assert_eq!(row.1, format!("{:?}", order.state));
@@ -566,26 +647,31 @@ mod tests {
 
                 assert!(found_order1, "First order not found in refreshed orders");
                 assert!(found_order2, "Second order not found in refreshed orders");
-            },
+            }
             Err(e) => panic!("Failed to refresh orders: {:?}", e),
         }
 
         // Test error handling with invalid order IDs
         let invalid_ids = vec!["invalid_id_1".to_string()];
         let error_result = db.refresh_orders(&invalid_ids).await;
-        assert!(error_result.is_err(), "Expected error for invalid order IDs");
+        assert!(
+            error_result.is_err(),
+            "Expected error for invalid order IDs"
+        );
 
         match error_result {
             Err(BlocktankError::DataError { error_details }) => {
                 assert!(error_details.contains("Failed to fetch orders"));
-            },
+            }
             _ => panic!("Expected DataError"),
         }
     }
 
     #[tokio::test]
     async fn test_get_orders() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Get current timestamp and future timestamp as integers
         let now = chrono::Utc::now();
@@ -665,41 +751,85 @@ mod tests {
         assert_eq!(all_orders.len(), 3, "Should retrieve all 3 orders");
 
         // Test 2: Get specific orders by ID
-        let specific_orders = db.get_orders(
-            Some(&vec![test_order1.id.clone(), test_order2.id.clone()]),
-            None
-        ).await.unwrap();
-        assert_eq!(specific_orders.len(), 2, "Should retrieve 2 specific orders");
+        let specific_orders = db
+            .get_orders(
+                Some(&vec![test_order1.id.clone(), test_order2.id.clone()]),
+                None,
+            )
+            .await
+            .unwrap();
+        assert_eq!(
+            specific_orders.len(),
+            2,
+            "Should retrieve 2 specific orders"
+        );
         assert!(specific_orders.iter().any(|o| o.id == test_order1.id));
         assert!(specific_orders.iter().any(|o| o.id == test_order2.id));
 
         // Test 3: Filter by state
-        let paid_orders = db.get_orders(None, Some(BtOrderState2::Paid)).await.unwrap();
+        let paid_orders = db
+            .get_orders(None, Some(BtOrderState2::Paid))
+            .await
+            .unwrap();
         assert_eq!(paid_orders.len(), 1, "Should retrieve 1 paid order");
         assert_eq!(paid_orders[0].id, test_order3.id);
 
         // Test 4: Verify complex fields deserialization
         let order = &all_orders[0];
-        assert!(!order.lsp_node.as_ref().unwrap().connection_strings.is_empty());
-        assert_eq!(order.payment.as_ref().unwrap().state, test_order1.payment.as_ref().unwrap().state);
-        assert_eq!(order.payment.as_ref().unwrap().bolt11_invoice.as_ref().unwrap().state, test_order1.payment.as_ref().unwrap().bolt11_invoice.as_ref().unwrap().state);
+        assert!(!order
+            .lsp_node
+            .as_ref()
+            .unwrap()
+            .connection_strings
+            .is_empty());
+        assert_eq!(
+            order.payment.as_ref().unwrap().state,
+            test_order1.payment.as_ref().unwrap().state
+        );
+        assert_eq!(
+            order
+                .payment
+                .as_ref()
+                .unwrap()
+                .bolt11_invoice
+                .as_ref()
+                .unwrap()
+                .state,
+            test_order1
+                .payment
+                .as_ref()
+                .unwrap()
+                .bolt11_invoice
+                .as_ref()
+                .unwrap()
+                .state
+        );
 
         // Test 5: Test with non-existent order IDs
-        let non_existent = db.get_orders(
-            Some(&vec!["non_existent_id".to_string()]),
-            None
-        ).await.unwrap();
-        assert_eq!(non_existent.len(), 0, "Should return empty vector for non-existent IDs");
+        let non_existent = db
+            .get_orders(Some(&vec!["non_existent_id".to_string()]), None)
+            .await
+            .unwrap();
+        assert_eq!(
+            non_existent.len(),
+            0,
+            "Should return empty vector for non-existent IDs"
+        );
 
         // Test 6: Test with invalid state filter
-        let executed_orders = db.get_orders(None, Some(BtOrderState2::Executed)).await.unwrap();
+        let executed_orders = db
+            .get_orders(None, Some(BtOrderState2::Executed))
+            .await
+            .unwrap();
         assert_eq!(executed_orders.len(), 1, "Should retrieve 1 executed order");
         assert_eq!(executed_orders[0].id, test_order2.id);
     }
 
     #[tokio::test]
     async fn test_get_active_orders() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Get current timestamp and future timestamp
         let now = chrono::Utc::now();
@@ -709,7 +839,7 @@ mod tests {
         let test_order1 = IBtOrder {
             id: "test_order_1".to_string(),
             state: BtOrderState::Created,
-            state2: Some(BtOrderState2::Created),  // This should be included in active orders
+            state2: Some(BtOrderState2::Created), // This should be included in active orders
             fee_sat: 1000,
             network_fee_sat: 500,
             service_fee_sat: 500,
@@ -758,19 +888,19 @@ mod tests {
         let mut test_order2 = test_order1.clone();
         test_order2.id = "test_order_2".to_string();
         test_order2.state = BtOrderState::Open;
-        test_order2.state2 = Some(BtOrderState2::Paid);  // This should be included in active orders
+        test_order2.state2 = Some(BtOrderState2::Paid); // This should be included in active orders
         test_order2.fee_sat = 2000;
 
         let mut test_order3 = test_order1.clone();
         test_order3.id = "test_order_3".to_string();
         test_order3.state = BtOrderState::Closed;
-        test_order3.state2 = Some(BtOrderState2::Expired);  // This should NOT be included
+        test_order3.state2 = Some(BtOrderState2::Expired); // This should NOT be included
         test_order3.fee_sat = 3000;
 
         let mut test_order4 = test_order1.clone();
         test_order4.id = "test_order_4".to_string();
         test_order4.state = BtOrderState::Closed;
-        test_order4.state2 = Some(BtOrderState2::Executed);  // This should NOT be included
+        test_order4.state2 = Some(BtOrderState2::Executed); // This should NOT be included
         test_order4.fee_sat = 4000;
 
         // Insert all test orders
@@ -783,20 +913,31 @@ mod tests {
         let active_orders = db.get_active_orders().await.unwrap();
 
         // Verify we only got orders in Created or Paid state
-        assert_eq!(active_orders.len(), 2, "Should only retrieve orders in Created or Paid state");
+        assert_eq!(
+            active_orders.len(),
+            2,
+            "Should only retrieve orders in Created or Paid state"
+        );
 
         // Verify the specific orders we got back
-        let order_ids: Vec<String> = active_orders.iter()
-            .map(|o| o.id.clone())
-            .collect();
+        let order_ids: Vec<String> = active_orders.iter().map(|o| o.id.clone()).collect();
 
-        assert!(order_ids.contains(&test_order1.id), "Should contain the Created order");
-        assert!(order_ids.contains(&test_order2.id), "Should contain the Paid order");
+        assert!(
+            order_ids.contains(&test_order1.id),
+            "Should contain the Created order"
+        );
+        assert!(
+            order_ids.contains(&test_order2.id),
+            "Should contain the Paid order"
+        );
 
         // Verify the orders are in the correct state
         for order in active_orders {
             assert!(
-                matches!(order.state2, Some(BtOrderState2::Created) | Some(BtOrderState2::Paid)),
+                matches!(
+                    order.state2,
+                    Some(BtOrderState2::Created) | Some(BtOrderState2::Paid)
+                ),
                 "Order {} should be in Created or Paid state, but was in {:?} state",
                 order.id,
                 order.state2
@@ -816,7 +957,9 @@ mod tests {
     #[tokio::test]
     async fn test_refresh_active_orders() {
         // Initialize in-memory database
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // Create test orders with different states
         let options = CreateOrderOptions {
@@ -830,7 +973,9 @@ mod tests {
 
         // Create orders that should be active (Created and Paid states)
         println!("Creating first test order (Created state)...");
-        let order1 = db.create_and_store_order(100000, 4, Some(options.clone())).await
+        let order1 = db
+            .create_and_store_order(100000, 4, Some(options.clone()))
+            .await
             .expect("Failed to create first order");
         println!("First order created with ID: {}", order1.id);
 
@@ -838,7 +983,9 @@ mod tests {
         tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
 
         println!("Creating second test order (will be set to Paid state)...");
-        let order2 = db.create_and_store_order(150000, 4, Some(options.clone())).await
+        let order2 = db
+            .create_and_store_order(150000, 4, Some(options.clone()))
+            .await
             .expect("Failed to create second order");
         println!("Second order created with ID: {}", order2.id);
 
@@ -851,7 +998,11 @@ mod tests {
 
         match result {
             Ok(refreshed_orders) => {
-                assert_eq!(refreshed_orders.len(), 2, "Should have refreshed 2 active orders");
+                assert_eq!(
+                    refreshed_orders.len(),
+                    2,
+                    "Should have refreshed 2 active orders"
+                );
 
                 // Verify each refreshed order
                 let mut found_order1 = false;
@@ -873,7 +1024,10 @@ mod tests {
 
                     // Verify order is in an active state
                     assert!(
-                        matches!(order.state2, Some(BtOrderState2::Created) | Some(BtOrderState2::Paid)),
+                        matches!(
+                            order.state2,
+                            Some(BtOrderState2::Created) | Some(BtOrderState2::Paid)
+                        ),
                         "Order should be in Created or Paid state"
                    );
                 }
@@ -884,15 +1038,19 @@ mod tests {
                 // Verify database state
                 let conn = db.conn.lock().await;
                 for order in &refreshed_orders {
-                    let row = conn.query_row(
-                        "SELECT id, state2, fee_sat FROM orders WHERE id = ?1",
-                        [&order.id],
-                        |row| Ok((
-                            row.get::<_, String>(0)?,
-                            row.get::<_, String>(1)?,
-                            row.get::<_, u64>(2)?
-                        ))
-                    ).unwrap();
+                    let row = conn
+                        .query_row(
+                            "SELECT id, state2, fee_sat FROM orders WHERE id = ?1",
+                            [&order.id],
+                            |row| {
+                                Ok((
+                                    row.get::<_, String>(0)?,
+                                    row.get::<_, String>(1)?,
+                                    row.get::<_, u64>(2)?,
+                                ))
+                            },
+                        )
+                        .unwrap();
 
                     assert_eq!(row.0, order.id);
                     assert!(
@@ -900,27 +1058,31 @@ mod tests {
                         "Database order state should be Created or Paid"
                     );
                 }
-            },
+            }
             Err(e) => panic!("Failed to refresh active orders: {:?}", e),
         }
 
         // Test with no active orders
         // First, expire all orders to make them inactive
         let conn = db.conn.lock().await;
-        conn.execute(
-            "UPDATE orders SET state2 = 'Expired'",
-            [],
-        ).unwrap();
-        drop(conn);  // Release the lock
+        conn.execute("UPDATE orders SET state2 = 'Expired'", [])
+            .unwrap();
+        drop(conn); // Release the lock
 
         // Now test refresh_active_orders with no active orders
         let empty_result = db.refresh_active_orders().await.unwrap();
-        assert_eq!(empty_result.len(), 0, "Should return empty vec when no active orders exist");
+        assert_eq!(
+            empty_result.len(),
+            0,
+            "Should return empty vec when no active orders exist"
+        );
     }
 
     #[tokio::test]
     async fn test_get_min_zero_conf_tx_fee() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         // First create an order to get a valid order ID
         let options = CreateOrderOptions {
@@ -933,18 +1095,30 @@ mod tests {
         };
 
         println!("Creating test order...");
-        let order = db.create_and_store_order(100000, 4, Some(options)).await
+        let order = db
+            .create_and_store_order(100000, 4, Some(options))
+            .await
            .expect("Failed to create order");
         println!("Order created with ID: {}", order.id);
 
         // Test getting min zero conf fee
         let result = db.get_min_zero_conf_tx_fee(order.id).await;
-        assert!(result.is_ok(), "Failed to get min zero conf fee: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to get min zero conf fee: {:?}",
+            result.err()
+        );
 
         let fee_window = result.unwrap();
-        assert!(fee_window.sat_per_vbyte > 0.0, "sat_per_vbyte should be greater than 0");
-        assert!(!fee_window.validity_ends_at.is_empty(), "validity_ends_at should not be empty");
+        assert!(
+            fee_window.sat_per_vbyte > 0.0,
+            "sat_per_vbyte should be greater than 0"
+        );
+        assert!(
+            !fee_window.validity_ends_at.is_empty(),
+            "validity_ends_at should not be empty"
+        );
 
         // Test with invalid order ID
         let error_result = db.get_min_zero_conf_tx_fee("invalid_id".to_string()).await;
@@ -953,14 +1127,16 @@ mod tests {
         match error_result {
             Err(BlocktankError::DataError { error_details }) => {
                 assert!(error_details.contains("Failed to get minimum zero-conf transaction fee"));
-            },
+            }
             _ => panic!("Expected DataError"),
         }
     }
 
     #[tokio::test]
     async fn test_estimate_order_fee() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         let options = Some(CreateOrderOptions {
             client_balance_sat: 0,
@@ -973,27 +1149,42 @@ mod tests {
 
         // Test valid estimation
         let result = db.estimate_order_fee(100000, 4, options.clone()).await;
-        assert!(result.is_ok(), "Failed to estimate order fee: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to estimate order fee: {:?}",
+            result.err()
+        );
 
         let fee_estimate = result.unwrap();
-        assert!(fee_estimate.fee_sat > 0, "Fee estimate should be greater than 0");
-        assert!(fee_estimate.min_0_conf_tx_fee.sat_per_vbyte > 0.0, "min_0_conf_tx_fee.sat_per_vbyte should be greater than 0");
+        assert!(
+            fee_estimate.fee_sat > 0,
+            "Fee estimate should be greater than 0"
+        );
+        assert!(
+            fee_estimate.min_0_conf_tx_fee.sat_per_vbyte > 0.0,
+            "min_0_conf_tx_fee.sat_per_vbyte should be greater than 0"
+        );
 
         // Test with invalid parameters
         let error_result = db.estimate_order_fee(0, 0, None).await;
-        assert!(error_result.is_err(), "Expected error for invalid parameters");
+        assert!(
+            error_result.is_err(),
+            "Expected error for invalid parameters"
+        );
 
         match error_result {
             Err(BlocktankError::DataError { error_details }) => {
                 assert!(error_details.contains("Failed to estimate order fee"));
-            },
+            }
             _ => panic!("Expected DataError"),
         }
     }
 
     #[tokio::test]
     async fn test_estimate_order_fee_full() {
-        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap();
+        let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER))
+            .await
+            .unwrap();
 
         let options = Some(CreateOrderOptions {
             client_balance_sat: 20000,
@@ -1006,29 +1197,50 @@ mod tests {
 
         // Test valid full estimation
         let result = db.estimate_order_fee_full(100000, 4, options.clone()).await;
-        assert!(result.is_ok(), "Failed to estimate full order fee: {:?}", result.err());
+        assert!(
+            result.is_ok(),
+            "Failed to estimate full order fee: {:?}",
+            result.err()
+        );
 
         let fee_estimate = result.unwrap();
-        assert!(fee_estimate.fee_sat > 0, "Fee estimate should be greater than 0");
-        assert!(fee_estimate.network_fee_sat > 0, "Network fee should be greater than 0");
-        assert!(fee_estimate.service_fee_sat > 0, "Service fee should be greater than 0");
-        assert!(fee_estimate.min_0_conf_tx_fee.sat_per_vbyte > 0.0, "min_0_conf_tx_fee.sat_per_vbyte should be greater than 0");
+        assert!(
+            fee_estimate.fee_sat > 0,
+            "Fee estimate should be greater than 0"
+        );
+        assert!(
+            fee_estimate.network_fee_sat > 0,
+            "Network fee should be greater than 0"
+        );
+        assert!(
+            fee_estimate.service_fee_sat > 0,
+            "Service fee should be greater than 0"
+        );
+        assert!(
+            fee_estimate.min_0_conf_tx_fee.sat_per_vbyte > 0.0,
+            "min_0_conf_tx_fee.sat_per_vbyte should be greater than 0"
+        );
 
         // Test with invalid parameters
         let error_result = db.estimate_order_fee_full(0, 0, None).await;
-        assert!(error_result.is_err(), "Expected error for invalid parameters");
+        assert!(
+            error_result.is_err(),
+            "Expected error for invalid parameters"
+        );
 
         match error_result {
            Err(BlocktankError::DataError { error_details }) => {
                assert!(error_details.contains("Failed to
estimate full order fee")); - }, + } _ => panic!("Expected DataError"), } } #[tokio::test] async fn test_upsert_cjit_entry() { - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Get current timestamp in ISO 8601 format let now = chrono::Utc::now(); @@ -1067,27 +1279,41 @@ mod tests { // Test initial insert let result = db.upsert_cjit_entry(&test_entry).await; - assert!(result.is_ok(), "Failed to insert CJIT entry: {:?}", result.err()); + assert!( + result.is_ok(), + "Failed to insert CJIT entry: {:?}", + result.err() + ); // Verify the insert { let conn = db.conn.lock().await; - let row = conn.query_row( - "SELECT id, state, fee_sat FROM cjit_entries WHERE id = ?1", - [&test_entry.id], - |row| Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?, row.get::<_, u64>(2)?)) - ).unwrap(); + let row = conn + .query_row( + "SELECT id, state, fee_sat FROM cjit_entries WHERE id = ?1", + [&test_entry.id], + |row| { + Ok(( + row.get::<_, String>(0)?, + row.get::<_, String>(1)?, + row.get::<_, u64>(2)?, + )) + }, + ) + .unwrap(); assert_eq!(row.0, test_entry.id); assert_eq!(row.1, format!("{:?}", test_entry.state)); assert_eq!(row.2, test_entry.fee_sat); // Verify JSON serialization - let lsp_node_json: String = conn.query_row( - "SELECT lsp_node_data FROM cjit_entries WHERE id = ?1", - [&test_entry.id], - |row| row.get(0) - ).unwrap(); + let lsp_node_json: String = conn + .query_row( + "SELECT lsp_node_data FROM cjit_entries WHERE id = ?1", + [&test_entry.id], + |row| row.get(0), + ) + .unwrap(); let stored_lsp_node: ILspNode = serde_json::from_str(&lsp_node_json).unwrap(); assert_eq!(stored_lsp_node.alias, test_entry.lsp_node.alias); @@ -1101,16 +1327,22 @@ mod tests { updated_entry.updated_at = chrono::Utc::now().to_rfc3339(); let result = db.upsert_cjit_entry(&updated_entry).await; - assert!(result.is_ok(), "Failed to update CJIT entry: {:?}", result.err()); 
+ assert!( + result.is_ok(), + "Failed to update CJIT entry: {:?}", + result.err() + ); // Verify the update { let conn = db.conn.lock().await; - let row = conn.query_row( - "SELECT state, fee_sat FROM cjit_entries WHERE id = ?1", - [&updated_entry.id], - |row| Ok((row.get::<_, String>(0)?, row.get::<_, u64>(1)?)) - ).unwrap(); + let row = conn + .query_row( + "SELECT state, fee_sat FROM cjit_entries WHERE id = ?1", + [&updated_entry.id], + |row| Ok((row.get::<_, String>(0)?, row.get::<_, u64>(1)?)), + ) + .unwrap(); assert_eq!(row.0, format!("{:?}", updated_entry.state)); assert_eq!(row.1, updated_entry.fee_sat); @@ -1119,7 +1351,9 @@ mod tests { #[tokio::test] async fn test_upsert_cjit_entries() { - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Create multiple test CJIT entries let mut entries = Vec::new(); @@ -1132,7 +1366,11 @@ mod tests { // Test bulk insert let result = db.upsert_cjit_entries(&entries).await; - assert!(result.is_ok(), "Failed to bulk insert CJIT entries: {:?}", result.err()); + assert!( + result.is_ok(), + "Failed to bulk insert CJIT entries: {:?}", + result.err() + ); // Verify all entries were inserted let stored_entries = db.get_cjit_entries(None, None).await.unwrap(); @@ -1141,7 +1379,8 @@ mod tests { // Verify each entry's data is correct for i in 1..=5 { let entry_id = format!("bulk_cjit_{}", i); - let stored_entry = stored_entries.iter() + let stored_entry = stored_entries + .iter() .find(|e| e.id == entry_id) .expect(&format!("Entry {} not found", entry_id)); @@ -1156,36 +1395,44 @@ mod tests { // Test bulk update - modify some entries let mut updated_entries = entries.clone(); - updated_entries[0].fee_sat = 9999; // Update first entry - updated_entries[1].state = CJitStateEnum::Completed; // Update second entry - updated_entries[2].channel_size_sat = 99999; // Update third entry + updated_entries[0].fee_sat = 9999; // 
Update first entry + updated_entries[1].state = CJitStateEnum::Completed; // Update second entry + updated_entries[2].channel_size_sat = 99999; // Update third entry updated_entries[2].updated_at = chrono::Utc::now().to_rfc3339(); // Bulk update let result = db.upsert_cjit_entries(&updated_entries).await; - assert!(result.is_ok(), "Failed to bulk update CJIT entries: {:?}", result.err()); + assert!( + result.is_ok(), + "Failed to bulk update CJIT entries: {:?}", + result.err() + ); // Verify updates were applied let stored_entries_after = db.get_cjit_entries(None, None).await.unwrap(); assert_eq!(stored_entries_after.len(), 5, "Should still have 5 entries"); - let updated_entry_1 = stored_entries_after.iter() + let updated_entry_1 = stored_entries_after + .iter() .find(|e| e.id == "bulk_cjit_1") .expect("Entry 1 not found"); assert_eq!(updated_entry_1.fee_sat, 9999); - let updated_entry_2 = stored_entries_after.iter() + let updated_entry_2 = stored_entries_after + .iter() .find(|e| e.id == "bulk_cjit_2") .expect("Entry 2 not found"); assert_eq!(updated_entry_2.state, CJitStateEnum::Completed); - let updated_entry_3 = stored_entries_after.iter() + let updated_entry_3 = stored_entries_after + .iter() .find(|e| e.id == "bulk_cjit_3") .expect("Entry 3 not found"); assert_eq!(updated_entry_3.channel_size_sat, 99999); // Verify other entries weren't affected - let updated_entry_4 = stored_entries_after.iter() + let updated_entry_4 = stored_entries_after + .iter() .find(|e| e.id == "bulk_cjit_4") .expect("Entry 4 not found"); assert_eq!(updated_entry_4.fee_sat, 4000); @@ -1194,7 +1441,9 @@ mod tests { #[tokio::test] async fn test_upsert_cjit_entries_empty() { - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Test with empty vector let result = db.upsert_cjit_entries(&[]).await; @@ -1207,7 +1456,9 @@ mod tests { #[tokio::test] async fn test_get_cjit_entries() 
{ - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Get current timestamp and future timestamp let now = chrono::Utc::now(); @@ -1267,16 +1518,26 @@ mod tests { assert_eq!(all_entries.len(), 3, "Should retrieve all 3 entries"); // Test 2: Get specific entries by ID - let specific_entries = db.get_cjit_entries( - Some(&vec![test_entry1.id.clone(), test_entry2.id.clone()]), - None - ).await.unwrap(); - assert_eq!(specific_entries.len(), 2, "Should retrieve 2 specific entries"); + let specific_entries = db + .get_cjit_entries( + Some(&vec![test_entry1.id.clone(), test_entry2.id.clone()]), + None, + ) + .await + .unwrap(); + assert_eq!( + specific_entries.len(), + 2, + "Should retrieve 2 specific entries" + ); assert!(specific_entries.iter().any(|e| e.id == test_entry1.id)); assert!(specific_entries.iter().any(|e| e.id == test_entry2.id)); // Test 3: Filter by state - let created_entries = db.get_cjit_entries(None, Some(CJitStateEnum::Created)).await.unwrap(); + let created_entries = db + .get_cjit_entries(None, Some(CJitStateEnum::Created)) + .await + .unwrap(); assert_eq!(created_entries.len(), 1, "Should retrieve 1 created entry"); assert_eq!(created_entries[0].id, test_entry1.id); @@ -1286,15 +1547,26 @@ mod tests { assert_eq!(entry.invoice.state, test_entry1.invoice.state); // Test 5: Test with non-existent entry IDs - let non_existent = db.get_cjit_entries( - Some(&vec!["non_existent_id".to_string()]), - None - ).await.unwrap(); - assert_eq!(non_existent.len(), 0, "Should return empty vector for non-existent IDs"); + let non_existent = db + .get_cjit_entries(Some(&vec!["non_existent_id".to_string()]), None) + .await + .unwrap(); + assert_eq!( + non_existent.len(), + 0, + "Should return empty vector for non-existent IDs" + ); // Test 6: Test with completed state filter - let completed_entries = db.get_cjit_entries(None, 
Some(CJitStateEnum::Completed)).await.unwrap(); - assert_eq!(completed_entries.len(), 1, "Should retrieve 1 completed entry"); + let completed_entries = db + .get_cjit_entries(None, Some(CJitStateEnum::Completed)) + .await + .unwrap(); + assert_eq!( + completed_entries.len(), + 1, + "Should retrieve 1 completed entry" + ); assert_eq!(completed_entries[0].id, test_entry2.id); // Test 7: Verify all fields are correctly loaded for a specific entry @@ -1302,8 +1574,14 @@ mod tests { assert_eq!(specific_entry.fee_sat, test_entry1.fee_sat); assert_eq!(specific_entry.network_fee_sat, test_entry1.network_fee_sat); assert_eq!(specific_entry.service_fee_sat, test_entry1.service_fee_sat); - assert_eq!(specific_entry.channel_size_sat, test_entry1.channel_size_sat); - assert_eq!(specific_entry.channel_expiry_weeks, test_entry1.channel_expiry_weeks); + assert_eq!( + specific_entry.channel_size_sat, + test_entry1.channel_size_sat + ); + assert_eq!( + specific_entry.channel_expiry_weeks, + test_entry1.channel_expiry_weeks + ); assert_eq!(specific_entry.node_id, test_entry1.node_id); assert_eq!(specific_entry.coupon_code, test_entry1.coupon_code); assert_eq!(specific_entry.source, test_entry1.source); @@ -1312,7 +1590,9 @@ mod tests { #[tokio::test] async fn test_cjit_entry_integration() { // Initialize database with staging server - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Create a test CJIT entry let options = Some(CreateCjitOptions { @@ -1327,17 +1607,23 @@ mod tests { let node_id = "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839"; // Example node ID let channel_expiry_weeks = 4; - let result = db.create_cjit_entry( - channel_size_sat, - invoice_sat, - invoice_description, - node_id, - channel_expiry_weeks, - options.clone() - ).await; + let result = db + .create_cjit_entry( + channel_size_sat, + invoice_sat, + invoice_description, + 
node_id, + channel_expiry_weeks, + options.clone(), + ) + .await; // Verify creation was successful - assert!(result.is_ok(), "Failed to create CJIT entry: {:?}", result.err()); + assert!( + result.is_ok(), + "Failed to create CJIT entry: {:?}", + result.err() + ); let cjit_entry = result.unwrap(); println!("Created CJIT entry with ID: {}", cjit_entry.id); @@ -1359,12 +1645,16 @@ mod tests { assert_eq!(cjit_entry.source, Some("integration_test".to_string())); // Test fetching the created entry - let fetched_entries = db.get_cjit_entries( - Some(&vec![cjit_entry.id.clone()]), - None - ).await.unwrap(); - - assert_eq!(fetched_entries.len(), 1, "Should retrieve exactly one entry"); + let fetched_entries = db + .get_cjit_entries(Some(&vec![cjit_entry.id.clone()]), None) + .await + .unwrap(); + + assert_eq!( + fetched_entries.len(), + 1, + "Should retrieve exactly one entry" + ); let fetched_entry = &fetched_entries[0]; // Verify fetched entry matches created entry @@ -1375,53 +1665,71 @@ mod tests { assert_eq!(fetched_entry.state, CJitStateEnum::Created); // Test filtering by state - let created_entries = db.get_cjit_entries(None, Some(CJitStateEnum::Created)).await.unwrap(); - assert!(created_entries.iter().any(|e| e.id == cjit_entry.id), - "Should find the entry when filtering by Created state"); + let created_entries = db + .get_cjit_entries(None, Some(CJitStateEnum::Created)) + .await + .unwrap(); + assert!( + created_entries.iter().any(|e| e.id == cjit_entry.id), + "Should find the entry when filtering by Created state" + ); // Verify no entries in other states - let completed_entries = db.get_cjit_entries(None, Some(CJitStateEnum::Completed)).await.unwrap(); - assert!(!completed_entries.iter().any(|e| e.id == cjit_entry.id), - "Should not find the entry when filtering by Completed state"); + let completed_entries = db + .get_cjit_entries(None, Some(CJitStateEnum::Completed)) + .await + .unwrap(); + assert!( + !completed_entries.iter().any(|e| e.id == 
cjit_entry.id), + "Should not find the entry when filtering by Completed state" + ); println!("CJIT entry integration test completed successfully"); } #[tokio::test] async fn test_cjit_entry_invalid_params() { - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Test with invalid channel size (too small) - let result = db.create_cjit_entry( - 1000, // Very small channel size - 100, - "Test invalid channel size", - "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", - 4, - None - ).await; + let result = db + .create_cjit_entry( + 1000, // Very small channel size + 100, + "Test invalid channel size", + "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", + 4, + None, + ) + .await; assert!(result.is_err(), "Should fail with too small channel size"); // Test with invalid node ID - let result = db.create_cjit_entry( - 100000, - 1000, - "Test invalid node ID", - "invalid_node_id", - 4, - None - ).await; + let result = db + .create_cjit_entry( + 100000, + 1000, + "Test invalid node ID", + "invalid_node_id", + 4, + None, + ) + .await; assert!(result.is_err(), "Should fail with invalid node ID"); // Test with invalid expiry weeks (too long) - let result = db.create_cjit_entry( - 100000, - 1000, - "Test invalid expiry", - "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", - 100, // Too many weeks - None - ).await; + let result = db + .create_cjit_entry( + 100000, + 1000, + "Test invalid expiry", + "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", + 100, // Too many weeks + None, + ) + .await; assert!(result.is_err(), "Should fail with too many expiry weeks"); println!("CJIT entry invalid parameters test completed successfully"); @@ -1429,7 +1737,9 @@ mod tests { #[tokio::test] async fn test_get_active_cjit_entries() { - let db = BlocktankDB::new(":memory:", 
Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Get current timestamp and future timestamp let now = chrono::Utc::now(); @@ -1438,7 +1748,7 @@ mod tests { // Create test entries with different states let test_entry1 = ICJitEntry { id: "test_cjit_1".to_string(), - state: CJitStateEnum::Created, // This should be included in active entries + state: CJitStateEnum::Created, // This should be included in active entries fee_sat: 1000, network_fee_sat: 500, service_fee_sat: 500, @@ -1469,20 +1779,20 @@ mod tests { let mut test_entry2 = test_entry1.clone(); test_entry2.id = "test_cjit_2".to_string(); - test_entry2.state = CJitStateEnum::Failed; // This should be included in active entries + test_entry2.state = CJitStateEnum::Failed; // This should be included in active entries test_entry2.fee_sat = 2000; test_entry2.coupon_code = "TEST2".to_string(); test_entry2.channel_open_error = Some("Connection failed".to_string()); let mut test_entry3 = test_entry1.clone(); test_entry3.id = "test_cjit_3".to_string(); - test_entry3.state = CJitStateEnum::Completed; // This should NOT be included + test_entry3.state = CJitStateEnum::Completed; // This should NOT be included test_entry3.fee_sat = 3000; test_entry3.coupon_code = "TEST3".to_string(); let mut test_entry4 = test_entry1.clone(); test_entry4.id = "test_cjit_4".to_string(); - test_entry4.state = CJitStateEnum::Expired; // This should NOT be included + test_entry4.state = CJitStateEnum::Expired; // This should NOT be included test_entry4.fee_sat = 4000; test_entry4.coupon_code = "TEST4".to_string(); @@ -1496,15 +1806,23 @@ mod tests { let active_entries = db.get_active_cjit_entries().await.unwrap(); // Verify we only got entries in Created or Failed state - assert_eq!(active_entries.len(), 2, "Should only retrieve entries in Created or Failed state"); + assert_eq!( + active_entries.len(), + 2, + "Should only retrieve entries in Created or Failed state" 
+ ); // Verify the specific entries we got back - let entry_ids: Vec<String> = active_entries.iter() - .map(|e| e.id.clone()) - .collect(); + let entry_ids: Vec<String> = active_entries.iter().map(|e| e.id.clone()).collect(); - assert!(entry_ids.contains(&test_entry1.id), "Should contain the Created entry"); - assert!(entry_ids.contains(&test_entry2.id), "Should contain the Failed entry"); + assert!( + entry_ids.contains(&test_entry1.id), + "Should contain the Created entry" + ); + assert!( + entry_ids.contains(&test_entry2.id), + "Should contain the Failed entry" + ); // Verify the entries are in the correct state for entry in active_entries { @@ -1523,7 +1841,10 @@ } else if entry.id == test_entry2.id { assert_eq!(entry.state, CJitStateEnum::Failed); assert_eq!(entry.fee_sat, 2000); - assert_eq!(entry.channel_open_error, Some("Connection failed".to_string())); + assert_eq!( + entry.channel_open_error, + Some("Connection failed".to_string()) + ); } // Verify LSP node data is loaded correctly @@ -1536,21 +1857,25 @@ // Test with no active entries // First, mark all entries as completed to make them inactive let conn = db.conn.lock().await; - conn.execute( - "UPDATE cjit_entries SET state = 'Completed'", - [], - ).unwrap(); - drop(conn); // Release the lock + conn.execute("UPDATE cjit_entries SET state = 'Completed'", []) + .unwrap(); + drop(conn); // Release the lock // Now test get_active_cjit_entries with no active entries let empty_result = db.get_active_cjit_entries().await.unwrap(); - assert_eq!(empty_result.len(), 0, "Should return empty vec when no active entries exist"); + assert_eq!( + empty_result.len(), + 0, + "Should return empty vec when no active entries exist" + ); } #[tokio::test] async fn test_refresh_active_cjit_entries() { // Initialize database with staging server - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); let options = 
Some(CreateCjitOptions { source: Some("integration_test".to_string()), @@ -1559,28 +1884,34 @@ mod tests { // Create two test entries that will start in Created state println!("Creating first test CJIT entry..."); - let entry1 = db.create_cjit_entry( - 100000, // channel_size_sat - 20000, // invoice_sat - "Test CJIT 1", // description - "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", // node_id - 4, // channel_expiry_weeks - options.clone() - ).await.expect("Failed to create first CJIT entry"); + let entry1 = db + .create_cjit_entry( + 100000, // channel_size_sat + 20000, // invoice_sat + "Test CJIT 1", // description + "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", // node_id + 4, // channel_expiry_weeks + options.clone(), + ) + .await + .expect("Failed to create first CJIT entry"); println!("First CJIT entry created with ID: {}", entry1.id); // Add a small delay between entry creations tokio::time::sleep(tokio::time::Duration::from_secs(1)).await; println!("Creating second test CJIT entry..."); - let entry2 = db.create_cjit_entry( - 150000, // channel_size_sat - 30000, // invoice_sat - "Test CJIT 2", // description - "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", // node_id - 4, // channel_expiry_weeks - options.clone() - ).await.expect("Failed to create second CJIT entry"); + let entry2 = db + .create_cjit_entry( + 150000, // channel_size_sat + 30000, // invoice_sat + "Test CJIT 2", // description + "03c8533232c155c41c42e5a8f8487b192dd36f1d354b86ef461cc82e67e3388839", // node_id + 4, // channel_expiry_weeks + options.clone(), + ) + .await + .expect("Failed to create second CJIT entry"); println!("Second CJIT entry created with ID: {}", entry2.id); // Test refreshing active CJIT entries @@ -1590,7 +1921,10 @@ mod tests { match result { Ok(refreshed_entries) => { // Verify we got entries back - assert!(!refreshed_entries.is_empty(), "Should have received refreshed entries"); + assert!( + 
!refreshed_entries.is_empty(), + "Should have received refreshed entries" + ); // All refreshed entries should be in an active state for entry in &refreshed_entries { @@ -1617,35 +1951,41 @@ mod tests { // Verify database state let conn = db.conn.lock().await; for entry in &refreshed_entries { - let row = conn.query_row( - "SELECT id FROM cjit_entries WHERE id = ?1", - [&entry.id], - |row| Ok(row.get::<_, String>(0)?) - ).unwrap(); + let row = conn + .query_row( + "SELECT id FROM cjit_entries WHERE id = ?1", + [&entry.id], + |row| Ok(row.get::<_, String>(0)?), + ) + .unwrap(); assert_eq!(row, entry.id); } - }, + } Err(e) => panic!("Failed to refresh active CJIT entries: {:?}", e), } // Test with no active entries by manually marking all entries as completed in the database { let conn = db.conn.lock().await; - conn.execute( - "UPDATE cjit_entries SET state = 'Completed'", - [], - ).unwrap(); + conn.execute("UPDATE cjit_entries SET state = 'Completed'", []) + .unwrap(); } // Now test refresh_active_cjit_entries with no active entries let empty_result = db.refresh_active_cjit_entries().await.unwrap(); - assert_eq!(empty_result.len(), 0, "Should return empty vec when no active entries exist"); + assert_eq!( + empty_result.len(), + 0, + "Should return empty vec when no active entries exist" + ); } #[tokio::test] async fn test_regtest_mine() { - let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)).await.unwrap(); + let db = BlocktankDB::new(":memory:", Some(STAGING_SERVER)) + .await + .unwrap(); // Test mining 1 block let result = db.regtest_mine(Some(1)).await; assert!(result.is_ok(), "Failed to mine 1 block: {:?}", result.err()); @@ -1653,8 +1993,8 @@ mod tests { #[tokio::test] async fn test_regtest_deposit() { - let client = BlocktankClient::new(Some(STAGING_SERVER)) - .expect("Failed to create BlocktankClient"); + let client = + BlocktankClient::new(Some(STAGING_SERVER)).expect("Failed to create BlocktankClient"); let test_address = 
"bcrt1qcr8te4kr609gcawutmrza0j4xv80jy8z306fyu"; @@ -1665,14 +2005,18 @@ mod tests { Ok(txid) => { println!("Successfully deposited to address, txid: {}", txid); assert!(!txid.is_empty(), "Transaction ID should not be empty"); - }, + } Err(err) => { - if err.to_string().contains("not in regtest mode") || - err.to_string().contains("Bad Request") { + if err.to_string().contains("not in regtest mode") + || err.to_string().contains("Bad Request") + { println!("Skipping test_regtest_deposit: Not in regtest mode or not supported in this environment"); return; } else { - panic!("API call to regtest_deposit failed with unexpected error: {:?}", err); + panic!( + "API call to regtest_deposit failed with unexpected error: {:?}", + err + ); } } } @@ -1683,7 +2027,8 @@ mod tests { let temp_dir = tempfile::tempdir().expect("Failed to create temp directory"); let db_path = format!("{}/test_blocktank.db", temp_dir.path().display()); - let db = BlocktankDB::new(&db_path, None).await + let db = BlocktankDB::new(&db_path, None) + .await .expect("Failed to create BlocktankDB"); // Create and insert test orders @@ -1691,25 +2036,44 @@ mod tests { let order2 = create_test_order("order_2"); let order3 = create_test_order("order_3"); - db.upsert_order(&order1).await.expect("Failed to insert order1"); - db.upsert_order(&order2).await.expect("Failed to insert order2"); - db.upsert_order(&order3).await.expect("Failed to insert order3"); + db.upsert_order(&order1) + .await + .expect("Failed to insert order1"); + db.upsert_order(&order2) + .await + .expect("Failed to insert order2"); + db.upsert_order(&order3) + .await + .expect("Failed to insert order3"); // Verify orders exist - let orders = db.get_orders(None, None).await.expect("Failed to get orders"); + let orders = db + .get_orders(None, None) + .await + .expect("Failed to get orders"); assert_eq!(orders.len(), 3); // Remove all orders - db.remove_all_orders().await.expect("Failed to remove all orders"); + db.remove_all_orders() + .await 
+ .expect("Failed to remove all orders"); // Verify all orders are deleted - let orders_after = db.get_orders(None, None).await.expect("Failed to get orders"); + let orders_after = db + .get_orders(None, None) + .await + .expect("Failed to get orders"); assert_eq!(orders_after.len(), 0); // Verify we can still insert new orders after wipe let new_order = create_test_order("new_order"); - db.upsert_order(&new_order).await.expect("Failed to insert new order"); - let orders_new = db.get_orders(None, None).await.expect("Failed to get orders"); + db.upsert_order(&new_order) + .await + .expect("Failed to insert new order"); + let orders_new = db + .get_orders(None, None) + .await + .expect("Failed to get orders"); assert_eq!(orders_new.len(), 1); } @@ -1718,7 +2082,8 @@ mod tests { let temp_dir = tempfile::tempdir().expect("Failed to create temp directory"); let db_path = format!("{}/test_blocktank.db", temp_dir.path().display()); - let db = BlocktankDB::new(&db_path, None).await + let db = BlocktankDB::new(&db_path, None) + .await .expect("Failed to create BlocktankDB"); // Create and insert test CJIT entries @@ -1726,25 +2091,44 @@ mod tests { let entry2 = create_test_cjit_entry("cjit_2"); let entry3 = create_test_cjit_entry("cjit_3"); - db.upsert_cjit_entry(&entry1).await.expect("Failed to insert entry1"); - db.upsert_cjit_entry(&entry2).await.expect("Failed to insert entry2"); - db.upsert_cjit_entry(&entry3).await.expect("Failed to insert entry3"); + db.upsert_cjit_entry(&entry1) + .await + .expect("Failed to insert entry1"); + db.upsert_cjit_entry(&entry2) + .await + .expect("Failed to insert entry2"); + db.upsert_cjit_entry(&entry3) + .await + .expect("Failed to insert entry3"); // Verify entries exist - let entries = db.get_cjit_entries(None, None).await.expect("Failed to get entries"); + let entries = db + .get_cjit_entries(None, None) + .await + .expect("Failed to get entries"); assert_eq!(entries.len(), 3); // Remove all CJIT entries - 
db.remove_all_cjit_entries().await.expect("Failed to remove all CJIT entries"); + db.remove_all_cjit_entries() + .await + .expect("Failed to remove all CJIT entries"); // Verify all entries are deleted - let entries_after = db.get_cjit_entries(None, None).await.expect("Failed to get entries"); + let entries_after = db + .get_cjit_entries(None, None) + .await + .expect("Failed to get entries"); assert_eq!(entries_after.len(), 0); // Verify we can still insert new entries after wipe let new_entry = create_test_cjit_entry("new_cjit"); - db.upsert_cjit_entry(&new_entry).await.expect("Failed to insert new entry"); - let entries_new = db.get_cjit_entries(None, None).await.expect("Failed to get entries"); + db.upsert_cjit_entry(&new_entry) + .await + .expect("Failed to insert new entry"); + let entries_new = db + .get_cjit_entries(None, None) + .await + .expect("Failed to get entries"); assert_eq!(entries_new.len(), 1); } @@ -1753,27 +2137,42 @@ mod tests { let temp_dir = tempfile::tempdir().expect("Failed to create temp directory"); let db_path = format!("{}/test_blocktank.db", temp_dir.path().display()); - let db = BlocktankDB::new(&db_path, None).await + let db = BlocktankDB::new(&db_path, None) + .await .expect("Failed to create BlocktankDB"); // Insert various types of data let order1 = create_test_order("order_1"); let order2 = create_test_order("order_2"); - db.upsert_order(&order1).await.expect("Failed to insert order"); - db.upsert_order(&order2).await.expect("Failed to insert order"); + db.upsert_order(&order1) + .await + .expect("Failed to insert order"); + db.upsert_order(&order2) + .await + .expect("Failed to insert order"); let entry1 = create_test_cjit_entry("cjit_1"); let entry2 = create_test_cjit_entry("cjit_2"); - db.upsert_cjit_entry(&entry1).await.expect("Failed to insert entry"); - db.upsert_cjit_entry(&entry2).await.expect("Failed to insert entry"); + db.upsert_cjit_entry(&entry1) + .await + .expect("Failed to insert entry"); + 
db.upsert_cjit_entry(&entry2) + .await + .expect("Failed to insert entry"); let info = create_test_info(); db.upsert_info(&info).await.expect("Failed to insert info"); // Verify all data exists - let orders = db.get_orders(None, None).await.expect("Failed to get orders"); + let orders = db + .get_orders(None, None) + .await + .expect("Failed to get orders"); assert_eq!(orders.len(), 2); - let entries = db.get_cjit_entries(None, None).await.expect("Failed to get entries"); + let entries = db + .get_cjit_entries(None, None) + .await + .expect("Failed to get entries"); assert_eq!(entries.len(), 2); let info_check = db.get_info().await.expect("Failed to get info"); assert!(info_check.is_some()); @@ -1782,17 +2181,28 @@ mod tests { db.wipe_all().await.expect("Failed to wipe all"); // Verify everything is deleted - let orders_after = db.get_orders(None, None).await.expect("Failed to get orders"); + let orders_after = db + .get_orders(None, None) + .await + .expect("Failed to get orders"); assert_eq!(orders_after.len(), 0); - let entries_after = db.get_cjit_entries(None, None).await.expect("Failed to get entries"); + let entries_after = db + .get_cjit_entries(None, None) + .await + .expect("Failed to get entries"); assert_eq!(entries_after.len(), 0); let info_after = db.get_info().await.expect("Failed to get info"); assert!(info_after.is_none()); // Verify we can still insert new data after wipe let new_order = create_test_order("new_order"); - db.upsert_order(&new_order).await.expect("Failed to insert new order"); - let orders_new = db.get_orders(None, None).await.expect("Failed to get orders"); + db.upsert_order(&new_order) + .await + .expect("Failed to insert new order"); + let orders_new = db + .get_orders(None, None) + .await + .expect("Failed to get orders"); assert_eq!(orders_new.len(), 1); } @@ -2209,4 +2619,4 @@ mod tests { // default_target = 450€ = 450_000 sats assert_eq!(get_default_lsp_balance(params), 450_000); } -} \ No newline at end of file +} diff --git 
a/src/modules/blocktank/types.rs b/src/modules/blocktank/types.rs index b64e342..9b7a8e9 100644 --- a/src/modules/blocktank/types.rs +++ b/src/modules/blocktank/types.rs @@ -2,42 +2,27 @@ use rust_blocktank_client::{ BitcoinNetworkEnum as ExternalBitcoinNetworkEnum, BtBolt11InvoiceState as ExternalBtBolt11InvoiceState, BtChannelOrderErrorType as ExternalBtChannelOrderErrorType, - BtOpenChannelState as ExternalBtOpenChannelState, - BtOrderState as ExternalBtOrderState, - BtOrderState2 as ExternalBtOrderState2, - BtPaymentState as ExternalBtPaymentState, - BtPaymentState2 as ExternalBtPaymentState2, - CJitStateEnum as ExternalCJitStateEnum, - ManualRefundStateEnum as ExternalManualRefundStateEnum, - IBtInfoOptions as ExternalIBtInfoOptions, - IBtInfo as ExternalIBtInfo, - IBtInfoVersions as ExternalIBtInfoVersions, - IBtInfoOnchain as ExternalIBtInfoOnchain, + BtOpenChannelState as ExternalBtOpenChannelState, BtOrderState as ExternalBtOrderState, + BtOrderState2 as ExternalBtOrderState2, BtPaymentState as ExternalBtPaymentState, + BtPaymentState2 as ExternalBtPaymentState2, CJitStateEnum as ExternalCJitStateEnum, + CreateCjitOptions as ExternalCreateCjitOptions, + CreateOrderOptions as ExternalCreateOrderOptions, FeeRates as ExternalFeeRates, + FundingTx as ExternalFundingTx, IBt0ConfMinTxFeeWindow as ExternalIBt0ConfMinTxFeeWindow, + IBtBolt11Invoice as ExternalIBtBolt11Invoice, IBtChannel as ExternalIBtChannel, + IBtChannelClose as ExternalIBtChannelClose, IBtEstimateFeeResponse as ExternalIBtEstimateFeeResponse, - IBtEstimateFeeResponse2 as ExternalIBtEstimateFeeResponse2, - IBt0ConfMinTxFeeWindow as ExternalIBt0ConfMinTxFeeWindow, + IBtEstimateFeeResponse2 as ExternalIBtEstimateFeeResponse2, IBtInfo as ExternalIBtInfo, + IBtInfoOnchain as ExternalIBtInfoOnchain, IBtInfoOptions as ExternalIBtInfoOptions, + IBtInfoVersions as ExternalIBtInfoVersions, IBtOnchainTransaction as ExternalIBtOnchainTransaction, - IBtOnchainTransactions as ExternalIBtOnchainTransactions, 
- IBtChannel as ExternalIBtChannel, - IBtChannelClose as ExternalIBtChannelClose, - IBtBolt11Invoice as ExternalIBtBolt11Invoice, - IBtPayment as ExternalIBtPayment, - IBtOrder as ExternalIBtOrder, - ICJitEntry as ExternalICJitEntry, - ILspNode as ExternalILspNode, - IDiscount as ExternalIDiscount, - FeeRates as ExternalFeeRates, - FundingTx as ExternalFundingTx, - IManualRefund as ExternalIManualRefund, - CreateOrderOptions as ExternalCreateOrderOptions, - CreateCjitOptions as ExternalCreateCjitOptions, - IGift as ExternalIGift, - IGiftCode as ExternalIGiftCode, - IGiftOrder as ExternalIGiftOrder, - IGiftLspNode as ExternalIGiftLspNode, - IGiftPayment as ExternalIGiftPayment, - IGiftBolt11Invoice as ExternalIGiftBolt11Invoice, - IGiftBtcAddress as ExternalIGiftBtcAddress, + IBtOnchainTransactions as ExternalIBtOnchainTransactions, IBtOrder as ExternalIBtOrder, + IBtPayment as ExternalIBtPayment, ICJitEntry as ExternalICJitEntry, + IDiscount as ExternalIDiscount, IGift as ExternalIGift, + IGiftBolt11Invoice as ExternalIGiftBolt11Invoice, IGiftBtcAddress as ExternalIGiftBtcAddress, + IGiftCode as ExternalIGiftCode, IGiftLspNode as ExternalIGiftLspNode, + IGiftOrder as ExternalIGiftOrder, IGiftPayment as ExternalIGiftPayment, + ILspNode as ExternalILspNode, IManualRefund as ExternalIManualRefund, + ManualRefundStateEnum as ExternalManualRefundStateEnum, }; use serde::{Deserialize, Serialize}; @@ -113,15 +98,21 @@ pub enum BtChannelOrderErrorType { impl From<ExternalBtChannelOrderErrorType> for BtChannelOrderErrorType { fn from(other: ExternalBtChannelOrderErrorType) -> Self { match other { - ExternalBtChannelOrderErrorType::WrongOrderState => BtChannelOrderErrorType::WrongOrderState, - ExternalBtChannelOrderErrorType::PeerNotReachable => BtChannelOrderErrorType::PeerNotReachable, + ExternalBtChannelOrderErrorType::WrongOrderState => { + BtChannelOrderErrorType::WrongOrderState + } + ExternalBtChannelOrderErrorType::PeerNotReachable => { + BtChannelOrderErrorType::PeerNotReachable + }
ExternalBtChannelOrderErrorType::ChannelRejectedByDestination => { BtChannelOrderErrorType::ChannelRejectedByDestination } ExternalBtChannelOrderErrorType::ChannelRejectedByLsp => { BtChannelOrderErrorType::ChannelRejectedByLsp } - ExternalBtChannelOrderErrorType::BlocktankNotReady => BtChannelOrderErrorType::BlocktankNotReady, + ExternalBtChannelOrderErrorType::BlocktankNotReady => { + BtChannelOrderErrorType::BlocktankNotReady + } } } } @@ -129,18 +120,25 @@ impl From<ExternalBtChannelOrderErrorType> for BtChannelOrderErrorType { impl From<BtChannelOrderErrorType> for ExternalBtChannelOrderErrorType { fn from(other: BtChannelOrderErrorType) -> Self { match other { - BtChannelOrderErrorType::WrongOrderState => ExternalBtChannelOrderErrorType::WrongOrderState, - BtChannelOrderErrorType::PeerNotReachable => ExternalBtChannelOrderErrorType::PeerNotReachable, + BtChannelOrderErrorType::WrongOrderState => { + ExternalBtChannelOrderErrorType::WrongOrderState + } + BtChannelOrderErrorType::PeerNotReachable => { + ExternalBtChannelOrderErrorType::PeerNotReachable + } BtChannelOrderErrorType::ChannelRejectedByDestination => { ExternalBtChannelOrderErrorType::ChannelRejectedByDestination } - BtChannelOrderErrorType::ChannelRejectedByLsp => ExternalBtChannelOrderErrorType::ChannelRejectedByLsp, - BtChannelOrderErrorType::BlocktankNotReady => ExternalBtChannelOrderErrorType::BlocktankNotReady, + BtChannelOrderErrorType::ChannelRejectedByLsp => { + ExternalBtChannelOrderErrorType::ChannelRejectedByLsp + } + BtChannelOrderErrorType::BlocktankNotReady => { + ExternalBtChannelOrderErrorType::BlocktankNotReady + } } } } - #[derive(uniffi::Enum, Deserialize, Serialize)] pub enum BtOpenChannelState { Opening, @@ -168,7 +166,6 @@ impl From<BtOpenChannelState> for ExternalBtOpenChannelState { } } - #[derive(uniffi::Enum, Deserialize, Serialize)] pub enum BtOrderState { Created, @@ -199,7 +196,6 @@ impl From<BtOrderState> for ExternalBtOrderState { } } - #[derive(uniffi::Enum, Deserialize, Serialize)] pub enum BtOrderState2 { Created, @@ -263,7 +259,6 @@ impl From<BtPaymentState> for
ExternalBtPaymentState { } } - #[derive(uniffi::Enum, Deserialize, Serialize)] pub enum BtPaymentState2 { Created, @@ -297,7 +292,6 @@ impl From<BtPaymentState2> for ExternalBtPaymentState2 { } } - #[derive(uniffi::Enum, Deserialize, Serialize)] pub enum CJitStateEnum { Created, @@ -358,7 +352,6 @@ impl From<ManualRefundStateEnum> for ExternalManualRefundStateEnum { } } - #[derive(uniffi::Record, Deserialize, Serialize)] pub struct ILspNode { pub alias: String, @@ -431,7 +424,6 @@ impl From<IBtInfoOptions> for ExternalIBtInfoOptions { } } - #[derive(uniffi::Record, Deserialize, Serialize)] pub struct IDiscount { pub code: String, @@ -462,7 +454,6 @@ impl From<IDiscount> for ExternalIDiscount { } } - #[derive(uniffi::Record, Deserialize, Serialize)] pub struct IBtBolt11Invoice { pub request: String, @@ -720,9 +711,9 @@ impl From<ExternalIBtPayment> for IBtPayment { bolt11_invoice: other.bolt11_invoice.map(|b| b.into()), onchain: other.onchain.map(|o| o.into()), is_manually_paid: other.is_manually_paid, - manual_refunds: other.manual_refunds.map(|refunds| { - refunds.into_iter().map(|refund| refund.into()).collect() - }), + manual_refunds: other + .manual_refunds + .map(|refunds| refunds.into_iter().map(|refund| refund.into()).collect()), } } } @@ -736,9 +727,9 @@ impl From<IBtPayment> for ExternalIBtPayment { bolt11_invoice: other.bolt11_invoice.map(|b| b.into()), onchain: other.onchain.map(|o| o.into()), is_manually_paid: other.is_manually_paid, - manual_refunds: other.manual_refunds.map(|refunds| { - refunds.into_iter().map(|refund| refund.into()).collect() - }), + manual_refunds: other + .manual_refunds + .map(|refunds| refunds.into_iter().map(|refund| refund.into()).collect()), } } } @@ -1189,7 +1180,11 @@ impl From<ExternalIGiftBtcAddress> for IGiftBtcAddress { id: other.id, address: other.address, transactions: other.transactions.iter().map(|_| "".to_string()).collect(), // Simplified - all_transactions: other.all_transactions.iter().map(|_| "".to_string()).collect(), // Simplified + all_transactions: other + .all_transactions + .iter() + .map(|_| "".to_string()) + .collect(), //
Simplified is_blacklisted: other.is_blacklisted, watch_until: other.watch_until, watch_for_block_confirmations: other.watch_for_block_confirmations, @@ -1204,7 +1199,7 @@ impl From<IGiftBtcAddress> for ExternalIGiftBtcAddress { Self { id: other.id, address: other.address, - transactions: vec![], // Simplified + transactions: vec![], // Simplified all_transactions: vec![], // Simplified is_blacklisted: other.is_blacklisted, watch_until: other.watch_until, @@ -1339,7 +1334,11 @@ impl From<ExternalIGiftPayment> for IGiftPayment { btc_address_id: other.btc_address_id, bolt11_invoice: other.bolt11_invoice.map(|b| b.into()), bolt11_invoice_id: other.bolt11_invoice_id, - manual_refunds: other.manual_refunds.iter().map(|_| "".to_string()).collect(), // Simplified + manual_refunds: other + .manual_refunds + .iter() + .map(|_| "".to_string()) + .collect(), // Simplified } } } @@ -1543,4 +1542,4 @@ impl From<IGift> for ExternalIGift { updated_at: other.updated_at, } } -} \ No newline at end of file +} diff --git a/src/modules/lnurl/errors.rs b/src/modules/lnurl/errors.rs index 88af54b..eae58e2 100644 --- a/src/modules/lnurl/errors.rs +++ b/src/modules/lnurl/errors.rs @@ -18,9 +18,7 @@ pub enum LnurlError { max: u64, }, #[error("Failed to generate invoice: {error_details}")] - InvoiceCreationFailed { - error_details: String, - }, + InvoiceCreationFailed { error_details: String }, #[error("LNURL authentication failed")] AuthenticationFailed, -} \ No newline at end of file +} diff --git a/src/modules/lnurl/implementation.rs b/src/modules/lnurl/implementation.rs index ca123c6..b08c6e0 100644 --- a/src/modules/lnurl/implementation.rs +++ b/src/modules/lnurl/implementation.rs @@ -1,11 +1,11 @@ -use std::str::FromStr; -use lnurl::{LnUrlResponse, AsyncClient, Builder, Response, get_derivation_path}; +use crate::lnurl::{ChannelRequestParams, LnurlAuthParams, LnurlError, WithdrawCallbackParams}; +use bitcoin::bip32::Xpriv; +use bitcoin::secp256k1::{Message, PublicKey, Secp256k1}; use lnurl::lightning_address::LightningAddress; use
lnurl::lnurl::LnUrl; +use lnurl::{get_derivation_path, AsyncClient, Builder, LnUrlResponse, Response}; +use std::str::FromStr; use url::Url; -use bitcoin::secp256k1::{PublicKey, Secp256k1, Message}; -use bitcoin::bip32::Xpriv; -use crate::lnurl::{LnurlError, ChannelRequestParams, WithdrawCallbackParams, LnurlAuthParams}; pub async fn get_lnurl_invoice(address: &str, amount_satoshis: u64) -> Result<String, LnurlError> { let ln_addr = match parse_lightning_address(address) { @@ -24,16 +24,19 @@ pub async fn get_lnurl_invoice(address: &str, amount_satoshis: u64) -> Result<St fn parse_lightning_address(address: &str) -> Result<LightningAddress, LnurlError> { - LightningAddress::from_str(address) - .map_err(|_| LnurlError::InvalidAddress) + LightningAddress::from_str(address).map_err(|_| LnurlError::InvalidAddress) } fn create_async_client() -> Result<AsyncClient, LnurlError> { - Builder::default().build_async() + Builder::default() + .build_async() .map_err(|_| LnurlError::ClientCreationFailed) } -async fn fetch_lnurl_pay_response(client: &AsyncClient, ln_addr: &LightningAddress) -> Result<LnUrlResponse, LnurlError> { +async fn fetch_lnurl_pay_response( + client: &AsyncClient, + ln_addr: &LightningAddress, +) -> Result<LnUrlResponse, LnurlError> { match client.make_request(&ln_addr.lnurlp_url()).await { Ok(response @ LnUrlResponse::LnUrlPayResponse(_)) => Ok(response), Ok(_) => Err(LnurlError::InvalidResponse), @@ -44,7 +47,7 @@ async fn fetch_lnurl_pay_response(client: &AsyncClient, ln_addr: &LightningAddre async fn generate_invoice( client: &AsyncClient, pay_response: &LnUrlResponse, - amount_satoshis: u64 + amount_satoshis: u64, ) -> Result<String, LnurlError> { let pay = match pay_response { LnUrlResponse::LnUrlPayResponse(pay) => pay, @@ -63,7 +66,8 @@ async fn generate_invoice( } // Generate invoice - client.get_invoice(pay, amount_msats, None, None) + client + .get_invoice(pay, amount_msats, None, None) .await .map(|invoice| invoice.pr) .map_err(|e| LnurlError::InvoiceCreationFailed { @@ -72,9 +76,8 @@ } pub fn create_channel_request_url(params: ChannelRequestParams) -> Result<String, LnurlError> { - let mut url = Url::parse(&params.callback)
.map_err(|_| LnurlError::InvalidAddress)?; - + let mut url = Url::parse(&params.callback).map_err(|_| LnurlError::InvalidAddress)?; + // Collect all query parameters, excluding "k1" let existing_params: Vec<(String, String)> = url .query_pairs() @@ -98,13 +101,12 @@ pub fn create_channel_request_url(params: ChannelRequestParams) -> Result<String pub fn create_withdraw_callback_url(params: WithdrawCallbackParams) -> Result<String, LnurlError> { - let mut url = Url::parse(&params.callback) - .map_err(|_| LnurlError::InvalidAddress)?; + let mut url = Url::parse(&params.callback).map_err(|_| LnurlError::InvalidAddress)?; // Collect all query parameters, excluding "k1" and "pr" let existing_params: Vec<(String, String)> = url @@ -134,41 +136,44 @@ pub fn create_withdraw_callback_url(params: WithdrawCallbackParams) -> Result<St pub async fn lnurl_auth(params: LnurlAuthParams) -> Result<String, LnurlError> { let domain_url = Url::parse(&format!("https://{}", params.domain)) .map_err(|_| LnurlError::InvalidAddress)?; - + let derivation_path = get_derivation_path(params.hashing_key, &domain_url) .map_err(|_| LnurlError::AuthenticationFailed)?; - + let secp = Secp256k1::new(); let master_key = Xpriv::new_master(bitcoin::Network::Bitcoin, &params.hashing_key) .map_err(|_| LnurlError::AuthenticationFailed)?; - - let derived_key = master_key.derive_priv(&secp, &derivation_path) + + let derived_key = master_key + .derive_priv(&secp, &derivation_path) .map_err(|_| LnurlError::AuthenticationFailed)?; - + let private_key = derived_key.private_key; let public_key = PublicKey::from_secret_key(&secp, &private_key); - - let k1_bytes = hex::decode(&params.k1) - .map_err(|_| LnurlError::AuthenticationFailed)?; - let message = Message::from_digest_slice(&k1_bytes) - .map_err(|_| LnurlError::AuthenticationFailed)?; - + let k1_bytes = hex::decode(&params.k1).map_err(|_| LnurlError::AuthenticationFailed)?; + let message = + Message::from_digest_slice(&k1_bytes).map_err(|_| LnurlError::AuthenticationFailed)?; + let signature = secp.sign_ecdsa(&message, &private_key); - + let lnurl = if params.callback.starts_with("lnurl1") { - LnUrl::from_str(&params.callback) - .map_err(|_|
LnurlError::InvalidAddress)? + LnUrl::from_str(&params.callback).map_err(|_| LnurlError::InvalidAddress)? } else { - LnUrl { url: params.callback } + LnUrl { + url: params.callback, + } }; let client = create_async_client()?; - - let response = client.lnurl_auth(lnurl, signature, public_key).await + + let response = client + .lnurl_auth(lnurl, signature, public_key) + .await .map_err(|_| LnurlError::RequestFailed)?; - + match response { Response::Ok { .. } => Ok("Authentication successful".to_string()), Response::Error { reason: _ } => Err(LnurlError::AuthenticationFailed), } -} \ No newline at end of file +} diff --git a/src/modules/lnurl/mod.rs b/src/modules/lnurl/mod.rs index c239b19..7050d65 100644 --- a/src/modules/lnurl/mod.rs +++ b/src/modules/lnurl/mod.rs @@ -1,12 +1,16 @@ +mod errors; mod implementation; mod types; -mod errors; mod utils; #[cfg(test)] mod tests; -pub use implementation::{get_lnurl_invoice, create_channel_request_url, create_withdraw_callback_url, lnurl_auth}; -pub use utils::is_lnurl_address; -pub use types::{LightningAddressInvoice, ChannelRequestParams, WithdrawCallbackParams, LnurlAuthParams}; pub use errors::LnurlError; +pub use implementation::{ + create_channel_request_url, create_withdraw_callback_url, get_lnurl_invoice, lnurl_auth, +}; +pub use types::{ + ChannelRequestParams, LightningAddressInvoice, LnurlAuthParams, WithdrawCallbackParams, +}; +pub use utils::is_lnurl_address; diff --git a/src/modules/lnurl/tests.rs b/src/modules/lnurl/tests.rs index fa27043..7a3b4bd 100644 --- a/src/modules/lnurl/tests.rs +++ b/src/modules/lnurl/tests.rs @@ -1,9 +1,11 @@ #[cfg(test)] mod tests { - use crate::lnurl::{ChannelRequestParams, WithdrawCallbackParams, LnurlAuthParams, LnurlError}; - use crate::lnurl::implementation::{create_channel_request_url, create_withdraw_callback_url, lnurl_auth}; + use crate::lnurl::implementation::{ + create_channel_request_url, create_withdraw_callback_url, lnurl_auth, + }; + use
crate::lnurl::{ChannelRequestParams, LnurlAuthParams, LnurlError, WithdrawCallbackParams}; use lnurl::get_derivation_path; - + const TEST_MNEMONIC: &str = "stable inch effort skull suggest circle charge lemon amazing clean giant quantum party grow visa best rule icon gown disagree win drop smile love"; #[test] @@ -11,15 +13,18 @@ mod tests { let params = ChannelRequestParams { k1: "test_k1_value".to_string(), callback: "https://example.com/callback".to_string(), - local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234".to_string(), + local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + .to_string(), is_private: true, cancel: false, }; let result = create_channel_request_url(params).unwrap(); - + assert!(result.contains("k1=test_k1_value")); - assert!(result.contains("remoteid=03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234")); + assert!(result.contains( + "remoteid=03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + )); assert!(result.contains("private=1")); assert!(result.contains("cancel=0")); assert!(result.starts_with("https://example.com/callback?")); @@ -30,16 +35,19 @@ mod tests { let params = ChannelRequestParams { k1: "test_k1_value".to_string(), callback: "https://example.com/callback?existing=param".to_string(), - local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234".to_string(), + local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + .to_string(), is_private: false, cancel: true, }; let result = create_channel_request_url(params).unwrap(); - + assert!(result.contains("existing=param")); assert!(result.contains("k1=test_k1_value")); - assert!(result.contains("remoteid=03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234")); + assert!(result.contains( + "remoteid=03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + )); assert!(result.contains("private=0")); 
assert!(result.contains("cancel=1")); } @@ -50,7 +58,8 @@ mod tests { let params = ChannelRequestParams { k1: "new_k1_value".to_string(), callback: "https://example.com/callback?k1=existing_k1_value&foo=bar".to_string(), - local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234".to_string(), + local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + .to_string(), is_private: false, cancel: true, }; @@ -59,14 +68,26 @@ mod tests { // Check that we have exactly one k1 parameter (the new one) let k1_count = result.matches("k1=").count(); - assert_eq!(k1_count, 1, "URL should have exactly 1 k1 parameter after fix"); + assert_eq!( + k1_count, 1, + "URL should have exactly 1 k1 parameter after fix" + ); // The URL should contain only the new k1 value - assert!(!result.contains("k1=existing_k1_value"), "Old k1 value should be replaced"); - assert!(result.contains("k1=new_k1_value"), "New k1 value should be present"); + assert!( + !result.contains("k1=existing_k1_value"), + "Old k1 value should be replaced" + ); + assert!( + result.contains("k1=new_k1_value"), + "New k1 value should be present" + ); // Other parameters should be preserved - assert!(result.contains("foo=bar"), "Other query parameters should be preserved"); + assert!( + result.contains("foo=bar"), + "Other query parameters should be preserved" + ); } #[test] @@ -78,7 +99,7 @@ mod tests { }; let result = create_withdraw_callback_url(params).unwrap(); - + assert!(result.contains("k1=test_k1_value")); assert!(result.contains("pr=lnbc1230n1pjqqqqqqpp5abcdef...")); assert!(result.starts_with("https://example.com/withdraw?")); @@ -93,7 +114,7 @@ mod tests { }; let result = create_withdraw_callback_url(params).unwrap(); - + assert!(result.contains("existing=param")); assert!(result.contains("k1=test_k1_value")); assert!(result.contains("pr=lnbc1230n1pjqqqqqqpp5abcdef...")); @@ -104,7 +125,8 @@ mod tests { let params = ChannelRequestParams { k1: 
"test_k1_value".to_string(), callback: "invalid_url".to_string(), - local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234".to_string(), + local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + .to_string(), is_private: true, cancel: false, }; @@ -117,53 +139,53 @@ mod tests { #[test] fn test_get_derivation_path() { use url::Url; - + // Test with a simple domain let hashing_key: [u8; 32] = [ - 0x7d, 0x41, 0x7a, 0x6a, 0x5e, 0x9a, 0x6a, 0x4a, - 0x87, 0x9a, 0xea, 0xba, 0x11, 0xa1, 0x18, 0x38, - 0x76, 0x4c, 0x8f, 0xa2, 0xb9, 0x59, 0xc2, 0x42, - 0xd4, 0x3d, 0xea, 0x68, 0x2b, 0x3e, 0x40, 0x9b, + 0x7d, 0x41, 0x7a, 0x6a, 0x5e, 0x9a, 0x6a, 0x4a, 0x87, 0x9a, 0xea, 0xba, 0x11, 0xa1, + 0x18, 0x38, 0x76, 0x4c, 0x8f, 0xa2, 0xb9, 0x59, 0xc2, 0x42, 0xd4, 0x3d, 0xea, 0x68, + 0x2b, 0x3e, 0x40, 0x9b, ]; let url = Url::parse("https://site.com").unwrap(); let path = get_derivation_path(hashing_key, &url).unwrap(); - + // Based on the test vector, the expected path should be: // 138'/1588488367/511787106'/38110259/1988853114' let expected_path = "138'/1588488367/511787106'/38110259/1988853114'"; assert_eq!(path.to_string(), expected_path); - + // Test that same inputs produce same path let path2 = get_derivation_path(hashing_key, &url).unwrap(); assert_eq!(path.to_string(), path2.to_string()); } - + #[test] fn test_create_channel_request_url_matches_reference() { let params = ChannelRequestParams { k1: "test_k1_value".to_string(), callback: "https://example.com/callback".to_string(), - local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234".to_string(), + local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + .to_string(), is_private: true, cancel: false, }; let result = create_channel_request_url(params).unwrap(); - + let expected_parts = [ "https://example.com/callback?", "k1=test_k1_value", - 
"remoteid=03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234", + "remoteid=03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234", "private=1", - "cancel=0" + "cancel=0", ]; - + for part in expected_parts { assert!(result.contains(part), "Result should contain: {}", part); } } - + #[test] fn test_create_withdraw_callback_url_matches_reference() { let params = WithdrawCallbackParams { @@ -173,13 +195,13 @@ mod tests { }; let result = create_withdraw_callback_url(params).unwrap(); - + let expected_parts = [ "https://example.com/withdraw?", "k1=test_k1_value", - "pr=lnbc1230n1pjqqqqqqpp5abcdef..." + "pr=lnbc1230n1pjqqqqqqpp5abcdef...", ]; - + for part in expected_parts { assert!(result.contains(part), "Result should contain: {}", part); } @@ -289,19 +311,20 @@ mod tests { assert_eq!(params.callback, "https://example.com/auth"); assert_eq!(params.hashing_key, hashing_key); } - + #[test] fn test_url_parameter_encoding() { let params = ChannelRequestParams { k1: "special+chars&test=value".to_string(), callback: "https://example.com/callback".to_string(), - local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234".to_string(), + local_node_id: "03abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234" + .to_string(), is_private: false, cancel: true, }; let result = create_channel_request_url(params).unwrap(); - + assert!(result.contains("cancel=1")); assert!(result.contains("private=0")); assert!(result.contains("k1=")); @@ -318,17 +341,32 @@ mod tests { }; let result = create_withdraw_callback_url(params).unwrap(); - + // Check that we have exactly one k1 parameter (the new one) let k1_count = result.matches("k1=").count(); - assert_eq!(k1_count, 1, "URL should have exactly 1 k1 parameter after fix"); - + assert_eq!( + k1_count, 1, + "URL should have exactly 1 k1 parameter after fix" + ); + // The URL should contain only the new k1 value - assert!(!result.contains("k1=existing_k1_value"), "Old k1 value 
should be replaced"); - assert!(result.contains("k1=new_k1_value"), "New k1 value should be present"); - + assert!( + !result.contains("k1=existing_k1_value"), + "Old k1 value should be replaced" + ); + assert!( + result.contains("k1=new_k1_value"), + "New k1 value should be present" + ); + // Other parameters should be preserved - assert!(result.contains("foo=bar"), "Other query parameters should be preserved"); - assert!(result.contains("pr=lnbc1230n1pjqqqqqqpp5abcdef..."), "Payment request should be added"); + assert!( + result.contains("foo=bar"), + "Other query parameters should be preserved" + ); + assert!( + result.contains("pr=lnbc1230n1pjqqqqqqpp5abcdef..."), + "Payment request should be added" + ); } -} \ No newline at end of file +} diff --git a/src/modules/lnurl/types.rs b/src/modules/lnurl/types.rs index d2af6d0..3877150 100644 --- a/src/modules/lnurl/types.rs +++ b/src/modules/lnurl/types.rs @@ -23,4 +23,4 @@ pub struct LnurlAuthParams { pub k1: String, pub callback: String, pub hashing_key: [u8; 32], -} \ No newline at end of file +} diff --git a/src/modules/lnurl/utils.rs b/src/modules/lnurl/utils.rs index 6dc7049..da01db8 100644 --- a/src/modules/lnurl/utils.rs +++ b/src/modules/lnurl/utils.rs @@ -1,9 +1,8 @@ use lazy_regex::Lazy; use regex::Regex; -static LNURL_ADDRESS_REGEX: Lazy<Regex> = Lazy::new(|| { - Regex::new(r"^[a-z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$").unwrap() -}); +static LNURL_ADDRESS_REGEX: Lazy<Regex> = + Lazy::new(|| Regex::new(r"^[a-z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$").unwrap()); pub fn is_lnurl_address(address: &str) -> bool { LNURL_ADDRESS_REGEX.is_match(address) diff --git a/src/modules/mod.rs b/src/modules/mod.rs index 7af6026..3bc1337 100644 --- a/src/modules/mod.rs +++ b/src/modules/mod.rs @@ -1,7 +1,7 @@ -pub mod scanner; -pub mod lnurl; -pub mod onchain; pub mod activity; pub mod blocktank; -pub mod trezor; +pub mod lnurl; +pub mod onchain; pub mod pubky; +pub mod scanner; +pub mod trezor; diff --git
a/src/modules/onchain/compose.rs b/src/modules/onchain/compose.rs index 0fa94d8..e98c08c 100644 --- a/src/modules/onchain/compose.rs +++ b/src/modules/onchain/compose.rs @@ -47,11 +47,8 @@ async fn compose_inner(params: ComposeParams) -> Result<Vec<ComposeResult>, Acco params.wallet.fingerprint.as_deref(), )?; - validate_outputs(&params.outputs, setup.network).map_err(|e| { - AccountInfoError::WalletError { - error_details: e, - } - })?; + validate_outputs(&params.outputs, setup.network) + .map_err(|e| AccountInfoError::WalletError { error_details: e })?; for rate in &params.fee_rates { if !rate.is_finite() || *rate <= 0.0 { @@ -72,11 +69,16 @@ async fn compose_inner(params: ComposeParams) -> Result<Vec<ComposeResult>, Acco let mut results = Vec::with_capacity(fee_rates.len()); for rate in &fee_rates { - let result = - match build_psbt(&mut wallet, &outputs, *rate, setup.network, coin_selection.as_ref()) { - Ok(r) => r, - Err(msg) => ComposeResult::Error { error: msg }, - }; + let result = match build_psbt( + &mut wallet, + &outputs, + *rate, + setup.network, + coin_selection.as_ref(), + ) { + Ok(r) => r, + Err(msg) => ComposeResult::Error { error: msg }, + }; results.push(result); } Ok(results) @@ -112,7 +114,12 @@ fn build_psbt( /// Build a PSBT from pre-validated outputs and fee rate.
fn finish_psbt<Cs: CoinSelectionAlgorithm<MemoryDatabase>>( - mut builder: bdk::wallet::tx_builder::TxBuilder<'_, MemoryDatabase, Cs, bdk::wallet::tx_builder::CreateTx>, + mut builder: bdk::wallet::tx_builder::TxBuilder< + '_, + MemoryDatabase, + Cs, + bdk::wallet::tx_builder::CreateTx, + >, outputs: &[ComposeOutput], fee_rate: f32, network: BdkNetwork, @@ -135,8 +142,8 @@ fn finish_psbt<Cs: CoinSelectionAlgorithm<MemoryDatabase>>( builder.drain_wallet(); } ComposeOutput::OpReturn { data_hex } => { - let data = hex::decode(data_hex) - .map_err(|e| format!("Invalid OP_RETURN hex: {}", e))?; + let data = + hex::decode(data_hex).map_err(|e| format!("Invalid OP_RETURN hex: {}", e))?; let push_data = bdk::bitcoin::script::PushBytesBuf::try_from(data) .map_err(|e| format!("OP_RETURN data too large: {}", e))?; let script = bdk::bitcoin::blockdata::script::Builder::new() @@ -176,13 +183,19 @@ fn parse_address(address: &str, network: BdkNetwork) -> Result<BdkAddress, Strin -pub(crate) fn validate_outputs(outputs: &[ComposeOutput], network: BdkNetwork) -> Result<(), String> { +pub(crate) fn validate_outputs( + outputs: &[ComposeOutput], + network: BdkNetwork, +) -> Result<(), String> { if outputs.is_empty() { return Err("At least one output is required".into()); } - let has_recipient = outputs - .iter() - .any(|o| matches!(o, ComposeOutput::Payment { .. } | ComposeOutput::SendMax { .. })); + let has_recipient = outputs.iter().any(|o| { + matches!( + o, + ComposeOutput::Payment { .. } | ComposeOutput::SendMax { ..
} + ) + }); if !has_recipient { return Err("At least one Payment or SendMax output is required".into()); } @@ -190,7 +203,10 @@ pub(crate) fn validate_outputs(outputs: &[ComposeOutput], network: BdkNetwork) - let mut has_drain = false; for output in outputs { match output { - ComposeOutput::Payment { address, amount_sats } => { + ComposeOutput::Payment { + address, + amount_sats, + } => { if *amount_sats == 0 { return Err("Payment amount must be greater than zero".into()); } @@ -207,8 +223,8 @@ pub(crate) fn validate_outputs(outputs: &[ComposeOutput], network: BdkNetwork) - if data_hex.is_empty() { return Err("OP_RETURN data must not be empty".into()); } - let data = hex::decode(data_hex) - .map_err(|e| format!("Invalid OP_RETURN hex: {}", e))?; + let data = + hex::decode(data_hex).map_err(|e| format!("Invalid OP_RETURN hex: {}", e))?; if data.len() > 80 { return Err(format!( "OP_RETURN data exceeds 80-byte standard relay limit ({} bytes)", diff --git a/src/modules/onchain/implementation.rs b/src/modules/onchain/implementation.rs index b6dd9a0..366b7f3 100644 --- a/src/modules/onchain/implementation.rs +++ b/src/modules/onchain/implementation.rs @@ -7,8 +7,8 @@ use bdk::bitcoin::bip32::ExtendedPubKey; use bdk::bitcoin::consensus::deserialize; use bdk::bitcoin::psbt::PartiallySignedTransaction as Psbt; use bdk::bitcoin::{ - Address as BdkAddress, Network as BdkNetwork, OutPoint, ScriptBuf, Sequence, Transaction, Txid, - TxIn, TxOut, Witness, + Address as BdkAddress, Network as BdkNetwork, OutPoint, ScriptBuf, Sequence, Transaction, TxIn, + TxOut, Txid, Witness, }; use bdk::blockchain::ElectrumBlockchain; use bdk::database::MemoryDatabase; @@ -57,7 +57,7 @@ impl BitcoinAddressValidator { Err(e) => return Err(e), }; match verify_network(unchecked_addr, expected_network.into()) { - Ok(_) => {}, + Ok(_) => {} Err(e) => return Err(e), } let address_type = get_address_type(address)?; @@ -71,16 +71,14 @@ impl BitcoinAddressValidator { }) } - pub fn genenerate_mnemonic( - 
word_count: Option<WordCount>, - ) -> Result<String, AddressError> { + pub fn genenerate_mnemonic(word_count: Option<WordCount>) -> Result<String, AddressError> { let external_word_count = word_count.map(|wc| wc.into()); let mnemonic = bitcoin_address_generator::generate_mnemonic(external_word_count, None); match mnemonic { Ok(mnemonic) => { println!("✓ Generated mnemonic: {}", mnemonic); Ok(mnemonic) - }, + } Err(e) => { println!("✗ Failed to generate mnemonic: {:?}", e); Err(AddressError::MnemonicGenerationFailed) @@ -135,10 +133,10 @@ impl BitcoinAddressValidator { network.into(), bip39_passphrase, ) - .map_err(|e| { - println!("✗ Failed to derive address: {:?}", e); - AddressError::AddressDerivationFailed - })?; + .map_err(|e| { + println!("✗ Failed to derive address: {:?}", e); + AddressError::AddressDerivationFailed + })?; Ok(address.into()) } @@ -161,10 +159,10 @@ impl BitcoinAddressValidator { start_index, count, ) - .map_err(|e| { - println!("✗ Failed to derive addresses: {:?}", e); - AddressError::AddressDerivationFailed - })?; + .map_err(|e| { + println!("✗ Failed to derive addresses: {:?}", e); + AddressError::AddressDerivationFailed + })?; Ok(addresses.into()) } @@ -181,10 +179,10 @@ impl BitcoinAddressValidator { network.into(), bip39_passphrase, ) - .map_err(|e| { - println!("✗ Failed to derive private key: {:?}", e); - AddressError::AddressDerivationFailed - })?; + .map_err(|e| { + println!("✗ Failed to derive private key: {:?}", e); + AddressError::AddressDerivationFailed + })?; Ok(private_key) } @@ -500,7 +498,9 @@ impl BitcoinAddressValidator { ))); } - let total_input: u64 = psbt.inputs.iter() + let total_input: u64 = psbt + .inputs + .iter() .filter_map(|i| i.witness_utxo.as_ref()) .map(|u| u.value) .sum(); @@ -568,9 +568,10 @@ pub async fn broadcast_raw_tx( })?; // Validate that the bytes are a valid transaction - let _tx: Transaction = deserialize(&tx_bytes).map_err(|e| BroadcastError::InvalidTransaction { - error_details: format!("Invalid transaction data: {}", e), - })?; + let _tx: Transaction =
deserialize(&tx_bytes).map_err(|e| BroadcastError::InvalidTransaction { + error_details: format!("Invalid transaction data: {}", e), + })?; let electrum_url_owned = electrum_url.to_string(); @@ -602,9 +603,11 @@ pub async fn broadcast_raw_tx( /// Detect the account type from an extended public key prefix. /// `xpub`/`tpub` default to `Legacy`; use `account_type_override` for other script types. pub fn detect_account_type(extended_key: &str) -> Result<AccountType, AccountInfoError> { - let prefix = extended_key.get(..4).ok_or(AccountInfoError::InvalidExtendedKey { - error_details: "Key too short".to_string(), - })?; + let prefix = extended_key + .get(..4) + .ok_or(AccountInfoError::InvalidExtendedKey { + error_details: "Key too short".to_string(), + })?; match prefix { "xpub" | "tpub" => Ok(AccountType::Legacy), "ypub" | "upub" => Ok(AccountType::WrappedSegwit), @@ -617,9 +620,9 @@ pub fn detect_account_type(extended_key: &str) -> Result<AccountType, AccountInf pub fn detect_network_from_key(extended_key: &str) -> Result<BdkNetwork, AccountInfoError> { - let prefix = extended_key.get(..4).ok_or(AccountInfoError::InvalidExtendedKey { - error_details: "Key too short".to_string(), - })?; + let prefix = extended_key + .get(..4) + .ok_or(AccountInfoError::InvalidExtendedKey { + error_details: "Key too short".to_string(), + })?; match prefix { "xpub" | "ypub" | "zpub" => Ok(BdkNetwork::Bitcoin), "tpub" | "upub" | "vpub" => Ok(BdkNetwork::Testnet), @@ -632,11 +637,13 @@ pub fn detect_network_from_key(extended_key: &str) -> Result<BdkNetwork, Account Result<String, AccountInfoError> { - let prefix = extended_key.get(..4).ok_or(AccountInfoError::InvalidExtendedKey { - error_details: "Key too short".to_string(), - })?; + let prefix = extended_key + .get(..4) + .ok_or(AccountInfoError::InvalidExtendedKey { + error_details: "Key too short".to_string(), + })?; let target_version: Option<[u8; 4]> = match prefix { - "xpub" | "tpub" => None, // Already standard format + "xpub" | "tpub" => None, // Already standard format "ypub" | "zpub" => Some([0x04, 0x88, 0xB2, 0x1E]), // Convert to xpub "upub" | "vpub" => Some([0x04, 0x35, 0x87, 0xCF]), // Convert to tpub _
=> { @@ -703,7 +710,11 @@ pub fn build_descriptors( } /// Determine the BIP derivation base path. -pub fn derive_base_path(account_type: AccountType, network: BdkNetwork, account_index: u32) -> String { +pub fn derive_base_path( + account_type: AccountType, + network: BdkNetwork, + account_index: u32, +) -> String { let purpose = match account_type { AccountType::Legacy => 44, AccountType::WrappedSegwit => 49, @@ -790,8 +801,7 @@ pub(crate) fn resolve_wallet_setup( }; let derivation = base_path.strip_prefix("m/").unwrap_or(&base_path); - let key_origin: Option<(&str, &str)> = - normalized_fp.as_deref().map(|fp| (fp, derivation)); + let key_origin: Option<(&str, &str)> = normalized_fp.as_deref().map(|fp| (fp, derivation)); let (external_desc, internal_desc) = build_descriptors(&normalized_key, account_type, key_origin); @@ -838,15 +848,13 @@ pub(crate) fn connect_and_get_tip( electrum_url: &str, ) -> Result<(bdk::electrum_client::Client, u32), AccountInfoError> { let client = connect_electrum(electrum_url)?; - let header = client.block_headers_subscribe().map_err(|e| { - AccountInfoError::ElectrumError { + let header = client + .block_headers_subscribe() + .map_err(|e| AccountInfoError::ElectrumError { error_details: format!("Failed to get block height: {}", e), - } - })?; - let tip_height = u32::try_from(header.height).map_err(|_| { - AccountInfoError::ElectrumError { - error_details: format!("Invalid block height: {}", header.height), - } + })?; + let tip_height = u32::try_from(header.height).map_err(|_| AccountInfoError::ElectrumError { + error_details: format!("Invalid block height: {}", header.height), })?; Ok((client, tip_height)) } @@ -984,9 +992,11 @@ pub async fn get_account_info( } // Extract UTXOs - let utxos = wallet.list_unspent().map_err(|e| AccountInfoError::WalletError { - error_details: format!("Failed to list UTXOs: {}", e), - })?; + let utxos = wallet + .list_unspent() + .map_err(|e| AccountInfoError::WalletError { + error_details: format!("Failed 
to list UTXOs: {}", e), + })?; // Get transaction details for confirmation info and coinbase detection let transactions = @@ -1006,7 +1016,8 @@ pub async fn get_account_info( None => { log::warn!( "No derivation path found for UTXO {}:{}", - utxo.outpoint.txid, utxo.outpoint.vout, + utxo.outpoint.txid, + utxo.outpoint.vout, ); String::new() } @@ -1017,9 +1028,7 @@ pub async fn get_account_info( .unwrap_or_default(); // Get confirmation info and coinbase status from transaction details - let tx_detail = transactions - .iter() - .find(|tx| tx.txid == utxo.outpoint.txid); + let tx_detail = transactions.iter().find(|tx| tx.txid == utxo.outpoint.txid); let (block_height, confirmations) = tx_detail .and_then(|tx| tx.confirmation_time.as_ref()) @@ -1112,9 +1121,11 @@ pub async fn get_transaction_history( let wallet = create_and_sync_wallet(&setup, client)?; // Balance - let bdk_balance = wallet.get_balance().map_err(|e| AccountInfoError::WalletError { - error_details: format!("Failed to get balance: {}", e), - })?; + let bdk_balance = wallet + .get_balance() + .map_err(|e| AccountInfoError::WalletError { + error_details: format!("Failed to get balance: {}", e), + })?; let balance: WalletBalance = bdk_balance.into(); // Transaction history @@ -1129,14 +1140,13 @@ pub async fn get_transaction_history( .map(|tx| { let (direction, amount, net) = classify_tx(tx.sent, tx.received, tx.fee); - let (block_height, timestamp, confirmations) = - match tx.confirmation_time.as_ref() { - Some(conf) => { - let confs = tip_height.saturating_sub(conf.height) + 1; - (Some(conf.height), Some(conf.timestamp), confs) - } - None => (None, None, 0), - }; + let (block_height, timestamp, confirmations) = match tx.confirmation_time.as_ref() { + Some(conf) => { + let confs = tip_height.saturating_sub(conf.height) + 1; + (Some(conf.height), Some(conf.timestamp), confs) + } + None => (None, None, 0), + }; HistoryTransaction { txid: tx.txid.to_string(), @@ -1154,13 +1164,11 @@ pub async fn 
get_transaction_history( .collect(); // Sort: unconfirmed first, then by timestamp descending - history.sort_by(|a, b| { - match (a.timestamp, b.timestamp) { - (None, Some(_)) => std::cmp::Ordering::Less, - (Some(_), None) => std::cmp::Ordering::Greater, - (None, None) => std::cmp::Ordering::Equal, - (Some(a_ts), Some(b_ts)) => b_ts.cmp(&a_ts), - } + history.sort_by(|a, b| match (a.timestamp, b.timestamp) { + (None, Some(_)) => std::cmp::Ordering::Less, + (Some(_), None) => std::cmp::Ordering::Greater, + (None, None) => std::cmp::Ordering::Equal, + (Some(a_ts), Some(b_ts)) => b_ts.cmp(&a_ts), }); let tx_count = u32::try_from(history.len()).unwrap_or(u32::MAX); @@ -1215,12 +1223,11 @@ pub async fn get_transaction_detail( error_details: format!("Failed to list transactions: {}", e), })?; - let tx = txs - .iter() - .find(|t| t.txid == target_txid) - .ok_or_else(|| AccountInfoError::TransactionNotFound { + let tx = txs.iter().find(|t| t.txid == target_txid).ok_or_else(|| { + AccountInfoError::TransactionNotFound { error_details: format!("Transaction {} not found in wallet", target_txid), - })?; + } + })?; // Summary fields let (direction, amount, net) = classify_tx(tx.sent, tx.received, tx.fee); @@ -1234,14 +1241,12 @@ }; // Raw transaction details - let raw_tx = tx.transaction.as_ref().ok_or_else(|| { - AccountInfoError::WalletError { - error_details: format!( - "Raw transaction data not available for {}", - target_txid - ), - } - })?; + let raw_tx = tx + .transaction + .as_ref() + .ok_or_else(|| AccountInfoError::WalletError { + error_details: format!("Raw transaction data not available for {}", target_txid), + })?; let inputs: Vec&lt;TxDetailInput&gt; = raw_tx .input @@ -1322,10 +1327,9 @@ pub async fn get_address_info( network: Option&lt;Network&gt;, ) -> Result&lt;SingleAddressInfoResult, AccountInfoError&gt; { // Parse with BDK's bitcoin crate for script_pubkey generation - let bdk_addr = BdkAddress::from_str(address) - .map_err(|e| AccountInfoError::InvalidAddress { - error_details: format!("Invalid address:
{}", e), - })?; + let bdk_addr = BdkAddress::from_str(address).map_err(|e| AccountInfoError::InvalidAddress { + error_details: format!("Invalid address: {}", e), + })?; let bdk_addr = match network { Some(net) => { let bdk_network = onchain_to_bdk_network(net); @@ -1341,70 +1345,70 @@ pub async fn get_address_info( let electrum_url_owned = electrum_url.to_string(); let addr_str = address.to_string(); - let result = tokio::task::spawn_blocking(move || { - let (client, tip_height) = connect_and_get_tip(&electrum_url_owned)?; + let result = + tokio::task::spawn_blocking(move || { + let (client, tip_height) = connect_and_get_tip(&electrum_url_owned)?; - let script = bdk_addr.script_pubkey(); + let script = bdk_addr.script_pubkey(); - // Get UTXOs for this address - let utxos = client.script_list_unspent(&script).map_err(|e| { - AccountInfoError::ElectrumError { - error_details: format!("Failed to list UTXOs: {}", e), - } - })?; + // Get UTXOs for this address + let utxos = client.script_list_unspent(&script).map_err(|e| { + AccountInfoError::ElectrumError { + error_details: format!("Failed to list UTXOs: {}", e), + } + })?; - // Get history for transfer count - let history = client.script_get_history(&script).map_err(|e| { - AccountInfoError::ElectrumError { - error_details: format!("Failed to get history: {}", e), - } - })?; + // Get history for transfer count + let history = client.script_get_history(&script).map_err(|e| { + AccountInfoError::ElectrumError { + error_details: format!("Failed to get history: {}", e), + } + })?; - let account_utxos: Vec = utxos - .iter() - .map(|utxo| { - let height = u32::try_from(utxo.height).unwrap_or(0); - let confirmations = if height > 0 { - tip_height.saturating_sub(height) + 1 - } else { - 0 - }; + let account_utxos: Vec = utxos + .iter() + .map(|utxo| { + let height = u32::try_from(utxo.height).unwrap_or(0); + let confirmations = if height > 0 { + tip_height.saturating_sub(height) + 1 + } else { + 0 + }; - let vout = 
u32::try_from(utxo.tx_pos).map_err(|_| { - AccountInfoError::WalletError { - error_details: format!("Output index {} exceeds u32", utxo.tx_pos), - } - })?; + let vout = + u32::try_from(utxo.tx_pos).map_err(|_| AccountInfoError::WalletError { + error_details: format!("Output index {} exceeds u32", utxo.tx_pos), + })?; - Ok(AccountUtxo { - txid: utxo.tx_hash.to_string(), - vout, - amount: utxo.value, - block_height: height, - address: addr_str.clone(), - path: String::new(), // No derivation path for single address - confirmations, - coinbase: false, - own: true, - required: None, + Ok(AccountUtxo { + txid: utxo.tx_hash.to_string(), + vout, + amount: utxo.value, + block_height: height, + address: addr_str.clone(), + path: String::new(), // No derivation path for single address + confirmations, + coinbase: false, + own: true, + required: None, + }) }) - }) - .collect::, AccountInfoError>>()?; + .collect::, AccountInfoError>>()?; - let balance: u64 = utxos.iter().map(|u| u.value).sum(); + let balance: u64 = utxos.iter().map(|u| u.value).sum(); - Ok::<_, AccountInfoError>(SingleAddressInfoResult { - address: addr_str, - balance, - utxos: account_utxos, - transfers: u32::try_from(history.len()).unwrap_or(u32::MAX), - block_height: tip_height, + Ok::<_, AccountInfoError>(SingleAddressInfoResult { + address: addr_str, + balance, + utxos: account_utxos, + transfers: u32::try_from(history.len()).unwrap_or(u32::MAX), + block_height: tip_height, + }) }) - }) - .await - .map_err(|e| AccountInfoError::SyncError { - error_details: format!("Task failed: {}", e), - })??; + .await + .map_err(|e| AccountInfoError::SyncError { + error_details: format!("Task failed: {}", e), + })??; Ok(result) } @@ -1426,15 +1430,19 @@ fn determine_network(address: &str) -> Result { s if s.starts_with("1") || s.starts_with("3") || s.starts_with("bc1") => { println!("✓ Determined network: Bitcoin"); Ok(Network::Bitcoin) - }, - s if s.starts_with("2") || s.starts_with("tb1") || s.starts_with("m") || 
s.starts_with("n") => { + } + s if s.starts_with("2") + || s.starts_with("tb1") + || s.starts_with("m") + || s.starts_with("n") => + { println!("✓ Determined network: Testnet"); Ok(Network::Testnet) - }, + } s if s.starts_with("bcrt1") => { println!("✓ Determined network: Regtest"); Ok(Network::Regtest) - }, + } _ => { println!("✗ Could not determine network"); Err(AddressError::InvalidNetwork) @@ -1442,10 +1450,16 @@ fn determine_network(address: &str) -> Result&lt;Network, AddressError&gt; { } } -fn verify_network(unchecked_addr: Address&lt;NetworkUnchecked&gt;, expected_network: Network) - -> Result&lt;Address, AddressError&gt; { - println!("Attempting to verify address for network: {:?}", expected_network); - unchecked_addr.require_network(expected_network) +fn verify_network( + unchecked_addr: Address&lt;NetworkUnchecked&gt;, + expected_network: Network, +) -> Result&lt;Address, AddressError&gt; { + println!( + "Attempting to verify address for network: {:?}", + expected_network + ); + unchecked_addr + .require_network(expected_network) .map_err(|e| { println!("✗ Network verification failed: {:?}", e); AddressError::InvalidNetwork @@ -1459,15 +1473,21 @@ fn verify_network(unchecked_addr: Address&lt;NetworkUnchecked&gt;, expected_network: N fn get_address_type(address: &str) -> Result&lt;AddressType, AddressError&gt; { let address_type = match address { // Legacy addresses (P2PKH) - s if s.starts_with("1") || s.starts_with("m") || s.starts_with("n") => Some(AddressType::P2PKH), + s if s.starts_with("1") || s.starts_with("m") || s.starts_with("n") => { + Some(AddressType::P2PKH) + } // SegWit addresses (P2SH) s if s.starts_with("3") || s.starts_with("2") => Some(AddressType::P2SH), // Taproot addresses (P2TR) s if s.starts_with("bc1p") || s.starts_with("tb1p") => Some(AddressType::P2TR), // Native SegWit addresses (P2WPKH) - s if (s.starts_with("bc1q") || s.starts_with("tb1q")) && s.len() == 42 => Some(AddressType::P2WPKH), + s if (s.starts_with("bc1q") || s.starts_with("tb1q")) && s.len() == 42 => { + Some(AddressType::P2WPKH) + } // Native SegWit Script addresses (P2WSH) - s if (s.starts_with("bc1q") || s.starts_with("tb1q")) && s.len() == 62 =>
Some(AddressType::P2WSH), + s if (s.starts_with("bc1q") || s.starts_with("tb1q")) && s.len() == 62 => { + Some(AddressType::P2WSH) + } // Regtest addresses s if s.starts_with("bcrt1") => { if s.len() == 42 { @@ -1477,15 +1497,17 @@ fn get_address_type(address: &str) -> Result { } else { Some(AddressType::Unknown) } - }, - _ => Some(AddressType::Unknown) + } + _ => Some(AddressType::Unknown), }; - address_type.map(|t| { - println!("✓ Determined address type: {:?}", t); - t - }).ok_or_else(|| { - println!("✗ Could not determine address type"); - AddressError::InvalidAddress - }) + address_type + .map(|t| { + println!("✓ Determined address type: {:?}", t); + t + }) + .ok_or_else(|| { + println!("✗ Could not determine address type"); + AddressError::InvalidAddress + }) } diff --git a/src/modules/onchain/mod.rs b/src/modules/onchain/mod.rs index 1b53bbd..524fbee 100644 --- a/src/modules/onchain/mod.rs +++ b/src/modules/onchain/mod.rs @@ -1,8 +1,9 @@ +mod compose; mod errors; mod implementation; mod types; -mod compose; +pub use compose::compose_transaction; pub use errors::{AccountInfoError, AddressError, BroadcastError, SweepError}; pub use implementation::{ broadcast_raw_tx, build_descriptors, derive_base_path, detect_account_type, @@ -10,14 +11,13 @@ pub use implementation::{ get_transaction_history, normalize_extended_key, BitcoinAddressValidator, }; pub use types::{ - AccountAddresses, AccountInfoResult, AccountType, AccountUtxo, AddressInfo, - AddressType, CoinSelection, ComposeAccount, ComposeOutput, ComposeParams, - ComposeResult, GetAddressResponse, GetAddressesResponse, HistoryTransaction, Network, - SingleAddressInfoResult, SweepResult, SweepTransactionPreview, SweepableBalances, - TransactionDetail, TransactionHistoryResult, TxDetailInput, TxDetailOutput, TxDirection, - ValidationResult, WalletBalance, WalletParams, WordCount, + AccountAddresses, AccountInfoResult, AccountType, AccountUtxo, AddressInfo, AddressType, + CoinSelection, ComposeAccount, 
ComposeOutput, ComposeParams, ComposeResult, GetAddressResponse, + GetAddressesResponse, HistoryTransaction, Network, SingleAddressInfoResult, SweepResult, + SweepTransactionPreview, SweepableBalances, TransactionDetail, TransactionHistoryResult, + TxDetailInput, TxDetailOutput, TxDirection, ValidationResult, WalletBalance, WalletParams, + WordCount, }; -pub use compose::compose_transaction; #[cfg(test)] mod tests; diff --git a/src/modules/onchain/tests.rs b/src/modules/onchain/tests.rs index 482dc67..a02136b 100644 --- a/src/modules/onchain/tests.rs +++ b/src/modules/onchain/tests.rs @@ -8,10 +8,26 @@ mod tests { #[test] fn test_address_types() { let test_cases = vec![ - ("1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2", AddressType::P2PKH, "Legacy"), - ("3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy", AddressType::P2SH, "SegWit"), - ("bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4", AddressType::P2WPKH, "Native SegWit"), - ("bc1pt2a0lztpd6ejcswsxaw3n5l56jvf0yu0ah6fcapgqfs7hx9fyf0sufnaej", AddressType::P2TR, "Taproot"), + ( + "1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2", + AddressType::P2PKH, + "Legacy", + ), + ( + "3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy", + AddressType::P2SH, + "SegWit", + ), + ( + "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4", + AddressType::P2WPKH, + "Native SegWit", + ), + ( + "bc1pt2a0lztpd6ejcswsxaw3n5l56jvf0yu0ah6fcapgqfs7hx9fyf0sufnaej", + AddressType::P2TR, + "Taproot", + ), ]; for (address, expected_type, expected_common) in test_cases { @@ -72,7 +88,8 @@ mod tests { assert_eq!(mnemonic.split_whitespace().count(), 12); // Test with 24 words - let mnemonic = BitcoinAddressValidator::genenerate_mnemonic(Some(WordCount::Words24)).unwrap(); + let mnemonic = + BitcoinAddressValidator::genenerate_mnemonic(Some(WordCount::Words24)).unwrap(); assert_eq!(mnemonic.split_whitespace().count(), 24); } @@ -88,7 +105,8 @@ mod tests { Some(path), Some(Network::Bitcoin), None, - ).unwrap(); + ) + .unwrap(); assert_eq!(result.address, "1LqBGSKuX5yYUonjxT5qGfpUsXKYYWeabA"); 
assert_eq!(result.path, path); @@ -100,7 +118,8 @@ mod tests { Some(path), Some(Network::Bitcoin), None, - ).unwrap(); + ) + .unwrap(); assert_eq!(result.address, "bc1qcr8te4kr609gcawutmrza0j4xv80jy8z306fyu"); assert_eq!(result.path, path); @@ -112,7 +131,8 @@ mod tests { Some(path), Some(Network::Bitcoin), None, - ).unwrap(); + ) + .unwrap(); assert_eq!(result.address, "37VucYSaXLCAsxYyAPfbSi9eh4iEcbShgf"); assert_eq!(result.path, path); @@ -133,7 +153,8 @@ mod tests { None, None, Some(3), - ).unwrap(); + ) + .unwrap(); // Check count and correct paths assert_eq!(result.addresses.len(), 3); @@ -142,7 +163,10 @@ mod tests { assert_eq!(result.addresses[2].path, "m/84'/0'/0'/0/2"); // Verify first address matches expected - assert_eq!(result.addresses[0].address, "bc1qcr8te4kr609gcawutmrza0j4xv80jy8z306fyu"); + assert_eq!( + result.addresses[0].address, + "bc1qcr8te4kr609gcawutmrza0j4xv80jy8z306fyu" + ); // Test change addresses derivation let result = BitcoinAddressValidator::derive_bitcoin_addresses( @@ -153,7 +177,8 @@ mod tests { Some(true), None, Some(2), - ).unwrap(); + ) + .unwrap(); // Check change addresses use correct paths assert_eq!(result.addresses.len(), 2); @@ -169,7 +194,8 @@ mod tests { None, Some(5), Some(2), - ).unwrap(); + ) + .unwrap(); assert_eq!(result.addresses.len(), 2); assert_eq!(result.addresses[0].path, "m/84'/0'/0'/0/5"); @@ -188,9 +214,13 @@ mod tests { Some(path), Some(Network::Bitcoin), None, - ).unwrap(); + ) + .unwrap(); - assert_eq!(private_key, "KyZpNDKnfs94vbrwhJneDi77V6jF64PWPF8x5cdJb8ifgg2DUc9d"); + assert_eq!( + private_key, + "KyZpNDKnfs94vbrwhJneDi77V6jF64PWPF8x5cdJb8ifgg2DUc9d" + ); // Test for P2PKH path let path = "m/44'/0'/0'/0/0"; @@ -199,9 +229,13 @@ mod tests { Some(path), Some(Network::Bitcoin), None, - ).unwrap(); + ) + .unwrap(); - assert_eq!(private_key, "L4p2b9VAf8k5aUahF1JCJUzZkgNEAqLfq8DDdQiyAprQAKSbu8hf"); + assert_eq!( + private_key, + "L4p2b9VAf8k5aUahF1JCJUzZkgNEAqLfq8DDdQiyAprQAKSbu8hf" + ); } #[test] @@ 
-253,7 +287,8 @@ mod tests { let seed1 = BitcoinAddressValidator::mnemonic_to_seed(mnemonic, None).unwrap(); assert_eq!(seed1.len(), 64); - let seed2 = BitcoinAddressValidator::mnemonic_to_seed(mnemonic, Some("passphrase")).unwrap(); + let seed2 = + BitcoinAddressValidator::mnemonic_to_seed(mnemonic, Some("passphrase")).unwrap(); assert_eq!(seed2.len(), 64); assert_ne!(seed1, seed2); @@ -478,7 +513,10 @@ mod tests { .await .expect("Failed to check balances"); - assert!(balances.total_balance > 0, "Mnemonic must be funded to run this test"); + assert!( + balances.total_balance > 0, + "Mnemonic must be funded to run this test" + ); let preview = BitcoinAddressValidator::prepare_sweep_transaction( mnemonic, @@ -523,7 +561,10 @@ mod tests { )); assert!(result.is_err()); - assert!(matches!(result.unwrap_err(), BroadcastError::InvalidHex { .. })); + assert!(matches!( + result.unwrap_err(), + BroadcastError::InvalidHex { .. } + )); } #[test] @@ -539,7 +580,10 @@ mod tests { )); assert!(result.is_err()); - assert!(matches!(result.unwrap_err(), BroadcastError::InvalidTransaction { .. })); + assert!(matches!( + result.unwrap_err(), + BroadcastError::InvalidTransaction { .. 
} + )); } // ======================================================================== @@ -565,17 +609,41 @@ mod tests { use crate::modules::onchain::AccountType; // Standard prefixes - assert_eq!(detect_account_type("xpub6ABC").unwrap(), AccountType::Legacy); - assert_eq!(detect_account_type("tpub6ABC").unwrap(), AccountType::Legacy); - assert_eq!(detect_account_type("ypub6ABC").unwrap(), AccountType::WrappedSegwit); - assert_eq!(detect_account_type("upub6ABC").unwrap(), AccountType::WrappedSegwit); - assert_eq!(detect_account_type("zpub6ABC").unwrap(), AccountType::NativeSegwit); - assert_eq!(detect_account_type("vpub6ABC").unwrap(), AccountType::NativeSegwit); + assert_eq!( + detect_account_type("xpub6ABC").unwrap(), + AccountType::Legacy + ); + assert_eq!( + detect_account_type("tpub6ABC").unwrap(), + AccountType::Legacy + ); + assert_eq!( + detect_account_type("ypub6ABC").unwrap(), + AccountType::WrappedSegwit + ); + assert_eq!( + detect_account_type("upub6ABC").unwrap(), + AccountType::WrappedSegwit + ); + assert_eq!( + detect_account_type("zpub6ABC").unwrap(), + AccountType::NativeSegwit + ); + assert_eq!( + detect_account_type("vpub6ABC").unwrap(), + AccountType::NativeSegwit + ); // Actual test keys assert_eq!(detect_account_type(TEST_TPUB).unwrap(), AccountType::Legacy); - assert_eq!(detect_account_type(TEST_UPUB).unwrap(), AccountType::WrappedSegwit); - assert_eq!(detect_account_type(TEST_VPUB).unwrap(), AccountType::NativeSegwit); + assert_eq!( + detect_account_type(TEST_UPUB).unwrap(), + AccountType::WrappedSegwit + ); + assert_eq!( + detect_account_type(TEST_VPUB).unwrap(), + AccountType::NativeSegwit + ); // Error cases assert!(detect_account_type("invalid_key").is_err()); @@ -588,19 +656,46 @@ mod tests { use bdk::bitcoin::Network as BdkNetwork; // Mainnet prefixes - assert_eq!(detect_network_from_key("xpub6ABC").unwrap(), BdkNetwork::Bitcoin); - assert_eq!(detect_network_from_key("ypub6ABC").unwrap(), BdkNetwork::Bitcoin); - 
assert_eq!(detect_network_from_key("zpub6ABC").unwrap(), BdkNetwork::Bitcoin); + assert_eq!( + detect_network_from_key("xpub6ABC").unwrap(), + BdkNetwork::Bitcoin + ); + assert_eq!( + detect_network_from_key("ypub6ABC").unwrap(), + BdkNetwork::Bitcoin + ); + assert_eq!( + detect_network_from_key("zpub6ABC").unwrap(), + BdkNetwork::Bitcoin + ); // Testnet prefixes - assert_eq!(detect_network_from_key("tpub6ABC").unwrap(), BdkNetwork::Testnet); - assert_eq!(detect_network_from_key("upub6ABC").unwrap(), BdkNetwork::Testnet); - assert_eq!(detect_network_from_key("vpub6ABC").unwrap(), BdkNetwork::Testnet); + assert_eq!( + detect_network_from_key("tpub6ABC").unwrap(), + BdkNetwork::Testnet + ); + assert_eq!( + detect_network_from_key("upub6ABC").unwrap(), + BdkNetwork::Testnet + ); + assert_eq!( + detect_network_from_key("vpub6ABC").unwrap(), + BdkNetwork::Testnet + ); // Actual test keys - assert_eq!(detect_network_from_key(TEST_TPUB).unwrap(), BdkNetwork::Testnet); - assert_eq!(detect_network_from_key(TEST_UPUB).unwrap(), BdkNetwork::Testnet); - assert_eq!(detect_network_from_key(TEST_VPUB).unwrap(), BdkNetwork::Testnet); + assert_eq!( + detect_network_from_key(TEST_TPUB).unwrap(), + BdkNetwork::Testnet + ); + assert_eq!( + detect_network_from_key(TEST_UPUB).unwrap(), + BdkNetwork::Testnet + ); + assert_eq!( + detect_network_from_key(TEST_VPUB).unwrap(), + BdkNetwork::Testnet + ); // Error cases assert!(detect_network_from_key("invalid").is_err()); @@ -613,16 +708,27 @@ mod tests { // tpub should remain unchanged let normalized_tpub = normalize_extended_key(TEST_TPUB).unwrap(); - assert!(normalized_tpub.starts_with("tpub"), "tpub should remain as tpub"); + assert!( + normalized_tpub.starts_with("tpub"), + "tpub should remain as tpub" + ); assert_eq!(normalized_tpub, TEST_TPUB); // upub should be converted to tpub let normalized_upub = normalize_extended_key(TEST_UPUB).unwrap(); - assert!(normalized_upub.starts_with("tpub"), "upub should be converted to tpub, got: {}", 
&normalized_upub[..4]); + assert!( + normalized_upub.starts_with("tpub"), + "upub should be converted to tpub, got: {}", + &normalized_upub[..4] + ); // vpub should be converted to tpub let normalized_vpub = normalize_extended_key(TEST_VPUB).unwrap(); - assert!(normalized_vpub.starts_with("tpub"), "vpub should be converted to tpub, got: {}", &normalized_vpub[..4]); + assert!( + normalized_vpub.starts_with("tpub"), + "vpub should be converted to tpub, got: {}", + &normalized_vpub[..4] + ); // Error cases assert!(normalize_extended_key("ab").is_err()); @@ -665,18 +771,42 @@ mod tests { use crate::modules::onchain::AccountType; use bdk::bitcoin::Network as BdkNetwork; - assert_eq!(derive_base_path(AccountType::Legacy, BdkNetwork::Bitcoin, 0), "m/44'/0'/0'"); - assert_eq!(derive_base_path(AccountType::WrappedSegwit, BdkNetwork::Bitcoin, 0), "m/49'/0'/0'"); - assert_eq!(derive_base_path(AccountType::NativeSegwit, BdkNetwork::Bitcoin, 0), "m/84'/0'/0'"); - assert_eq!(derive_base_path(AccountType::Taproot, BdkNetwork::Bitcoin, 0), "m/86'/0'/0'"); + assert_eq!( + derive_base_path(AccountType::Legacy, BdkNetwork::Bitcoin, 0), + "m/44'/0'/0'" + ); + assert_eq!( + derive_base_path(AccountType::WrappedSegwit, BdkNetwork::Bitcoin, 0), + "m/49'/0'/0'" + ); + assert_eq!( + derive_base_path(AccountType::NativeSegwit, BdkNetwork::Bitcoin, 0), + "m/84'/0'/0'" + ); + assert_eq!( + derive_base_path(AccountType::Taproot, BdkNetwork::Bitcoin, 0), + "m/86'/0'/0'" + ); // Testnet uses coin_type 1 - assert_eq!(derive_base_path(AccountType::Legacy, BdkNetwork::Testnet, 0), "m/44'/1'/0'"); - assert_eq!(derive_base_path(AccountType::NativeSegwit, BdkNetwork::Testnet, 0), "m/84'/1'/0'"); + assert_eq!( + derive_base_path(AccountType::Legacy, BdkNetwork::Testnet, 0), + "m/44'/1'/0'" + ); + assert_eq!( + derive_base_path(AccountType::NativeSegwit, BdkNetwork::Testnet, 0), + "m/84'/1'/0'" + ); // Non-zero account index - assert_eq!(derive_base_path(AccountType::WrappedSegwit, BdkNetwork::Bitcoin, 
2), "m/49'/0'/2'"); - assert_eq!(derive_base_path(AccountType::NativeSegwit, BdkNetwork::Testnet, 5), "m/84'/1'/5'"); + assert_eq!( + derive_base_path(AccountType::WrappedSegwit, BdkNetwork::Bitcoin, 2), + "m/49'/0'/2'" + ); + assert_eq!( + derive_base_path(AccountType::NativeSegwit, BdkNetwork::Testnet, 5), + "m/84'/1'/5'" + ); } #[test] @@ -709,7 +839,10 @@ mod tests { // Uppercase hex is accepted and normalized to lowercase in descriptor let result = resolve_wallet_setup(TEST_VPUB, None, None, Some("73C5DA0A")); - assert!(result.is_ok(), "Uppercase hex fingerprint should be accepted"); + assert!( + result.is_ok(), + "Uppercase hex fingerprint should be accepted" + ); let setup = result.unwrap(); assert!( setup.external_desc.contains("73c5da0a"), @@ -734,7 +867,9 @@ mod tests { // OP_RETURN only (no recipient) let r = validate_outputs( - &[ComposeOutput::OpReturn { data_hex: "cafe".into() }], + &[ComposeOutput::OpReturn { + data_hex: "cafe".into(), + }], net, ); assert!(r.is_err()); @@ -742,7 +877,10 @@ mod tests { // Zero-amount payment let r = validate_outputs( - &[ComposeOutput::Payment { address: valid_addr.into(), amount_sats: 0 }], + &[ComposeOutput::Payment { + address: valid_addr.into(), + amount_sats: 0, + }], net, ); assert!(r.is_err()); @@ -751,8 +889,12 @@ mod tests { // Multiple SendMax let r = validate_outputs( &[ - ComposeOutput::SendMax { address: valid_addr.into() }, - ComposeOutput::SendMax { address: valid_addr.into() }, + ComposeOutput::SendMax { + address: valid_addr.into(), + }, + ComposeOutput::SendMax { + address: valid_addr.into(), + }, ], net, ); @@ -762,8 +904,13 @@ mod tests { // Empty OP_RETURN data let r = validate_outputs( &[ - ComposeOutput::Payment { address: valid_addr.into(), amount_sats: 1_000 }, - ComposeOutput::OpReturn { data_hex: "".into() }, + ComposeOutput::Payment { + address: valid_addr.into(), + amount_sats: 1_000, + }, + ComposeOutput::OpReturn { + data_hex: "".into(), + }, ], net, ); @@ -773,8 +920,13 @@ mod tests { 
// OP_RETURN > 80 bytes (81 bytes = 162 hex chars) let r = validate_outputs( &[ - ComposeOutput::Payment { address: valid_addr.into(), amount_sats: 1_000 }, - ComposeOutput::OpReturn { data_hex: "aa".repeat(81) }, + ComposeOutput::Payment { + address: valid_addr.into(), + amount_sats: 1_000, + }, + ComposeOutput::OpReturn { + data_hex: "aa".repeat(81), + }, ], net, ); @@ -796,8 +948,13 @@ mod tests { // Valid outputs pass let r = validate_outputs( &[ - ComposeOutput::Payment { address: valid_addr.into(), amount_sats: 5_000 }, - ComposeOutput::OpReturn { data_hex: "deadbeef".into() }, + ComposeOutput::Payment { + address: valid_addr.into(), + amount_sats: 5_000, + }, + ComposeOutput::OpReturn { + data_hex: "deadbeef".into(), + }, ], net, ); @@ -806,9 +963,16 @@ mod tests { // Valid SendMax + Payment + OpReturn let r = validate_outputs( &[ - ComposeOutput::Payment { address: valid_addr.into(), amount_sats: 1_000 }, - ComposeOutput::SendMax { address: valid_addr.into() }, - ComposeOutput::OpReturn { data_hex: "cafe".into() }, + ComposeOutput::Payment { + address: valid_addr.into(), + amount_sats: 1_000, + }, + ComposeOutput::SendMax { + address: valid_addr.into(), + }, + ComposeOutput::OpReturn { + data_hex: "cafe".into(), + }, ], net, ); @@ -817,7 +981,7 @@ mod tests { #[tokio::test] async fn test_compose_empty_fee_rates() { - use crate::modules::onchain::{ComposeParams, ComposeOutput}; + use crate::modules::onchain::{ComposeOutput, ComposeParams}; let params = ComposeParams { wallet: test_wallet_params(None), @@ -830,7 +994,10 @@ mod tests { }; let results = crate::modules::onchain::compose_transaction(params).await; - assert!(results.is_empty(), "Empty fee_rates should return empty vec"); + assert!( + results.is_empty(), + "Empty fee_rates should return empty vec" + ); } // --- Integration Tests: get_account_info --- @@ -841,26 +1008,34 @@ mod tests { use crate::modules::onchain::get_account_info; use crate::modules::onchain::AccountType; - let result = 
get_account_info( - TEST_TPUB, - ACCOUNT_INFO_ELECTRUM_URL, - None, - None, - None, - ) - .await; + let result = get_account_info(TEST_TPUB, ACCOUNT_INFO_ELECTRUM_URL, None, None, None).await; let info = result.expect("get_account_info(tpub) should succeed"); assert_eq!(info.account_type, AccountType::Legacy); let balance: u64 = info.balance; - assert!(balance >= 100_000, "Expected balance >= 100,000 sats, got {}", balance); - assert!(info.utxo_count >= 1, "Expected at least 1 UTXO, got {}", info.utxo_count); + assert!( + balance >= 100_000, + "Expected balance >= 100,000 sats, got {}", + balance + ); + assert!( + info.utxo_count >= 1, + "Expected at least 1 UTXO, got {}", + info.utxo_count + ); assert!(info.block_height > 0, "Expected block_height > 0"); - assert!(info.account.path.starts_with("m/44'/1'/"), "Expected BIP44 testnet path, got {}", info.account.path); + assert!( + info.account.path.starts_with("m/44'/1'/"), + "Expected BIP44 testnet path, got {}", + info.account.path + ); assert!(!info.account.utxo.is_empty(), "Expected non-empty UTXOs"); // Verify address structure - assert!(!info.account.addresses.unused.is_empty(), "Expected unused addresses"); + assert!( + !info.account.addresses.unused.is_empty(), + "Expected unused addresses" + ); for addr in &info.account.addresses.used { assert!(!addr.address.is_empty()); assert!(addr.path.starts_with("m/44'/1'/")); @@ -874,8 +1049,10 @@ mod tests { assert!(!utxo.path.is_empty(), "UTXO should have a derivation path"); } - println!("tpub account info: balance={}, utxos={}, path={}, block_height={}", - info.balance, info.utxo_count, info.account.path, info.block_height); + println!( + "tpub account info: balance={}, utxos={}, path={}, block_height={}", + info.balance, info.utxo_count, info.account.path, info.block_height + ); } #[tokio::test] @@ -884,22 +1061,27 @@ mod tests { use crate::modules::onchain::get_account_info; use crate::modules::onchain::AccountType; - let result = get_account_info( - TEST_UPUB, - 
ACCOUNT_INFO_ELECTRUM_URL, - None, - None, - None, - ) - .await; + let result = get_account_info(TEST_UPUB, ACCOUNT_INFO_ELECTRUM_URL, None, None, None).await; let info = result.expect("get_account_info(upub) should succeed"); assert_eq!(info.account_type, AccountType::WrappedSegwit); let balance: u64 = info.balance; - assert!(balance >= 100_000, "Expected balance >= 100,000 sats, got {}", balance); - assert!(info.utxo_count >= 1, "Expected at least 1 UTXO, got {}", info.utxo_count); + assert!( + balance >= 100_000, + "Expected balance >= 100,000 sats, got {}", + balance + ); + assert!( + info.utxo_count >= 1, + "Expected at least 1 UTXO, got {}", + info.utxo_count + ); assert!(info.block_height > 0); - assert!(info.account.path.starts_with("m/49'/1'/"), "Expected BIP49 testnet path, got {}", info.account.path); + assert!( + info.account.path.starts_with("m/49'/1'/"), + "Expected BIP49 testnet path, got {}", + info.account.path + ); assert!(!info.account.utxo.is_empty()); for utxo in &info.account.utxo { @@ -908,8 +1090,10 @@ mod tests { assert!(amount > 0); } - println!("upub account info: balance={}, utxos={}, path={}, block_height={}", - info.balance, info.utxo_count, info.account.path, info.block_height); + println!( + "upub account info: balance={}, utxos={}, path={}, block_height={}", + info.balance, info.utxo_count, info.account.path, info.block_height + ); } #[tokio::test] @@ -918,22 +1102,27 @@ mod tests { use crate::modules::onchain::get_account_info; use crate::modules::onchain::AccountType; - let result = get_account_info( - TEST_VPUB, - ACCOUNT_INFO_ELECTRUM_URL, - None, - None, - None, - ) - .await; + let result = get_account_info(TEST_VPUB, ACCOUNT_INFO_ELECTRUM_URL, None, None, None).await; let info = result.expect("get_account_info(vpub) should succeed"); assert_eq!(info.account_type, AccountType::NativeSegwit); let balance: u64 = info.balance; - assert!(balance >= 100_000, "Expected balance >= 100,000 sats, got {}", balance); - 
assert!(info.utxo_count >= 1, "Expected at least 1 UTXO, got {}", info.utxo_count); + assert!( + balance >= 100_000, + "Expected balance >= 100,000 sats, got {}", + balance + ); + assert!( + info.utxo_count >= 1, + "Expected at least 1 UTXO, got {}", + info.utxo_count + ); assert!(info.block_height > 0); - assert!(info.account.path.starts_with("m/84'/1'/"), "Expected BIP84 testnet path, got {}", info.account.path); + assert!( + info.account.path.starts_with("m/84'/1'/"), + "Expected BIP84 testnet path, got {}", + info.account.path + ); assert!(!info.account.utxo.is_empty()); for utxo in &info.account.utxo { @@ -942,8 +1131,10 @@ mod tests { assert!(amount > 0); } - println!("vpub account info: balance={}, utxos={}, path={}, block_height={}", - info.balance, info.utxo_count, info.account.path, info.block_height); + println!( + "vpub account info: balance={}, utxos={}, path={}, block_height={}", + info.balance, info.utxo_count, info.account.path, info.block_height + ); } // --- Integration Tests: get_address_info --- @@ -953,19 +1144,22 @@ mod tests { async fn test_get_address_info_legacy() { use crate::modules::onchain::get_address_info; - let result = get_address_info( - TEST_LEGACY_ADDR, - ACCOUNT_INFO_ELECTRUM_URL, - None, - ) - .await; + let result = get_address_info(TEST_LEGACY_ADDR, ACCOUNT_INFO_ELECTRUM_URL, None).await; let info = result.expect("get_address_info(legacy) should succeed"); assert_eq!(info.address, TEST_LEGACY_ADDR); let balance: u64 = info.balance; - assert!(balance >= 100_000, "Expected balance >= 100,000 sats, got {}", balance); + assert!( + balance >= 100_000, + "Expected balance >= 100,000 sats, got {}", + balance + ); assert!(!info.utxos.is_empty(), "Expected non-empty UTXOs"); - assert!(info.transfers >= 1, "Expected at least 1 transfer, got {}", info.transfers); + assert!( + info.transfers >= 1, + "Expected at least 1 transfer, got {}", + info.transfers + ); assert!(info.block_height > 0); for utxo in &info.utxos { @@ -975,8 +1169,13 @@ 
mod tests { assert!(amount > 0); } - println!("Legacy address info: balance={}, utxos={}, transfers={}, block_height={}", - info.balance, info.utxos.len(), info.transfers, info.block_height); + println!( + "Legacy address info: balance={}, utxos={}, transfers={}, block_height={}", + info.balance, + info.utxos.len(), + info.transfers, + info.block_height + ); } #[tokio::test] @@ -984,17 +1183,16 @@ mod tests { async fn test_get_address_info_p2sh() { use crate::modules::onchain::get_address_info; - let result = get_address_info( - TEST_P2SH_ADDR, - ACCOUNT_INFO_ELECTRUM_URL, - None, - ) - .await; + let result = get_address_info(TEST_P2SH_ADDR, ACCOUNT_INFO_ELECTRUM_URL, None).await; let info = result.expect("get_address_info(p2sh) should succeed"); assert_eq!(info.address, TEST_P2SH_ADDR); let balance: u64 = info.balance; - assert!(balance >= 100_000, "Expected balance >= 100,000 sats, got {}", balance); + assert!( + balance >= 100_000, + "Expected balance >= 100,000 sats, got {}", + balance + ); assert!(!info.utxos.is_empty()); assert!(info.transfers >= 1); assert!(info.block_height > 0); @@ -1003,8 +1201,13 @@ mod tests { assert_eq!(utxo.address, TEST_P2SH_ADDR); } - println!("P2SH address info: balance={}, utxos={}, transfers={}, block_height={}", - info.balance, info.utxos.len(), info.transfers, info.block_height); + println!( + "P2SH address info: balance={}, utxos={}, transfers={}, block_height={}", + info.balance, + info.utxos.len(), + info.transfers, + info.block_height + ); } #[tokio::test] @@ -1012,17 +1215,17 @@ mod tests { async fn test_get_address_info_regtest_bech32() { use crate::modules::onchain::get_address_info; - let result = get_address_info( - TEST_REGTEST_BECH32_ADDR, - ACCOUNT_INFO_ELECTRUM_URL, - None, - ) - .await; + let result = + get_address_info(TEST_REGTEST_BECH32_ADDR, ACCOUNT_INFO_ELECTRUM_URL, None).await; let info = result.expect("get_address_info(regtest bech32) should succeed"); assert_eq!(info.address, TEST_REGTEST_BECH32_ADDR); 
let balance: u64 = info.balance; - assert!(balance >= 100_000, "Expected balance >= 100,000 sats, got {}", balance); + assert!( + balance >= 100_000, + "Expected balance >= 100,000 sats, got {}", + balance + ); assert!(!info.utxos.is_empty()); assert!(info.transfers >= 1); assert!(info.block_height > 0); @@ -1031,8 +1234,13 @@ mod tests { assert_eq!(utxo.address, TEST_REGTEST_BECH32_ADDR); } - println!("Regtest bech32 address info: balance={}, utxos={}, transfers={}, block_height={}", - info.balance, info.utxos.len(), info.transfers, info.block_height); + println!( + "Regtest bech32 address info: balance={}, utxos={}, transfers={}, block_height={}", + info.balance, + info.utxos.len(), + info.transfers, + info.block_height + ); } // --- Error / Edge Case Tests --- @@ -1099,12 +1307,7 @@ mod tests { async fn test_get_address_info_invalid_electrum() { use crate::modules::onchain::get_address_info; - let result = get_address_info( - TEST_LEGACY_ADDR, - "invalid://url", - None, - ) - .await; + let result = get_address_info(TEST_LEGACY_ADDR, "invalid://url", None).await; assert!(result.is_err(), "Expected error for invalid electrum URL"); } @@ -1164,13 +1367,8 @@ mod tests { use crate::modules::onchain::get_transaction_history; use crate::modules::onchain::{AccountType, TxDirection}; - let result = get_transaction_history( - TEST_VPUB, - ACCOUNT_INFO_ELECTRUM_URL, - None, - None, - ) - .await; + let result = + get_transaction_history(TEST_VPUB, ACCOUNT_INFO_ELECTRUM_URL, None, None).await; let info = result.expect("get_transaction_history(vpub) should succeed"); assert_eq!(info.account_type, AccountType::NativeSegwit); @@ -1187,8 +1385,14 @@ mod tests { // Verify transaction fields for tx in &info.transactions { - assert!(!tx.txid.is_empty(), "Transaction should have non-empty txid"); - assert!(tx.received > 0 || tx.sent > 0, "Transaction should have some value"); + assert!( + !tx.txid.is_empty(), + "Transaction should have non-empty txid" + ); + assert!( + tx.received > 
0 || tx.sent > 0, + "Transaction should have some value" + ); match tx.direction { TxDirection::Sent => assert!(tx.sent > 0), @@ -1205,11 +1409,17 @@ mod tests { let mut seen_confirmed = false; for tx in &info.transactions { if tx.timestamp.is_none() { - assert!(!seen_confirmed, "Unconfirmed txs should come before confirmed"); + assert!( + !seen_confirmed, + "Unconfirmed txs should come before confirmed" + ); } else { seen_confirmed = true; if let Some(prev) = prev_timestamp { - assert!(tx.timestamp.unwrap() <= prev, "Confirmed txs should be sorted newest first"); + assert!( + tx.timestamp.unwrap() <= prev, + "Confirmed txs should be sorted newest first" + ); } prev_timestamp = tx.timestamp; } @@ -1378,19 +1588,19 @@ mod tests { #[tokio::test] #[ignore] async fn test_get_transaction_detail_vpub() { - use crate::modules::onchain::{get_transaction_detail, get_transaction_history, TxDirection}; + use crate::modules::onchain::{ + get_transaction_detail, get_transaction_history, TxDirection, + }; // First get a known txid from the history - let history = get_transaction_history( - TEST_VPUB, - ACCOUNT_INFO_ELECTRUM_URL, - None, - None, - ) - .await - .expect("get_transaction_history should succeed"); + let history = get_transaction_history(TEST_VPUB, ACCOUNT_INFO_ELECTRUM_URL, None, None) + .await + .expect("get_transaction_history should succeed"); - assert!(!history.transactions.is_empty(), "Need at least 1 tx to test detail"); + assert!( + !history.transactions.is_empty(), + "Need at least 1 tx to test detail" + ); let target_tx = &history.transactions[0]; let target_txid = &target_tx.txid; @@ -1418,7 +1628,10 @@ mod tests { // Verify detail fields assert!(!detail.inputs.is_empty(), "Transaction should have inputs"); - assert!(!detail.outputs.is_empty(), "Transaction should have outputs"); + assert!( + !detail.outputs.is_empty(), + "Transaction should have outputs" + ); assert!(detail.size > 0, "Transaction size should be > 0"); assert!(detail.vsize > 0, "Transaction 
vsize should be > 0"); assert!(detail.weight > 0, "Transaction weight should be > 0"); @@ -1426,8 +1639,14 @@ mod tests { // Fee rate should be present when fee is known if detail.fee.is_some() { - assert!(detail.fee_rate.is_some(), "fee_rate should be present when fee is known"); - assert!(detail.fee_rate.unwrap() > 0.0, "fee_rate should be positive"); + assert!( + detail.fee_rate.is_some(), + "fee_rate should be present when fee is known" + ); + assert!( + detail.fee_rate.unwrap() > 0.0, + "fee_rate should be positive" + ); } // For received txs, at least one output should be ours @@ -1445,7 +1664,10 @@ mod tests { // Verify output fields for out in &detail.outputs { - assert!(!out.script_pubkey.is_empty(), "Output should have script_pubkey"); + assert!( + !out.script_pubkey.is_empty(), + "Output should have script_pubkey" + ); } println!( @@ -1464,14 +1686,9 @@ mod tests { async fn test_history_transaction_amount() { use crate::modules::onchain::{get_transaction_history, TxDirection}; - let history = get_transaction_history( - TEST_VPUB, - ACCOUNT_INFO_ELECTRUM_URL, - None, - None, - ) - .await - .expect("get_transaction_history should succeed"); + let history = get_transaction_history(TEST_VPUB, ACCOUNT_INFO_ELECTRUM_URL, None, None) + .await + .expect("get_transaction_history should succeed"); assert!(!history.transactions.is_empty(), "Need at least 1 tx"); @@ -1506,7 +1723,10 @@ mod tests { } } - println!("Verified amount field for {} transactions", history.transactions.len()); + println!( + "Verified amount field for {} transactions", + history.transactions.len() + ); } // ======================================================================== @@ -1526,7 +1746,9 @@ mod tests { #[tokio::test] #[ignore] async fn test_compose_basic_payment() { - use crate::modules::onchain::{compose_transaction, ComposeParams, ComposeOutput, ComposeResult}; + use crate::modules::onchain::{ + compose_transaction, ComposeOutput, ComposeParams, ComposeResult, + }; let params = 
ComposeParams { wallet: test_wallet_params(None), @@ -1542,10 +1764,18 @@ mod tests { assert_eq!(results.len(), 1); match &results[0] { - ComposeResult::Success { psbt, fee, total_spent, .. } => { + ComposeResult::Success { + psbt, + fee, + total_spent, + .. + } => { assert!(!psbt.is_empty(), "PSBT should not be empty"); assert!(*fee > 0, "Fee should be > 0"); - assert!(*total_spent > 5_000, "total_spent should be > payment amount"); + assert!( + *total_spent > 5_000, + "total_spent should be > payment amount" + ); use base64::{engine::general_purpose, Engine as _}; let decoded = general_purpose::STANDARD.decode(psbt); @@ -1558,7 +1788,9 @@ mod tests { #[tokio::test] #[ignore] async fn test_compose_send_max() { - use crate::modules::onchain::{compose_transaction, ComposeParams, ComposeOutput, ComposeResult}; + use crate::modules::onchain::{ + compose_transaction, ComposeOutput, ComposeParams, ComposeResult, + }; let params = ComposeParams { wallet: test_wallet_params(None), @@ -1573,7 +1805,9 @@ mod tests { assert_eq!(results.len(), 1); match &results[0] { - ComposeResult::Success { fee, total_spent, .. } => { + ComposeResult::Success { + fee, total_spent, .. + } => { assert!(*fee > 0, "Fee should be > 0"); assert!(*total_spent > 0, "Should have funds to send"); } @@ -1584,7 +1818,9 @@ mod tests { #[tokio::test] #[ignore] async fn test_compose_send_max_with_payment() { - use crate::modules::onchain::{compose_transaction, ComposeParams, ComposeOutput, ComposeResult}; + use crate::modules::onchain::{ + compose_transaction, ComposeOutput, ComposeParams, ComposeResult, + }; let params = ComposeParams { wallet: test_wallet_params(None), @@ -1605,9 +1841,14 @@ mod tests { assert_eq!(results.len(), 1); match &results[0] { - ComposeResult::Success { fee, total_spent, .. } => { + ComposeResult::Success { + fee, total_spent, .. 
+ } => { assert!(*fee > 0, "Fee should be > 0"); - assert!(*total_spent >= 1_000 + fee, "total_spent should cover payment + fee"); + assert!( + *total_spent >= 1_000 + fee, + "total_spent should cover payment + fee" + ); } ComposeResult::Error { error } => panic!("SendMax+Payment compose failed: {}", error), } @@ -1616,7 +1857,9 @@ mod tests { #[tokio::test] #[ignore] async fn test_compose_insufficient_funds() { - use crate::modules::onchain::{compose_transaction, ComposeParams, ComposeOutput, ComposeResult}; + use crate::modules::onchain::{ + compose_transaction, ComposeOutput, ComposeParams, ComposeResult, + }; let params = ComposeParams { wallet: test_wallet_params(None), @@ -1636,7 +1879,9 @@ mod tests { #[tokio::test] #[ignore] async fn test_compose_multiple_fee_rates() { - use crate::modules::onchain::{compose_transaction, ComposeParams, ComposeOutput, ComposeResult}; + use crate::modules::onchain::{ + compose_transaction, ComposeOutput, ComposeParams, ComposeResult, + }; let params = ComposeParams { wallet: test_wallet_params(None), @@ -1655,7 +1900,13 @@ mod tests { for (i, result) in results.iter().enumerate() { match result { ComposeResult::Success { fee, .. 
} => { - assert!(*fee > prev_fee, "Fee level {} ({} sats) should be > previous ({} sats)", i, fee, prev_fee); + assert!( + *fee > prev_fee, + "Fee level {} ({} sats) should be > previous ({} sats)", + i, + fee, + prev_fee + ); prev_fee = *fee; } ComposeResult::Error { error } => panic!("Fee level {} failed: {}", i, error), @@ -1666,7 +1917,9 @@ mod tests { #[tokio::test] #[ignore] async fn test_compose_with_fingerprint() { - use crate::modules::onchain::{compose_transaction, ComposeParams, ComposeOutput, ComposeResult}; + use crate::modules::onchain::{ + compose_transaction, ComposeOutput, ComposeParams, ComposeResult, + }; let params = ComposeParams { wallet: test_wallet_params(Some("73c5da0a".to_string())), @@ -1680,6 +1933,9 @@ mod tests { let results = compose_transaction(params).await; assert_eq!(results.len(), 1); - assert!(matches!(&results[0], ComposeResult::Success { .. }), "Compose with fingerprint should succeed"); + assert!( + matches!(&results[0], ComposeResult::Success { .. 
}), + "Compose with fingerprint should succeed" + ); } } diff --git a/src/modules/onchain/types.rs b/src/modules/onchain/types.rs index 845713d..f18091e 100644 --- a/src/modules/onchain/types.rs +++ b/src/modules/onchain/types.rs @@ -1,12 +1,11 @@ use crate::modules::scanner::NetworkType; +use bitcoin::Network as BitcoinNetwork; use bitcoin_address_generator::{ GetAddressResponse as ExternalGetAddressResponse, - GetAddressesResponse as ExternalGetAddressesResponse, - WordCount as ExternalWordCount + GetAddressesResponse as ExternalGetAddressesResponse, WordCount as ExternalWordCount, }; -use bitcoin::Network as BitcoinNetwork; -use uniffi::{Enum, Record}; use serde::{Deserialize, Serialize}; +use uniffi::{Enum, Record}; #[derive(Debug, Clone, Copy, Enum)] pub enum WordCount { @@ -23,7 +22,7 @@ } // For GetAddressResponse struct -#[derive(Debug, Serialize, Deserialize, Clone, Record)] // Added Record trait +#[derive(Debug, Serialize, Deserialize, Clone, Record)] // Added Record trait pub struct GetAddressResponse { /// The generated Bitcoin address as a string pub address: String, @@ -34,7 +33,7 @@ pub struct GetAddressResponse { } // For GetAddressesResponse struct -#[derive(Debug, Serialize, Deserialize, Clone, Record)] // Added Record trait +#[derive(Debug, Serialize, Deserialize, Clone, Record)] // Added Record trait pub struct GetAddressesResponse { /// Vector of generated Bitcoin addresses pub addresses: Vec<GetAddressResponse>, @@ -77,7 +76,11 @@ impl From<ExternalGetAddressResponse> for GetAddressResponse { impl From<ExternalGetAddressesResponse> for GetAddressesResponse { fn from(response: ExternalGetAddressesResponse) -> Self { Self { - addresses: response.addresses.into_iter().map(|addr| addr.into()).collect(), + addresses: response + .addresses + .into_iter() + .map(|addr| addr.into()) + .collect(), } } } @@ -122,11 +125,11 @@ impl From<NetworkType> for BitcoinNetwork { #[derive(uniffi::Enum, Debug, PartialEq)] pub enum AddressType { - P2PKH, // Legacy - P2SH, // SegWit - P2WPKH, // Native SegWit - P2WSH, // Native SegWit
Script - P2TR, // Taproot + P2PKH, // Legacy + P2SH, // SegWit + P2WPKH, // Native SegWit + P2WSH, // Native SegWit Script + P2TR, // Taproot Unknown, } @@ -382,9 +385,7 @@ pub enum ComposeResult { total_spent: u64, }, /// Composition failed (e.g. insufficient funds) - Error { - error: String, - }, + Error { error: String }, } // ============================================================================ @@ -409,8 +410,7 @@ pub enum TxDirection { /// /// Returns `(direction, display_amount, net_value)`. pub(crate) fn classify_tx(sent: u64, received: u64, fee: Option<u64>) -> (TxDirection, u64, i64) { - let net = - (received as i128 - sent as i128).clamp(i64::MIN as i128, i64::MAX as i128) as i64; + let net = (received as i128 - sent as i128).clamp(i64::MIN as i128, i64::MAX as i128) as i64; let direction = if sent > 0 && received > 0 { match fee { @@ -425,7 +425,9 @@ pub(crate) fn classify_tx(sent: u64, received: u64, fee: Option<u64>) -> (TxDire let amount = match direction { TxDirection::Received => received, - TxDirection::Sent => sent.saturating_sub(received).saturating_sub(fee.unwrap_or(0)), + TxDirection::Sent => sent + .saturating_sub(received) + .saturating_sub(fee.unwrap_or(0)), TxDirection::SelfTransfer => fee.unwrap_or(0), }; diff --git a/src/modules/pubky/auth.rs b/src/modules/pubky/auth.rs index e833178..ad4d2b6 100644 --- a/src/modules/pubky/auth.rs +++ b/src/modules/pubky/auth.rs @@ -1,6 +1,6 @@ use once_cell::sync::OnceCell; +use pubky::{AuthFlowKind, Capabilities, PubkyAuthFlow}; use tokio::sync::Mutex as TokioMutex; -use pubky::{PubkyAuthFlow, Capabilities, AuthFlowKind}; use super::errors::PubkyError; @@ -12,8 +12,10 @@ fn auth_flow_slot() -> &'static TokioMutex<Option<PubkyAuthFlow>> { /// Start a Pubky auth flow and return the `pubkyauth://` deep-link URL.
pub async fn start_pubky_auth(caps: String) -> Result<String, PubkyError> { - let capabilities = Capabilities::try_from(caps.as_str()) - .map_err(|e| PubkyError::InvalidCapabilities { reason: e.to_string() })?; + let capabilities = + Capabilities::try_from(caps.as_str()).map_err(|e| PubkyError::InvalidCapabilities { + reason: e.to_string(), + })?; let mut guard = auth_flow_slot().lock().await; @@ -23,8 +25,11 @@ pub async fn start_pubky_auth(caps: String) -> Result<String, PubkyError> { }); } - let flow = PubkyAuthFlow::start(&capabilities, AuthFlowKind::signin()) - .map_err(|e| PubkyError::AuthFailed { reason: e.to_string() })?; + let flow = PubkyAuthFlow::start(&capabilities, AuthFlowKind::signin()).map_err(|e| { + PubkyError::AuthFailed { + reason: e.to_string(), + } + })?; let url = flow.authorization_url().to_string(); *guard = Some(flow); @@ -49,7 +54,9 @@ pub async fn complete_pubky_auth() -> Result<String, PubkyError> { let session = flow .await_approval() .await - .map_err(|e| PubkyError::AuthFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::AuthFailed { + reason: e.to_string(), + })?; Ok(session.export_secret()) } diff --git a/src/modules/pubky/keys.rs b/src/modules/pubky/keys.rs index 18ea5e2..95b14ed 100644 --- a/src/modules/pubky/keys.rs +++ b/src/modules/pubky/keys.rs @@ -5,13 +5,16 @@ use super::errors::PubkyError; /// Reconstruct a [`Keypair`] from a hex-encoded 32-byte secret key.
pub(super) fn keypair_from_hex(secret_key_hex: &str) -> Result<Keypair, PubkyError> { - let bytes = hex::decode(secret_key_hex) - .map_err(|e| PubkyError::KeyError { reason: e.to_string() })?; - - let secret: [u8; 32] = bytes.try_into().map_err(|v: Vec<u8>| PubkyError::KeyError { - reason: format!("expected 32 bytes, got {}", v.len()), + let bytes = hex::decode(secret_key_hex).map_err(|e| PubkyError::KeyError { + reason: e.to_string(), })?; + let secret: [u8; 32] = bytes + .try_into() + .map_err(|v: Vec<u8>| PubkyError::KeyError { + reason: format!("expected 32 bytes, got {}", v.len()), + })?; + Ok(Keypair::from_secret(&secret)) } diff --git a/src/modules/pubky/profile.rs b/src/modules/pubky/profile.rs index 23f23c9..3e9f502 100644 --- a/src/modules/pubky/profile.rs +++ b/src/modules/pubky/profile.rs @@ -36,9 +36,9 @@ impl From for PubkyProfile { name: user.name, bio: user.bio, image: user.image, - links: user.links.map(|links| { - links.into_iter().map(PubkyProfileLink::from).collect() - }), + links: user + .links + .map(|links| links.into_iter().map(PubkyProfileLink::from).collect()), status: user.status, } } @@ -54,25 +54,27 @@ pub async fn fetch_pubky_profile(public_key: String) -> Result Result, Pub let addr = format!("{public_key}{FOLLOWS_PATH}"); let storage = pubky.public_storage(); - let list_builder = storage - .list(&addr) - .map_err(|e| PubkyError::FetchFailed { reason: e.to_string() })?; + let list_builder = storage.list(&addr).map_err(|e| PubkyError::FetchFailed { + reason: e.to_string(), + })?; let entries = match list_builder.send().await { Ok(entries) => entries, Err(e) if is_not_found(&e) => return Ok(Vec::new()), - Err(e) => return Err(PubkyError::FetchFailed { reason: e.to_string() }), + Err(e) => { + return Err(PubkyError::FetchFailed { + reason: e.to_string(), + }) + } }; let mut contacts = Vec::new(); diff --git a/src/modules/pubky/resolve.rs b/src/modules/pubky/resolve.rs index e990d22..62b6aec 100644 --- a/src/modules/pubky/resolve.rs +++ b/src/modules/pubky/resolve.rs
@@ -15,8 +15,9 @@ pub(super) fn get_pubky() -> Result<&'static Pubky, PubkyError> { /// Convert a `pubky://` URI to its `https://_pubky./…` transport URL. pub fn resolve_pubky_url(uri: String) -> Result<String, PubkyError> { - let url = pubky::resolve_pubky(&uri) - .map_err(|e| PubkyError::ResolutionFailed { reason: e.to_string() })?; + let url = pubky::resolve_pubky(&uri).map_err(|e| PubkyError::ResolutionFailed { + reason: e.to_string(), + })?; Ok(url.to_string()) } @@ -36,12 +37,16 @@ pub async fn fetch_pubky_file(uri: String) -> Result<Vec<u8>, PubkyError> { .public_storage() .get(&uri) .await - .map_err(|e| PubkyError::FetchFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::FetchFailed { + reason: e.to_string(), + })?; let bytes = response .bytes() .await - .map_err(|e| PubkyError::FetchFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::FetchFailed { + reason: e.to_string(), + })?; Ok(bytes.to_vec()) } diff --git a/src/modules/pubky/session.rs b/src/modules/pubky/session.rs index c6885b1..591ac34 100644 --- a/src/modules/pubky/session.rs +++ b/src/modules/pubky/session.rs @@ -1,4 +1,4 @@ -use pubky::{PublicKey, PubkySession}; +use pubky::{PubkySession, PublicKey}; use super::errors::PubkyError; use super::keys::keypair_from_hex; @@ -12,8 +12,10 @@ pub async fn pubky_sign_up( ) -> Result<String, PubkyError> { let kp = keypair_from_hex(&secret_key_hex)?; - let homeserver_pk = PublicKey::try_from_z32(&homeserver_public_key_z32) - .map_err(|e| PubkyError::KeyError { reason: e.to_string() })?; + let homeserver_pk = + PublicKey::try_from_z32(&homeserver_public_key_z32).map_err(|e| PubkyError::KeyError { + reason: e.to_string(), + })?; let pubky = get_pubky()?; let signer = pubky.signer(kp); @@ -21,7 +23,9 @@ pub async fn pubky_sign_up( let session = signer .signup(&homeserver_pk, signup_code.as_deref()) .await - .map_err(|e| PubkyError::AuthFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::AuthFailed { + reason: e.to_string(), + })?; Ok(session.export_secret()) } @@ -33,10
+37,9 @@ pub async fn pubky_sign_in(secret_key_hex: String) -> Result<String, PubkyError> { let pubky = get_pubky()?; let signer = pubky.signer(kp); - let session = signer - .signin() - .await - .map_err(|e| PubkyError::AuthFailed { reason: e.to_string() })?; + let session = signer.signin().await.map_err(|e| PubkyError::AuthFailed { + reason: e.to_string(), + })?; Ok(session.export_secret()) } @@ -53,23 +56,24 @@ pub async fn pubky_session_put( .storage() .put(path.as_str(), content) .await - .map_err(|e| PubkyError::WriteFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::WriteFailed { + reason: e.to_string(), + })?; Ok(()) } /// Delete a resource at path on the user's homeserver. -pub async fn pubky_session_delete( - session_secret: String, - path: String, -) -> Result<(), PubkyError> { +pub async fn pubky_session_delete(session_secret: String, path: String) -> Result<(), PubkyError> { let session = import_session(&session_secret).await?; session .storage() .delete(path.as_str()) .await - .map_err(|e| PubkyError::WriteFailed { reason: format!("delete failed: {e}") })?; + .map_err(|e| PubkyError::WriteFailed { + reason: format!("delete failed: {e}"), + })?; Ok(()) } @@ -85,16 +89,17 @@ pub async fn pubky_put_with_secret_key( let pubky = get_pubky()?; let signer = pubky.signer(kp); - let session = signer - .signin() - .await - .map_err(|e| PubkyError::AuthFailed { reason: e.to_string() })?; + let session = signer.signin().await.map_err(|e| PubkyError::AuthFailed { + reason: e.to_string(), + })?; session .storage() .put(path.as_str(), content) .await - .map_err(|e| PubkyError::WriteFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::WriteFailed { + reason: e.to_string(), + })?; Ok(()) } @@ -109,10 +114,14 @@ pub async fn pubky_session_list( let entries = session .storage() .list(dir_path.as_str()) - .map_err(|e| PubkyError::FetchFailed { reason: e.to_string() })? + .map_err(|e| PubkyError::FetchFailed { + reason: e.to_string(), + })?
.send() .await - .map_err(|e| PubkyError::FetchFailed { reason: e.to_string() })?; + .map_err(|e| PubkyError::FetchFailed { + reason: e.to_string(), + })?; Ok(entries.iter().map(|e| e.to_pubky_url()).collect()) } @@ -121,5 +130,7 @@ pub async fn pubky_session_list( async fn import_session(session_secret: &str) -> Result<PubkySession, PubkyError> { PubkySession::import_secret(session_secret, None) .await - .map_err(|e| PubkyError::AuthFailed { reason: e.to_string() }) + .map_err(|e| PubkyError::AuthFailed { + reason: e.to_string(), + }) } diff --git a/src/modules/pubky/tests.rs b/src/modules/pubky/tests.rs index 37f47c9..9b488b5 100644 --- a/src/modules/pubky/tests.rs +++ b/src/modules/pubky/tests.rs @@ -96,7 +96,10 @@ fn profile_from_full_app_user() { let profile = PubkyProfile::from(user); assert_eq!(profile.name, "Alice"); assert_eq!(profile.bio.as_deref(), Some("Hello world")); - assert_eq!(profile.image.as_deref(), Some("https://example.com/avatar.png")); + assert_eq!( + profile.image.as_deref(), + Some("https://example.com/avatar.png") + ); assert_eq!(profile.status.as_deref(), Some("Online")); let links = profile.links.unwrap(); @@ -159,7 +162,10 @@ fn profile_deserialized_from_full_json() { assert_eq!(profile.name, "Dave"); assert_eq!(profile.bio.as_deref(), Some("Hello")); - assert_eq!(profile.image.as_deref(), Some("https://example.com/img.png")); + assert_eq!( + profile.image.as_deref(), + Some("https://example.com/img.png") + ); assert_eq!(profile.status.as_deref(), Some("Away")); let links = profile.links.unwrap(); assert_eq!(links.len(), 2); @@ -416,4 +422,3 @@ async fn fetch_file_string_malformed_uri_returns_error() { other => panic!("expected FetchFailed, got: {other:?}"), } } - diff --git a/src/modules/scanner/errors.rs b/src/modules/scanner/errors.rs index 9d1e453..46f8198 100644 --- a/src/modules/scanner/errors.rs +++ b/src/modules/scanner/errors.rs @@ -1,6 +1,6 @@ -use thiserror::Error; use crate::lnurl::LnurlError; use crate::onchain::AddressError; +use thiserror::Error;
#[derive(uniffi::Error, Debug, Error)] #[non_exhaustive] @@ -11,7 +11,9 @@ pub enum DecodingError { InvalidNetwork, #[error("Invalid amount")] InvalidAmount, - #[error("Invalid LNURL pay amount: {amount_satoshis} sats (must be between {min} and {max} sats)")] + #[error( + "Invalid LNURL pay amount: {amount_satoshis} sats (must be between {min} and {max} sats)" + )] InvalidLNURLPayAmount { amount_satoshis: u64, min: u64, @@ -32,9 +34,7 @@ pub enum DecodingError { #[error("Client creation failed")] ClientCreationFailed, #[error("Invoice creation failed: {error_message}")] - InvoiceCreationFailed { - error_message: String, - }, + InvoiceCreationFailed { error_message: String }, } impl From<LnurlError> for DecodingError { @@ -42,19 +42,21 @@ match error { LnurlError::InvoiceCreationFailed { error_details } => { DecodingError::InvoiceCreationFailed { - error_message: error_details + error_message: error_details, } - }, + } LnurlError::InvalidAddress => DecodingError::InvalidFormat, LnurlError::ClientCreationFailed => DecodingError::ClientCreationFailed, LnurlError::RequestFailed => DecodingError::RequestFailed, LnurlError::InvalidResponse => DecodingError::InvalidResponse, - LnurlError::InvalidAmount { amount_satoshis, min, max } => { - DecodingError::InvalidLNURLPayAmount { - amount_satoshis, - min, - max - } + LnurlError::InvalidAmount { + amount_satoshis, + min, + max, + } => DecodingError::InvalidLNURLPayAmount { + amount_satoshis, + min, + max, }, LnurlError::AuthenticationFailed => DecodingError::InvalidResponse, } @@ -72,4 +74,4 @@ impl From<AddressError> for DecodingError { AddressError::AddressDerivationFailed => DecodingError::InvalidFormat, } -} \ No newline at end of file +} diff --git a/src/modules/scanner/implementation.rs b/src/modules/scanner/implementation.rs index ceac077..6d7e091 100644 --- a/src/modules/scanner/implementation.rs +++ b/src/modules/scanner/implementation.rs @@ -1,26 +1,25 @@ -use std::collections::HashMap; -use std::str::FromStr;
+use super::errors::DecodingError; +use super::types::*; +use super::utils::*; +use crate::lnurl::is_lnurl_address; use async_trait::async_trait; use bitcoin::Network; +use chrono::{DateTime, Utc}; use lazy_regex::{lazy_regex, Lazy}; use lightning_invoice::Bolt11Invoice; -use lnurl::{Builder, LnUrlResponse}; use lnurl::lightning_address::LightningAddress; use lnurl::lnurl::LnUrl; -use url::Url; -use chrono::{DateTime, Utc}; +use lnurl::{Builder, LnUrlResponse}; use regex::Regex; -use crate::lnurl::is_lnurl_address; -use super::errors::DecodingError; -use super::types::*; -use super::utils::*; +use std::collections::HashMap; +use std::str::FromStr; +use url::Url; use crate::modules::onchain::BitcoinAddressValidator; impl LightningInvoice { pub fn get_timestamp(&self) -> DateTime<Utc> { - DateTime::<Utc>::from_timestamp(self.timestamp_seconds as i64, 0) - .unwrap_or_default() + DateTime::<Utc>::from_timestamp(self.timestamp_seconds as i64, 0).unwrap_or_default() } pub fn get_expiry(&self) -> core::time::Duration { @@ -44,9 +43,10 @@ impl Scanner { fn normalize_address_for_validation(address: &str) -> String { let address_lower = address.to_lowercase(); // Check if it's a bech32 address (starts with bc1, tb1, bcrt1, etc.)
- if address_lower.starts_with("bc1") || - address_lower.starts_with("tb1") || - address_lower.starts_with("bcrt1") { + if address_lower.starts_with("bc1") + || address_lower.starts_with("tb1") + || address_lower.starts_with("bcrt1") + { address_lower } else { // Legacy address - preserve original case @@ -63,13 +63,14 @@ impl Scanner { // Handle Bitkit deep links if invoice_str.starts_with("bitkit://") { let data = invoice_str.replace("bitkit://", ""); - + // Check if it's a gift code format: bitkit://gift-<code>-<amount> if data.starts_with("gift-") { let parts: Vec<&str> = data.splitn(3, '-').collect(); if parts.len() == 3 && parts[0] == "gift" { let code = parts[1]; - let amount = parts[2].parse::<u64>() + let amount = parts[2] + .parse::<u64>() .map_err(|_| DecodingError::InvalidFormat)?; return Ok(Scanner::Gift { code: code.to_string(), @@ -77,7 +78,7 @@ } } - + return Box::pin(Self::decode(data)).await; } @@ -90,14 +91,14 @@ } return Ok(Scanner::NodeId { url: invoice_str.to_string(), - network: NetworkType::Bitcoin + network: NetworkType::Bitcoin, }); } } - if invoice_str.to_lowercase().contains("lightning:") || - invoice_str.to_lowercase().starts_with("lntb") || - invoice_str.to_lowercase().starts_with("lnbc") + if invoice_str.to_lowercase().contains("lightning:") + || invoice_str.to_lowercase().starts_with("lntb") + || invoice_str.to_lowercase().starts_with("lnbc") { let invoice_lower = invoice_str.to_lowercase(); let invoice = invoice_lower @@ -106,13 +107,14 @@ Self::decode_lightning(invoice) } else if invoice_str.to_lowercase().starts_with("bitcoin:") { // Extract address and query params (preserve original case for query params) - let (address_part, query_part) = invoice_str[8..].split_once('?') + let (address_part, query_part) = invoice_str[8..]
+                .split_once('?')
                 .map(|(addr, query)| (addr, Some(query)))
                 .unwrap_or((&invoice_str[8..], None));
-            
+
             // Normalize address (only lowercase bech32, preserve legacy case)
             let address_normalized = Self::normalize_address_for_validation(address_part);
-            
+
             if BitcoinAddressValidator::validate_address(&address_normalized).is_ok() {
                 let normalized = if let Some(query) = query_part {
                     format!("bitcoin:{}?{}", address_normalized, query)
@@ -125,7 +127,7 @@ impl Scanner {
             }
         } else if invoice_str.to_lowercase().starts_with("pubkyauth:") {
             Ok(Scanner::PubkyAuth {
-                data: invoice_str.to_string()
+                data: invoice_str.to_string(),
             })
         } else if let Some(lnurl) = Self::find_lnurl(invoice_str) {
             Self::decode_lnurl(&lnurl).await
@@ -144,7 +146,9 @@ impl Scanner {
     }
 
     pub fn find_lnurl(text: &str) -> Option<String> {
-        static LNURL_REGEX: Lazy<Regex> = lazy_regex!(r"^(?:(http.*|bitcoin:.*)[&?]lightning=|lightning:)?(lnurl1[02-9ac-hj-np-z]+)");
+        static LNURL_REGEX: Lazy<Regex> = lazy_regex!(
+            r"^(?:(http.*|bitcoin:.*)[&?]lightning=|lightning:)?(lnurl1[02-9ac-hj-np-z]+)"
+        );
 
         // Convert input to lowercase and store it in a variable
         let text_lower = text.to_lowercase();
@@ -159,46 +163,43 @@ impl Scanner {
 
     async fn decode_lnurl(invoice_str: &str) -> Result<Scanner, DecodingError> {
         // Helper function to convert responses to Scanner enum
-        fn convert_response(uri: String, response: LnUrlResponse) -> Result<Scanner, DecodingError> {
+        fn convert_response(
+            uri: String,
+            response: LnUrlResponse,
+        ) -> Result<Scanner, DecodingError> {
             match response {
-                LnUrlResponse::LnUrlPayResponse(pay) => {
-                    Ok(Scanner::LnurlPay {
-                        data: LnurlPayData {
-                            uri,
-                            callback: pay.callback,
-                            min_sendable: pay.min_sendable,
-                            max_sendable: pay.max_sendable,
-                            metadata_str: pay.metadata,
-                            comment_allowed: pay.comment_allowed,
-                            allows_nostr: pay.allows_nostr.unwrap_or(false),
-                            nostr_pubkey: pay.nostr_pubkey.map(|key| key.serialize().to_vec()),
-                        }
-                    })
-                },
-                LnUrlResponse::LnUrlWithdrawResponse(withdraw) => {
-                    Ok(Scanner::LnurlWithdraw {
-                        data: LnurlWithdrawData {
-                            uri,
-                            callback: withdraw.callback,
-                            k1: withdraw.k1,
-                            default_description: withdraw.default_description,
-                            min_withdrawable: withdraw.min_withdrawable,
-                            max_withdrawable: withdraw.max_withdrawable,
-                            tag: withdraw.tag.to_string(),
-                        }
-                    })
-                },
-                LnUrlResponse::LnUrlChannelResponse(channel) => {
-                    Ok(Scanner::LnurlChannel {
-                        data: LnurlChannelData {
-                            uri: channel.uri,
-                            callback: channel.callback,
-                            k1: channel.k1,
-                            tag: channel.tag.to_string(),
-                        }
-                    })
-                },
-                _ => Err(DecodingError::InvalidFormat)
+                LnUrlResponse::LnUrlPayResponse(pay) => Ok(Scanner::LnurlPay {
+                    data: LnurlPayData {
+                        uri,
+                        callback: pay.callback,
+                        min_sendable: pay.min_sendable,
+                        max_sendable: pay.max_sendable,
+                        metadata_str: pay.metadata,
+                        comment_allowed: pay.comment_allowed,
+                        allows_nostr: pay.allows_nostr.unwrap_or(false),
+                        nostr_pubkey: pay.nostr_pubkey.map(|key| key.serialize().to_vec()),
+                    },
+                }),
+                LnUrlResponse::LnUrlWithdrawResponse(withdraw) => Ok(Scanner::LnurlWithdraw {
+                    data: LnurlWithdrawData {
+                        uri,
+                        callback: withdraw.callback,
+                        k1: withdraw.k1,
+                        default_description: withdraw.default_description,
+                        min_withdrawable: withdraw.min_withdrawable,
+                        max_withdrawable: withdraw.max_withdrawable,
+                        tag: withdraw.tag.to_string(),
+                    },
+                }),
+                LnUrlResponse::LnUrlChannelResponse(channel) => Ok(Scanner::LnurlChannel {
+                    data: LnurlChannelData {
+                        uri: channel.uri,
+                        callback: channel.callback,
+                        k1: channel.k1,
+                        tag: channel.tag.to_string(),
+                    },
+                }),
+                _ => Err(DecodingError::InvalidFormat),
             }
         }
 
@@ -206,10 +207,12 @@ impl Scanner {
         if is_lnurl_address(invoice_str) {
             if let Ok(ln_addr) = LightningAddress::from_str(invoice_str) {
                 let url = ln_addr.lnurlp_url();
-                let async_client = Builder::default().build_async()
+                let async_client = Builder::default()
+                    .build_async()
                     .map_err(|_| DecodingError::InvalidFormat)?;
 
-                let response = async_client.make_request(&*url)
+                let response = async_client
+                    .make_request(&*url)
                     .await
                     .map_err(|_| DecodingError::InvalidFormat)?;
 
@@ -218,20 +221,20 @@ impl Scanner {
         }
 
         // Handle LNURL
-        let lnurl = LnUrl::from_str(invoice_str)
-            .map_err(|_| DecodingError::InvalidFormat)?;
+        let lnurl = LnUrl::from_str(invoice_str).map_err(|_| DecodingError::InvalidFormat)?;
 
         // Check for LNURL-auth
         if lnurl.is_lnurl_auth() {
-            let parsed_url = Url::parse(&lnurl.url)
-                .map_err(|_| DecodingError::InvalidFormat)?;
+            let parsed_url = Url::parse(&lnurl.url).map_err(|_| DecodingError::InvalidFormat)?;
 
-            let k1 = parsed_url.query_pairs()
+            let k1 = parsed_url
+                .query_pairs()
                 .find(|(key, _)| key == "k1")
                 .map(|(_, value)| value.to_string())
                 .ok_or(DecodingError::InvalidFormat)?;
 
-            let domain = parsed_url.host_str()
+            let domain = parsed_url
+                .host_str()
                 .ok_or(DecodingError::InvalidFormat)?
                 .to_string();
 
@@ -241,15 +244,17 @@ impl Scanner {
                     tag: "login".to_string(),
                     k1,
                     domain,
-                }
+                },
             });
         }
 
         // Handle other LNURL types
-        let async_client = Builder::default().build_async()
+        let async_client = Builder::default()
+            .build_async()
             .map_err(|_| DecodingError::InvalidFormat)?;
 
-        let response = async_client.make_request(&lnurl.url)
+        let response = async_client
+            .make_request(&lnurl.url)
             .await
             .map_err(|_| DecodingError::InvalidFormat)?;
 
@@ -257,8 +262,8 @@ impl Scanner {
     }
 
     fn decode_lightning(invoice_str: &str) -> Result<Scanner, DecodingError> {
-        let bolt11_invoice = Bolt11Invoice::from_str(invoice_str)
-            .map_err(|_| DecodingError::InvalidFormat)?;
+        let bolt11_invoice =
+            Bolt11Invoice::from_str(invoice_str).map_err(|_| DecodingError::InvalidFormat)?;
 
         let network = NetworkType::from(bolt11_invoice.network());
         let amount_satoshis: u64 = bolt11_invoice.amount_milli_satoshis().unwrap_or(0) / 1000u64;
@@ -266,8 +271,9 @@ impl Scanner {
 
         let timestamp = DateTime::<Utc>::from_timestamp(
             bolt11_invoice.duration_since_epoch().as_secs() as i64,
-            0
-        ).unwrap_or_default();
+            0,
+        )
+        .unwrap_or_default();
 
         let description = match bolt11_invoice.description() {
             lightning_invoice::Bolt11InvoiceDescription::Direct(desc) => Some(desc.to_string()),
@@ -275,8 +281,7 @@ impl Scanner {
         };
 
         let payment_hash = AsRef::<[u8]>::as_ref(bolt11_invoice.payment_hash()).to_vec();
-        let payee_node_id = bolt11_invoice.recover_payee_pub_key()
-            .serialize().to_vec();
+        let payee_node_id = bolt11_invoice.recover_payee_pub_key().serialize().to_vec();
 
         let expiry = bolt11_invoice.expiry_time();
 
@@ -291,14 +296,14 @@ impl Scanner {
                 description,
                 network_type: network,
                 payee_node_id: Some(payee_node_id),
-            }
+            },
         })
     }
 
     fn handle_node_id(invoice_str: &str) -> Result<Scanner, DecodingError> {
         Ok(Scanner::NodeId {
             url: invoice_str.to_string(),
-            network: NetworkType::Bitcoin
+            network: NetworkType::Bitcoin,
         })
     }
 
@@ -315,24 +320,23 @@ impl Scanner {
             parts[1]
                 .split('&')
                 .filter_map(|param| {
-                    param.split_once('=').map(|(k, v)| {
-                        (k.to_string(), v.to_string())
-                    })
+                    param
+                        .split_once('=')
+                        .map(|(k, v)| (k.to_string(), v.to_string()))
                 })
                 .collect::<HashMap<_, _>>()
         } else {
             HashMap::new()
         };
 
-        let amount_satoshis = params.get("amount")
+        let amount_satoshis = params
+            .get("amount")
             .and_then(|amount| parse_amount_as_satoshis(amount).ok())
             .unwrap_or(0);
 
-        let label = params.get("label")
-            .map(String::from);
+        let label = params.get("label").map(String::from);
 
-        let message = params.get("message")
-            .map(String::from);
+        let message = params.get("message").map(String::from);
 
         Ok(Scanner::OnChain {
             invoice: OnChainInvoice {
@@ -341,7 +345,7 @@ impl Scanner {
                 label,
                 message,
                 params: Some(params),
-            }
+            },
         })
     }
 }
@@ -359,4 +363,4 @@ impl AsyncFromStr for Scanner {
     async fn from_str(s: &str) -> Result<Scanner, DecodingError> {
         Scanner::decode(s.to_string()).await
     }
-}
\ No newline at end of file
+}
diff --git a/src/modules/scanner/mod.rs b/src/modules/scanner/mod.rs
index b6d96d9..c01e35f 100644
--- a/src/modules/scanner/mod.rs
+++ b/src/modules/scanner/mod.rs
@@ -1,10 +1,10 @@
 mod errors;
-mod types;
-mod utils;
 mod implementation;
 
 #[cfg(test)]
 mod tests;
+mod types;
+mod utils;
 
 pub use errors::*;
+pub use implementation::*;
 pub use types::*;
-pub use implementation::*;
\ No newline at end of file
diff --git a/src/modules/scanner/tests.rs b/src/modules/scanner/tests.rs
index 22730ee..f9576b1 100644
--- a/src/modules/scanner/tests.rs
+++ b/src/modules/scanner/tests.rs
@@ -32,7 +32,7 @@ mod tests {
                 assert_eq!(params.get("label").unwrap(), "Test");
                 assert_eq!(params.get("message").unwrap(), "Test%20Payment");
                 assert_eq!(params.get("custom").unwrap(), "value");
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -46,7 +46,7 @@ mod tests {
                 assert_eq!(invoice.address, address);
                 assert_eq!(invoice.amount_satoshis, 0);
                 assert!(invoice.params.as_ref().unwrap().is_empty());
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -60,7 +60,7 @@ mod tests {
             Ok(Scanner::OnChain { invoice }) => {
                 // Address should be normalized to lowercase
                 assert_eq!(invoice.address, address_upper.to_lowercase());
-            },
+            }
             Ok(_) => assert!(false, "Should be an OnChain invoice"),
             Err(e) => panic!("Failed to decode uppercase address: {:?}", e),
         }
@@ -75,7 +75,7 @@ mod tests {
             Ok(Scanner::OnChain { invoice }) => {
                 // Legacy addresses are case-sensitive, should preserve original case
                 assert_eq!(invoice.address, legacy);
-            },
+            }
             Ok(_) => assert!(false, "Should be an OnChain invoice"),
             Err(e) => panic!("Failed to decode legacy address: {:?}", e),
         }
@@ -88,8 +88,11 @@ mod tests {
         let decoded = Scanner::decode(invoice.to_string()).await.unwrap();
         match decoded {
             Scanner::OnChain { invoice } => {
-                assert_eq!(invoice.address, "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq");
-            },
+                assert_eq!(
+                    invoice.address,
+                    "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq"
+                );
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -101,8 +104,11 @@ mod tests {
         let decoded = Scanner::decode(invoice.to_string()).await.unwrap();
         match decoded {
             Scanner::OnChain { invoice } => {
-                assert_eq!(invoice.address, "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq");
-            },
+                assert_eq!(
+                    invoice.address,
+                    "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq"
+                );
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -115,7 +121,7 @@ mod tests {
         match decoded {
             Scanner::OnChain { invoice } => {
                 assert_eq!(invoice.address, "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa");
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -128,7 +134,7 @@ mod tests {
         match decoded {
             Scanner::OnChain { invoice } => {
                 assert_eq!(invoice.address, "3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy");
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -142,7 +148,7 @@ mod tests {
             Scanner::OnChain { invoice } => {
                 // Prefix gets lowercased, but address case is preserved
                 assert_eq!(invoice.address, "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa");
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -150,15 +156,19 @@ mod tests {
     #[tokio::test]
     async fn test_bitcoin_prefix_with_query_params() {
         // Test bitcoin: prefix with bech32 address and query params
-        let invoice = "bitcoin:BC1QAR0SRRR7XFKVY5L643LYDNW9RE59GTZZWF5MDQ?amount=0.00001&label=Test";
+        let invoice =
+            "bitcoin:BC1QAR0SRRR7XFKVY5L643LYDNW9RE59GTZZWF5MDQ?amount=0.00001&label=Test";
         let decoded = Scanner::decode(invoice.to_string()).await.unwrap();
         match decoded {
             Scanner::OnChain { invoice } => {
                 // Address should be lowercased, query params preserved
-                assert_eq!(invoice.address, "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq");
+                assert_eq!(
+                    invoice.address,
+                    "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq"
+                );
                 assert_eq!(invoice.amount_satoshis, 1000);
                 assert_eq!(invoice.label.as_ref().unwrap(), "Test");
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -166,17 +176,21 @@ mod tests {
     #[tokio::test]
     async fn test_invalid_lightning_invoice() {
         let invoice = "lnbc1invalid".to_string();
-        assert!(matches!(Scanner::decode(invoice).await, Err(DecodingError::InvalidFormat)));
+        assert!(matches!(
+            Scanner::decode(invoice).await,
+            Err(DecodingError::InvalidFormat)
+        ));
     }
 
     #[tokio::test]
     async fn test_floating_point_amount_precision() {
-        let invoice = "bitcoin:bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq?amount=0.000035".to_string();
+        let invoice =
+            "bitcoin:bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq?amount=0.000035".to_string();
         let decoded = Scanner::decode(invoice).await.unwrap();
         match decoded {
             Scanner::OnChain { invoice } => {
                 assert_eq!(invoice.amount_satoshis, 3500);
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
@@ -199,17 +213,22 @@ mod tests {
     #[tokio::test]
     async fn test_uppercase_bitcoin_uri() {
         // Test uppercase BITCOIN: prefix with uppercase address (common in QR codes)
-        let invoice = "BITCOIN:BC1QAR0SRRR7XFKVY5L643LYDNW9RE59GTZZWF5MDQ?amount=0.00001&label=Test".to_string();
+        let invoice =
+            "BITCOIN:BC1QAR0SRRR7XFKVY5L643LYDNW9RE59GTZZWF5MDQ?amount=0.00001&label=Test"
+                .to_string();
         let decoded = Scanner::decode(invoice).await.unwrap();
         match decoded {
             Scanner::OnChain { invoice } => {
                 // Address should be normalized to lowercase
-                assert_eq!(invoice.address, "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq");
+                assert_eq!(
+                    invoice.address,
+                    "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq"
+                );
                 assert_eq!(invoice.amount_satoshis, 1000);
                 // Query params should preserve original case
                 assert_eq!(invoice.label.as_ref().unwrap(), "Test");
-            },
+            }
             _ => assert!(false, "Should be an OnChain invoice"),
         }
     }
-}
\ No newline at end of file
+}
diff --git a/src/modules/scanner/types.rs b/src/modules/scanner/types.rs
index 783ab3a..395236d 100644
--- a/src/modules/scanner/types.rs
+++ b/src/modules/scanner/types.rs
@@ -1,7 +1,7 @@
-use std::collections::HashMap;
-use std::fmt;
 use bitcoin::Network;
 use serde::Serialize;
+use std::collections::HashMap;
+use std::fmt;
 
 #[derive(uniffi::Enum, Debug, Clone, PartialEq, Serialize)]
 pub enum NetworkType {
@@ -130,4 +130,4 @@ pub enum Scanner {
     LnurlPay { data: LnurlPayData },
     NodeId { url: String, network: NetworkType },
     Gift { code: String, amount: u64 },
-}
\ No newline at end of file
+}
diff --git a/src/modules/scanner/utils.rs b/src/modules/scanner/utils.rs
index 12ecc62..447b83c 100644
--- a/src/modules/scanner/utils.rs
+++ b/src/modules/scanner/utils.rs
@@ -1,7 +1,8 @@
 use crate::DecodingError;
 
 pub fn parse_amount_as_satoshis(amount: &str) -> Result<u64, DecodingError> {
-    amount.parse::<f64>()
+    amount
+        .parse::<f64>()
         .map_err(|_| DecodingError::InvalidAmount)
         .map(|btc| (btc * 100_000_000.0).round() as u64)
-}
\ No newline at end of file
+}
diff --git a/src/modules/trezor/account_info.rs b/src/modules/trezor/account_info.rs
index e71bbe9..6b923ad 100644
--- a/src/modules/trezor/account_info.rs
+++ b/src/modules/trezor/account_info.rs
@@ -2,8 +2,8 @@
 //!
 //! Functions that bridge generic account types to Trezor's signing protocol.
 
-use crate::modules::onchain::AccountType;
 use super::types::TrezorScriptType;
+use crate::modules::onchain::AccountType;
 
 /// Map AccountType to Trezor's ScriptType for transaction inputs.
 pub fn account_type_to_script_type(account_type: AccountType) -> TrezorScriptType {
diff --git a/src/modules/trezor/callbacks.rs b/src/modules/trezor/callbacks.rs
index 2ca94ea..c3315b8 100644
--- a/src/modules/trezor/callbacks.rs
+++ b/src/modules/trezor/callbacks.rs
@@ -3,8 +3,8 @@
 //! These types and traits bridge the Rust trezor-connect-rs library with
 //! native iOS/Android implementations via UniFFI callback interfaces.
 
-use std::sync::Arc;
 use once_cell::sync::OnceCell;
+use std::sync::Arc;
 
 // ============================================================================
 // Transport callback types
@@ -115,7 +115,12 @@ pub trait TrezorTransportCallback: Send + Sync {
     /// * `path` - Device path
     /// * `message_type` - Protobuf message type (e.g., GetAddress = 29)
     /// * `data` - Serialized protobuf message data
-    fn call_message(&self, path: String, message_type: u16, data: Vec<u8>) -> Option<TrezorCallMessageResult>;
+    fn call_message(
+        &self,
+        path: String,
+        message_type: u16,
+        data: Vec<u8>,
+    ) -> Option<TrezorCallMessageResult>;
 
     /// Get pairing code from user during BLE THP pairing.
     ///
diff --git a/src/modules/trezor/errors.rs b/src/modules/trezor/errors.rs
index 32d0a0c..b6336da 100644
--- a/src/modules/trezor/errors.rs
+++ b/src/modules/trezor/errors.rs
@@ -93,9 +93,7 @@ impl From for TrezorError {
             // Top-level errors
             TE::Cancelled => TrezorError::UserCancelled,
             TE::Timeout => TrezorError::Timeout,
-            TE::IoError(msg) => TrezorError::IoError {
-                error_details: msg,
-            },
+            TE::IoError(msg) => TrezorError::IoError { error_details: msg },
 
             // Transport errors
             TE::Transport(transport_err) => match transport_err {
@@ -208,9 +206,7 @@ impl From for TrezorError {
             // THP (Trezor Host Protocol) errors
             TE::Thp(thp_err) => match thp_err {
                 ThpError::PairingRequired => TrezorError::PairingRequired,
-                ThpError::PairingFailed(msg) => TrezorError::PairingFailed {
-                    error_details: msg,
-                },
+                ThpError::PairingFailed(msg) => TrezorError::PairingFailed { error_details: msg },
                 ThpError::InvalidCredentials => TrezorError::PairingFailed {
                     error_details: "Invalid credentials".to_string(),
                 },
@@ -258,9 +254,7 @@ impl From for TrezorError {
 
             // Bitcoin errors
             TE::Bitcoin(bitcoin_err) => match bitcoin_err {
-                BitcoinError::InvalidPath(msg) => TrezorError::InvalidPath {
-                    error_details: msg,
-                },
+                BitcoinError::InvalidPath(msg) => TrezorError::InvalidPath { error_details: msg },
                 BitcoinError::InvalidAddress(msg) => TrezorError::DeviceError {
                     error_details: format!("Invalid address: {}", msg),
                 },
@@ -283,4 +277,3 @@ impl From for TrezorError {
         }
     }
 }
-
diff --git a/src/modules/trezor/implementation.rs b/src/modules/trezor/implementation.rs
index b6edda4..b07d637 100644
--- a/src/modules/trezor/implementation.rs
+++ b/src/modules/trezor/implementation.rs
@@ -3,31 +3,30 @@
 //! On desktop platforms, this uses the trezor-connect-rs library directly.
 //! On Android/iOS, this uses a callback-based transport implemented natively.
+use base64::{engine::general_purpose, Engine as _};
 use std::sync::Arc;
 use tokio::sync::Mutex;
-use base64::{engine::general_purpose, Engine as _};
 
 use crate::modules::trezor::{
-    TrezorAddressResponse, TrezorDeviceInfo, TrezorError, TrezorFeatures,
+    TrezorAddressResponse, TrezorCoinType, TrezorDeviceInfo, TrezorError, TrezorFeatures,
     TrezorGetAddressParams, TrezorGetPublicKeyParams, TrezorPublicKeyResponse,
-    TrezorSignMessageParams, TrezorSignedMessageResponse, TrezorVerifyMessageParams,
-    TrezorSignTxParams, TrezorSignedTx,
-    TrezorTransportType, TrezorCoinType,
+    TrezorSignMessageParams, TrezorSignTxParams, TrezorSignedMessageResponse, TrezorSignedTx,
+    TrezorTransportType, TrezorVerifyMessageParams,
 };
 
 // Desktop: use full trezor-connect-rs
 #[cfg(not(any(target_os = "android", target_os = "ios")))]
 use trezor_connect_rs::{
-    ConnectedDevice, DeviceInfo, GetAddressParams, GetPublicKeyParams,
-    SignMessageParams, SignTxParams, VerifyMessageParams, Trezor,
+    ConnectedDevice, DeviceInfo, GetAddressParams, GetPublicKeyParams, SignMessageParams,
+    SignTxParams, Trezor, VerifyMessageParams,
 };
 
 // Mobile: use callback transport from trezor-connect-rs
 #[cfg(any(target_os = "android", target_os = "ios"))]
 use trezor_connect_rs::{
-    CallbackTransport, CallbackDeviceInfo, CallbackReadResult, CallbackResult,
-    CallbackMessageResult, ConnectedDevice, GetAddressParams, GetPublicKeyParams,
-    SignMessageParams, SignTxParams, VerifyMessageParams, Transport, TransportCallback,
+    CallbackDeviceInfo, CallbackMessageResult, CallbackReadResult, CallbackResult,
+    CallbackTransport, ConnectedDevice, GetAddressParams, GetPublicKeyParams, SignMessageParams,
+    SignTxParams, Transport, TransportCallback, VerifyMessageParams,
 };
 
 #[cfg(any(target_os = "android", target_os = "ios"))]
@@ -113,12 +112,22 @@ fn validate_sign_tx_params(params: &SignTxParams) -> Result<(), TrezorError> {
     // Reject unsupported input script types
     for (i, input) in params.inputs.iter().enumerate() {
         match input.script_type {
-            trezor_connect_rs::ScriptType::SpendMultisig => return Err(TrezorError::DeviceError {
-                error_details: format!("Input {}: Multisig inputs are not currently supported.", i),
-            }),
-            trezor_connect_rs::ScriptType::External => return Err(TrezorError::DeviceError {
-                error_details: format!("Input {}: External inputs are not currently supported.", i),
-            }),
+            trezor_connect_rs::ScriptType::SpendMultisig => {
+                return Err(TrezorError::DeviceError {
+                    error_details: format!(
+                        "Input {}: Multisig inputs are not currently supported.",
+                        i
+                    ),
+                })
+            }
+            trezor_connect_rs::ScriptType::External => {
+                return Err(TrezorError::DeviceError {
+                    error_details: format!(
+                        "Input {}: External inputs are not currently supported.",
+                        i
+                    ),
+                })
+            }
             _ => {}
         }
     }
@@ -149,7 +158,10 @@ fn validate_sign_tx_params(params: &SignTxParams) -> Result<(), TrezorError> {
                 // Change output - must have script_type
                 if output.script_type.is_none() {
                     return Err(TrezorError::DeviceError {
-                        error_details: format!("Output {}: change output must specify script_type.", i),
+                        error_details: format!(
+                            "Output {}: change output must specify script_type.",
+                            i
+                        ),
                     });
                 }
             }
@@ -157,7 +169,10 @@ fn validate_sign_tx_params(params: &SignTxParams) -> Result<(), TrezorError> {
             // OP_RETURN output - amount must be 0
             if output.amount != 0 {
                 return Err(TrezorError::DeviceError {
-                    error_details: format!("Output {}: OP_RETURN output must have amount 0.", i),
+                    error_details: format!(
+                        "Output {}: OP_RETURN output must have amount 0.",
+                        i
+                    ),
                 });
             }
         }
@@ -255,13 +270,16 @@ impl TransportCallback for CallbackAdapter {
         self.callback.get_chunk_size(path.to_string())
     }
 
-    fn call_message(&self, path: &str, message_type: u16, data: &[u8]) -> Option<CallbackMessageResult> {
+    fn call_message(
+        &self,
+        path: &str,
+        message_type: u16,
+        data: &[u8],
+    ) -> Option<CallbackMessageResult> {
         // Call the native layer's call_message implementation
-        let result = self.callback.call_message(
-            path.to_string(),
-            message_type,
-            data.to_vec(),
-        );
+        let result = self
+            .callback
+            .call_message(path.to_string(), message_type, data.to_vec());
 
         // Convert from TrezorCallMessageResult to CallbackMessageResult
         result.map(|r| CallbackMessageResult {
@@ -278,7 +296,8 @@ impl TransportCallback for CallbackAdapter {
     }
 
     fn save_thp_credential(&self, device_id: &str, credential_json: &str) -> bool {
-        self.callback.save_thp_credential(device_id.to_string(), credential_json.to_string())
+        self.callback
+            .save_thp_credential(device_id.to_string(), credential_json.to_string())
     }
 
     fn load_thp_credential(&self, device_id: &str) -> Option<String> {
@@ -286,7 +305,8 @@ impl TransportCallback for CallbackAdapter {
     }
 
     fn log_debug(&self, tag: &str, message: &str) {
-        self.callback.log_debug(tag.to_string(), message.to_string());
+        self.callback
+            .log_debug(tag.to_string(), message.to_string());
     }
 }
 
@@ -301,12 +321,20 @@ struct UiCallbackAdapter {
 impl trezor_connect_rs::TrezorUiCallback for UiCallbackAdapter {
     fn on_pin_request(&self) -> Option<String> {
         let result = self.callback.on_pin_request();
-        if result.is_empty() { None } else { Some(result) }
+        if result.is_empty() {
+            None
+        } else {
+            Some(result)
+        }
     }
 
     fn on_passphrase_request(&self, on_device: bool) -> Option<String> {
         let result = self.callback.on_passphrase_request(on_device);
-        if result.is_empty() { None } else { Some(result) }
+        if result.is_empty() {
+            None
+        } else {
+            Some(result)
+        }
     }
 }
 
@@ -383,8 +411,7 @@ impl TrezorManager {
         // Desktop: Initialize trezor-connect-rs
         #[cfg(not(any(target_os = "android", target_os = "ios")))]
         {
-            let mut builder = Trezor::new()
-                .with_app_identity("Bitkit", "Bitkit");
+            let mut builder = Trezor::new().with_app_identity("Bitkit", "Bitkit");
 
             if let Some(ref path) = _credential_path {
                 builder = builder.with_credential_store(path);
@@ -421,8 +448,7 @@ impl TrezorManager {
         {
             use crate::get_transport_callback;
 
-            let callback = get_transport_callback()
-                .ok_or(TrezorError::NotInitialized)?;
+            let callback = get_transport_callback().ok_or(TrezorError::NotInitialized)?;
 
             let native_devices = callback.enumerate_devices();
 
@@ -527,8 +553,7 @@ impl TrezorManager {
         {
             use crate::get_transport_callback;
 
-            let callback = get_transport_callback()
-                .ok_or(TrezorError::NotInitialized)?;
+            let callback = get_transport_callback().ok_or(TrezorError::NotInitialized)?;
 
             // Find the device in cached list
             let device = {
@@ -541,13 +566,18 @@ impl TrezorManager {
             };
 
             // Create transport and connect - this will be reused for all operations
-            let adapter = Arc::new(CallbackAdapter { callback: callback.clone() });
-            let mut transport = CallbackTransport::new(adapter)
-                .with_app_identity("Bitkit", "Bitkit");
+            let adapter = Arc::new(CallbackAdapter {
+                callback: callback.clone(),
+            });
+            let mut transport =
+                CallbackTransport::new(adapter).with_app_identity("Bitkit", "Bitkit");
             transport.init().await.map_err(TrezorError::from)?;
 
             // Acquire a session (this triggers THP handshake for BLE)
-            let session = transport.acquire(&device.path, None).await.map_err(TrezorError::from)?;
+            let session = transport
+                .acquire(&device.path, None)
+                .await
+                .map_err(TrezorError::from)?;
 
             // Store connected path
             {
@@ -577,7 +607,9 @@ impl TrezorManager {
 
             // Wire UI callback if set
             if let Some(ui_cb) = crate::modules::trezor::get_ui_callback() {
-                let adapter = Arc::new(UiCallbackAdapter { callback: ui_cb.clone() });
+                let adapter = Arc::new(UiCallbackAdapter {
+                    callback: ui_cb.clone(),
+                });
                 connected.set_ui_callback(adapter);
             }
 
@@ -613,7 +645,9 @@ impl TrezorManager {
 
             // Wire UI callback if set
             if let Some(ui_cb) = crate::modules::trezor::get_ui_callback() {
-                let adapter = Arc::new(UiCallbackAdapter { callback: ui_cb.clone() });
+                let adapter = Arc::new(UiCallbackAdapter {
+                    callback: ui_cb.clone(),
+                });
                 connected.set_ui_callback(adapter);
             }
 
@@ -646,12 +680,13 @@ impl TrezorManager {
 
         // Both mobile and desktop: use stored connected device
         let mut connected_device = self.connected_device.lock().await;
-        let device = connected_device
-            .as_mut()
-            .ok_or(TrezorError::NotConnected)?;
+        let device = connected_device.as_mut().ok_or(TrezorError::NotConnected)?;
 
         let tc_params: GetAddressParams = params.into();
-        let response = device.get_address(tc_params).await.map_err(TrezorError::from)?;
+        let response = device
+            .get_address(tc_params)
+            .await
+            .map_err(TrezorError::from)?;
 
         Ok(TrezorAddressResponse::from(response))
     }
@@ -669,12 +704,13 @@ impl TrezorManager {
 
         // Both mobile and desktop: use stored connected device
         let mut connected_device = self.connected_device.lock().await;
-        let device = connected_device
-            .as_mut()
-            .ok_or(TrezorError::NotConnected)?;
+        let device = connected_device.as_mut().ok_or(TrezorError::NotConnected)?;
 
         let tc_params: GetPublicKeyParams = params.into();
-        let response = device.get_public_key(tc_params).await.map_err(TrezorError::from)?;
+        let response = device
+            .get_public_key(tc_params)
+            .await
+            .map_err(TrezorError::from)?;
 
         Ok(TrezorPublicKeyResponse::from(response))
     }
@@ -692,12 +728,13 @@ impl TrezorManager {
 
         // Both mobile and desktop: use stored connected device
         let mut connected_device = self.connected_device.lock().await;
-        let device = connected_device
-            .as_mut()
-            .ok_or(TrezorError::NotConnected)?;
+        let device = connected_device.as_mut().ok_or(TrezorError::NotConnected)?;
 
         let tc_params: SignMessageParams = params.into();
-        let response = device.sign_message(tc_params).await.map_err(TrezorError::from)?;
+        let response = device
+            .sign_message(tc_params)
+            .await
+            .map_err(TrezorError::from)?;
 
         Ok(TrezorSignedMessageResponse::from(response))
     }
@@ -712,32 +749,31 @@ impl TrezorManager {
     ) -> Result {
         // Both mobile and desktop: use stored connected device
         let mut connected_device = self.connected_device.lock().await;
-        let device = connected_device
-            .as_mut()
-            .ok_or(TrezorError::NotConnected)?;
+        let device = connected_device.as_mut().ok_or(TrezorError::NotConnected)?;
 
         let tc_params: VerifyMessageParams = params.into();
-        device.verify_message(tc_params).await.map_err(TrezorError::from)
+        device
+            .verify_message(tc_params)
+            .await
+            .map_err(TrezorError::from)
    }
 
     /// Sign a Bitcoin transaction with the connected device.
     ///
     /// # Arguments
     /// * `params` - Transaction parameters including inputs, outputs, and options
-    pub async fn sign_tx(
-        &self,
-        params: TrezorSignTxParams,
-    ) -> Result<TrezorSignedTx, TrezorError> {
+    pub async fn sign_tx(&self, params: TrezorSignTxParams) -> Result<TrezorSignedTx, TrezorError> {
         let tc_params: SignTxParams = params.into();
         validate_sign_tx_params(&tc_params)?;
 
         // Both mobile and desktop: use stored connected device
         let mut connected_device = self.connected_device.lock().await;
-        let device = connected_device
-            .as_mut()
-            .ok_or(TrezorError::NotConnected)?;
+        let device = connected_device.as_mut().ok_or(TrezorError::NotConnected)?;
 
-        let response = device.sign_transaction(tc_params).await.map_err(TrezorError::from)?;
+        let response = device
+            .sign_transaction(tc_params)
+            .await
+            .map_err(TrezorError::from)?;
 
         Ok(TrezorSignedTx::from(response))
     }
@@ -762,9 +798,11 @@ impl TrezorManager {
             _ => bitcoin::Network::Bitcoin,
         };
 
-        let psbt_bytes = general_purpose::STANDARD.decode(&psbt_base64).map_err(|e| TrezorError::DeviceError {
-            error_details: format!("Invalid PSBT base64: {}", e),
-        })?;
+        let psbt_bytes = general_purpose::STANDARD
+            .decode(&psbt_base64)
+            .map_err(|e| TrezorError::DeviceError {
+                error_details: format!("Invalid PSBT base64: {}", e),
+            })?;
 
         let sign_params = trezor_connect_rs::psbt::psbt_to_sign_tx_params(&psbt_bytes, btc_network)
             .map_err(|e| TrezorError::DeviceError {
                 error_details: format!("PSBT conversion error: {}", e),
@@ -773,11 +811,12 @@ impl TrezorManager {
         validate_sign_tx_params(&sign_params)?;
 
         let mut connected_device = self.connected_device.lock().await;
-        let device = connected_device
-            .as_mut()
-            .ok_or(TrezorError::NotConnected)?;
+        let device = connected_device.as_mut().ok_or(TrezorError::NotConnected)?;
 
-        let response = device.sign_transaction(sign_params).await.map_err(TrezorError::from)?;
+        let response = device
+            .sign_transaction(sign_params)
+            .await
+            .map_err(TrezorError::from)?;
 
         Ok(TrezorSignedTx::from(response))
     }
@@ -893,8 +932,7 @@ impl TrezorManager {
         {
             use crate::get_transport_callback;
 
-            let callback = get_transport_callback()
-                .ok_or(TrezorError::NotInitialized)?;
+            let callback = get_transport_callback().ok_or(TrezorError::NotInitialized)?;
 
             // The native layer's save_thp_credential with empty string can be used
             // to clear, or we can define a new method. For now, we'll save empty
@@ -915,7 +953,10 @@ impl TrezorManager {
             let mut inner = self.inner.lock().await;
             let trezor = inner.as_mut().ok_or(TrezorError::NotInitialized)?;
 
-            trezor.clear_credentials(device_id).await.map_err(TrezorError::from)
+            trezor
+                .clear_credentials(device_id)
+                .await
+                .map_err(TrezorError::from)
         }
     }
 }
diff --git a/src/modules/trezor/mod.rs b/src/modules/trezor/mod.rs
index b227f21..2fc271e 100644
--- a/src/modules/trezor/mod.rs
+++ b/src/modules/trezor/mod.rs
@@ -3,16 +3,16 @@
 //! This module provides FFI-compatible interfaces for interacting with
 //! Trezor hardware wallets via USB and Bluetooth connections.
 
+pub mod account_info;
+mod callbacks;
 mod errors;
-mod types;
 mod implementation;
-mod callbacks;
-pub mod account_info;
 
 #[cfg(test)]
 mod tests;
+mod types;
 
+pub use account_info::account_type_to_script_type;
+pub use callbacks::*;
 pub use errors::*;
-pub use types::*;
 pub use implementation::*;
-pub use callbacks::*;
-pub use account_info::account_type_to_script_type;
+pub use types::*;
diff --git a/src/modules/trezor/tests.rs b/src/modules/trezor/tests.rs
index f4396f5..f3deb44 100644
--- a/src/modules/trezor/tests.rs
+++ b/src/modules/trezor/tests.rs
@@ -3,8 +3,8 @@
 #[cfg(test)]
 mod tests {
     use crate::modules::trezor::{
-        TrezorDeviceInfo, TrezorError, TrezorFeatures, TrezorScriptType, TrezorTransportType,
-        TrezorTxInput, TrezorTxOutput, TrezorSignTxParams, TrezorSignedTx, TrezorCoinType,
+        TrezorCoinType, TrezorDeviceInfo, TrezorError, TrezorFeatures, TrezorScriptType,
+        TrezorSignTxParams, TrezorSignedTx, TrezorTransportType, TrezorTxInput, TrezorTxOutput,
     };
 
     // ========================================================================
@@ -217,28 +217,40 @@ mod tests {
     fn test_script_type_conversion_spend_address() {
         let trezor_type = TrezorScriptType::SpendAddress;
         let tc_type: trezor_connect_rs::ScriptType = trezor_type.into();
-        assert!(matches!(tc_type, trezor_connect_rs::ScriptType::SpendAddress));
+        assert!(matches!(
+            tc_type,
+            trezor_connect_rs::ScriptType::SpendAddress
+        ));
     }
 
     #[test]
     fn test_script_type_conversion_spend_p2sh_witness() {
         let trezor_type = TrezorScriptType::SpendP2shWitness;
         let tc_type: trezor_connect_rs::ScriptType = trezor_type.into();
-        assert!(matches!(tc_type, trezor_connect_rs::ScriptType::SpendP2SHWitness));
+        assert!(matches!(
+            tc_type,
+            trezor_connect_rs::ScriptType::SpendP2SHWitness
+        ));
     }
 
     #[test]
     fn test_script_type_conversion_spend_witness() {
         let trezor_type = TrezorScriptType::SpendWitness;
         let tc_type: trezor_connect_rs::ScriptType = trezor_type.into();
-        assert!(matches!(tc_type, trezor_connect_rs::ScriptType::SpendWitness));
+        assert!(matches!(
+            tc_type,
+            trezor_connect_rs::ScriptType::SpendWitness
+        ));
     }
 
     #[test]
     fn test_script_type_conversion_spend_taproot() {
         let trezor_type = TrezorScriptType::SpendTaproot;
         let tc_type: trezor_connect_rs::ScriptType = trezor_type.into();
-        assert!(matches!(tc_type, trezor_connect_rs::ScriptType::SpendTaproot));
+        assert!(matches!(
+            tc_type,
+            trezor_connect_rs::ScriptType::SpendTaproot
+        ));
     }
 
     #[test]
@@ -280,7 +292,10 @@ mod tests {
         assert_eq!(tc_input.prev_index, 0);
         assert_eq!(tc_input.path, "m/84'/0'/0'/0/0");
         assert_eq!(tc_input.amount, 100000);
-        assert!(matches!(tc_input.script_type, trezor_connect_rs::ScriptType::SpendWitness));
+        assert!(matches!(
+            tc_input.script_type,
+            trezor_connect_rs::ScriptType::SpendWitness
+        ));
         assert_eq!(tc_input.sequence, Some(0xFFFFFFFD));
     }
 
@@ -320,7 +335,10 @@ mod tests {
         assert!(tc_output.address.is_none());
         assert_eq!(tc_output.path, Some("m/84'/0'/0'/1/0".to_string()));
         assert_eq!(tc_output.amount, 5000);
-        assert!(matches!(tc_output.script_type, Some(trezor_connect_rs::ScriptType::SpendWitness)));
+        assert!(matches!(
+            tc_output.script_type,
+            Some(trezor_connect_rs::ScriptType::SpendWitness)
+        ));
     }
 
     #[test]
@@ -577,21 +595,39 @@ mod tests {
         assert_eq!(err.to_string(), "Operation timed out");
 
         let err = TrezorError::NotInitialized;
-        assert_eq!(err.to_string(), "Trezor not initialized. Call trezor_initialize first.");
+        assert_eq!(
+            err.to_string(),
+            "Trezor not initialized. Call trezor_initialize first."
+        );
 
         let err = TrezorError::NotConnected;
-        assert_eq!(err.to_string(), "No device connected. Call trezor_connect first.");
+        assert_eq!(
+            err.to_string(),
+            "No device connected. Call trezor_connect first."
+ ); } #[test] fn test_account_type_to_script_type() { - use crate::modules::trezor::account_info::account_type_to_script_type; use crate::modules::onchain::AccountType; + use crate::modules::trezor::account_info::account_type_to_script_type; - assert!(matches!(account_type_to_script_type(AccountType::Legacy), TrezorScriptType::SpendAddress)); - assert!(matches!(account_type_to_script_type(AccountType::WrappedSegwit), TrezorScriptType::SpendP2shWitness)); - assert!(matches!(account_type_to_script_type(AccountType::NativeSegwit), TrezorScriptType::SpendWitness)); - assert!(matches!(account_type_to_script_type(AccountType::Taproot), TrezorScriptType::SpendTaproot)); + assert!(matches!( + account_type_to_script_type(AccountType::Legacy), + TrezorScriptType::SpendAddress + )); + assert!(matches!( + account_type_to_script_type(AccountType::WrappedSegwit), + TrezorScriptType::SpendP2shWitness + )); + assert!(matches!( + account_type_to_script_type(AccountType::NativeSegwit), + TrezorScriptType::SpendWitness + )); + assert!(matches!( + account_type_to_script_type(AccountType::Taproot), + TrezorScriptType::SpendTaproot + )); } #[test] @@ -600,7 +636,10 @@ mod tests { let cases = vec![ (ScriptType::SpendAddress, TrezorScriptType::SpendAddress), - (ScriptType::SpendP2SHWitness, TrezorScriptType::SpendP2shWitness), + ( + ScriptType::SpendP2SHWitness, + TrezorScriptType::SpendP2shWitness, + ), (ScriptType::SpendWitness, TrezorScriptType::SpendWitness), (ScriptType::SpendTaproot, TrezorScriptType::SpendTaproot), (ScriptType::SpendMultisig, TrezorScriptType::SpendMultisig), @@ -612,5 +651,4 @@ mod tests { assert_eq!(result, expected); } } - } diff --git a/src/modules/trezor/types.rs b/src/modules/trezor/types.rs index bd9beb9..1b86f47 100644 --- a/src/modules/trezor/types.rs +++ b/src/modules/trezor/types.rs @@ -1,6 +1,5 @@ //! FFI-compatible types for the Trezor module. - /// Transport type for Trezor devices. 
#[derive(Debug, Clone, Copy, PartialEq, Eq, uniffi::Enum)] pub enum TrezorTransportType { From dd9ed4cd7e3c9ae038c180b6c92f050ef2bd3fe2 Mon Sep 17 00:00:00 2001 From: Ovi Trif Date: Fri, 27 Mar 2026 16:56:38 +0100 Subject: [PATCH 4/5] ci: add Claude GitHub App workflows - claude.yml: @claude bot mentions in issues/PRs (org members only) - claude-code-review.yml: auto code review on PRs with --comment fix and --allowedTools for gh/git/read operations, plus old-comment minimization step Requires CLAUDE_CODE_OAUTH_TOKEN secret (set via GitHub App wizard). Co-Authored-By: Claude Opus 4.6 (1M context) --- .github/workflows/claude-code-review.yml | 55 ++++++++++++++++++++++++ .github/workflows/claude.yml | 45 +++++++++++++++++++ 2 files changed, 100 insertions(+) create mode 100644 .github/workflows/claude-code-review.yml create mode 100644 .github/workflows/claude.yml diff --git a/.github/workflows/claude-code-review.yml b/.github/workflows/claude-code-review.yml new file mode 100644 index 0000000..c73fbaf --- /dev/null +++ b/.github/workflows/claude-code-review.yml @@ -0,0 +1,55 @@ +name: Claude Code Review + +on: + pull_request: + types: [opened, synchronize, ready_for_review, reopened] + +concurrency: + group: ${{ github.workflow }}-${{ github.event.pull_request.number }} + cancel-in-progress: true + +jobs: + claude-review: + runs-on: ubuntu-latest + permissions: + contents: read + pull-requests: write + issues: write + id-token: write + + steps: + - name: Checkout repository + uses: actions/checkout@v6 + with: + fetch-depth: 1 + + - name: Minimize old Claude comments + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + run: | + REPO="${{ github.repository }}" + PR_NUMBER="${{ github.event.pull_request.number }}" + + # Minimize issue comments from claude[bot] + gh api "repos/$REPO/issues/$PR_NUMBER/comments" --jq '.[] | select(.user.login == "claude[bot]") | .node_id' | while read -r node_id; do + if [ -n "$node_id" ]; then + echo "Minimizing comment: $node_id" + gh api 
graphql -f query=' + mutation($id: ID!) { + minimizeComment(input: {subjectId: $id, classifier: OUTDATED}) { + minimizedComment { isMinimized } + } + }' -f id="$node_id" || true + fi + done + + - name: Run Claude Code Review + id: claude-review + uses: anthropics/claude-code-action@v1 + with: + claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }} + plugin_marketplaces: 'https://github.com/anthropics/claude-code.git' + plugins: 'code-review@claude-code-plugins' + prompt: '/code-review:code-review --comment ${{ github.repository }}/pull/${{ github.event.pull_request.number }}' + claude_args: | + --allowedTools "Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh api:*),Bash(git log:*),Bash(git diff:*),Bash(git blame:*),Read,Glob,Grep" diff --git a/.github/workflows/claude.yml b/.github/workflows/claude.yml new file mode 100644 index 0000000..e4c2f8d --- /dev/null +++ b/.github/workflows/claude.yml @@ -0,0 +1,45 @@ +name: Claude Code + +on: + issue_comment: + types: [created] + pull_request_review_comment: + types: [created] + issues: + types: [opened, assigned] + pull_request_review: + types: [submitted] + +jobs: + claude: + if: | + (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude') && + contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association)) || + (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude') && + contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association)) || + (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude') && + contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.review.author_association)) || + (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')) && + contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), 
github.event.issue.author_association)) + runs-on: ubuntu-latest + permissions: + contents: write # Allow creating branches/commits + pull-requests: write # Allow pushing to PR branches + issues: write # Allow updating issue comments + id-token: write + actions: read # Required for Claude to read CI results on PRs + steps: + - name: Checkout repository + uses: actions/checkout@v6 + with: + fetch-depth: 0 # Full history for git operations + + - name: Run Claude Code + id: claude + uses: anthropics/claude-code-action@v1 + with: + claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }} + + # This is an optional setting that allows Claude to read CI results on PRs + additional_permissions: | + actions: read From bdada9a325425f4783b8aca58dd65423baffa3a8 Mon Sep 17 00:00:00 2001 From: Ovi Trif Date: Fri, 27 Mar 2026 16:56:46 +0100 Subject: [PATCH 5/5] chore: add review instructions, PR template, and .ai gitignore - copilot-instructions.md: Rust/UniFFI-specific code review rules - pull_request_template.md: standard PR template (Description, Preview, QA Notes) - .gitignore: add .ai for Claude-generated files Co-Authored-By: Claude Opus 4.6 (1M context) --- .github/copilot-instructions.md | 55 ++++++++++++++++++++++++++++++++ .github/pull_request_template.md | 15 +++++++++ .gitignore | 1 + 3 files changed, 71 insertions(+) create mode 100644 .github/copilot-instructions.md create mode 100644 .github/pull_request_template.md diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 0000000..67b6a9f --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,55 @@ +# GitHub Copilot Code Review Instructions + +When performing a code review, respond in English. + +## Architecture & Patterns + +When performing a code review, ensure new public types are properly exported via `src/lib.rs` for UniFFI binding generation. 
+
+When performing a code review, verify that modules follow the established structure: `mod.rs`, `types.rs`, `errors.rs`, `implementation.rs`, and optional `tests.rs`.
+
+When performing a code review, check that UniFFI-exported types follow existing patterns (derive macros, enum representations, error types).
+
+## Error Handling & Safety
+
+When performing a code review, flag any use of `unwrap()` or `expect()` in non-test code and suggest proper error propagation with `?` or `Result`.
+
+When performing a code review, ensure error types implement proper `Display` and `Error` traits and are exported for UniFFI.
+
+When performing a code review, flag any `unsafe` blocks and verify they are necessary and well-documented.
+
+## Code Quality & Readability
+
+When performing a code review, ensure `cargo clippy` warnings are addressed — the project treats clippy warnings as errors.
+
+When performing a code review, verify that `cargo fmt` formatting is applied consistently.
+
+When performing a code review, flag deeply nested match arms that hurt readability and suggest replacing them with early returns or helper functions where possible.
+
+When performing a code review, ensure unused code is removed after refactoring.
+
+When performing a code review, verify that existing utilities and helper functions are reused rather than creating duplicates.
+
+## Dependencies & Platform
+
+When performing a code review, verify that platform-specific dependencies use correct `#[cfg(target_os)]` guards (especially Trezor: BLE-only on iOS, USB+BLE elsewhere).
+
+When performing a code review, check that new dependencies are justified and don't introduce unnecessary bloat to the FFI binary.
+
+## Testing
+
+When performing a code review, suggest tests for new functionality covering the most important cases.
+
+When performing a code review, verify that tests use the established patterns (test modules in `tests.rs`, `#[cfg(test)]` gating).
+ +## Bitcoin & Lightning Specific + +When performing a code review, verify that Bitcoin/Lightning operations use proper types from the `bitcoin` and `bdk` crates. + +When performing a code review, verify that proper Bitcoin and Lightning technical terms are used when naming code components. + +## Build & Version + +When performing a code review, check that version changes are synchronized across `Cargo.toml`, `Package.swift`, and `bindings/android/gradle.properties`. + +When performing a code review, verify that changes to public API types don't break existing UniFFI bindings without updating the binding generation. diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md new file mode 100644 index 0000000..79350fd --- /dev/null +++ b/.github/pull_request_template.md @@ -0,0 +1,15 @@ + + + +### Description + + + +### Preview + + + +### QA Notes + + + diff --git a/.gitignore b/.gitignore index 036d404..3660ecd 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,4 @@ target/ .idea/ .DS_Store +.ai