feat: implement comprehensive plugin system for storage backends #1050
📝 Walkthrough

Adds a plugin system for storage backends: introduces `BlobPlugin` and `MetadataPlugin` config fields, a plugin registry with listing and pflag-based CLI wiring, and converts blob/metadata backends (badger, gcs, s3, sqlite) to option-based constructors with Start/Stop lifecycle and `NewFromCmdlineOptions` registration. Reworks config loading to support plugin sections and env/CLI processing, replaces direct backend construction with plugin lookup/start and interface assertions, and adds many option helpers, unit/integration/benchmark tests, and documentation (PLUGIN_DEVELOPMENT.md, README, dingo.yaml.example).

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Pre-merge checks: ✅ 2 passed
Force-pushed from 46e0363 to 966e345
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
dingo.yaml.example (1)
98-100: Remove obsolete `badgerCacheSize` configuration.

The `badgerCacheSize` field at lines 98-100 appears to be from the old configuration structure and is now replaced by the plugin-based `database.blob.badger.block-cache-size` and `database.blob.badger.index-cache-size` options (lines 22-25). This field should be removed to avoid confusion. Apply this diff:

```diff
-# Maximum cache size in bytes used by BadgerDB for block/index cache
-# Default: 1073741824 (1 GB)
-badgerCacheSize: 1073741824
```

database/plugin/metadata/store.go (1)
152-181: Remove unused parameters from the `metadata.New()` signature; they're never passed to the plugin.

The parameters (`dataDir`, `logger`, `promRegistry`) are redundant. The sqlite plugin is registered with `NewFromCmdlineOptions` as its factory function, which reads these values from module-level `cmdlineOptions` variables that are populated from command-line flags and environment variables. The `metadata.New()` function calls `GetPlugin()` → `NewFromCmdlineOptions()` with no arguments, bypassing the passed parameters entirely.

This same unused parameter pattern exists in the blob plugin (`database/plugin/blob/store.go:37`), indicating a systematic design issue. Options:

- Remove the unused parameters from both the `metadata.New()` and `blob.New()` signatures
- Or refactor the plugin system to accept and pass these parameters through to plugin initialization (passing `dataDir`, `logger`, `promRegistry` as part of the plugin factory call)

The current design where parameters are accepted but ignored creates confusing API contracts for callers.
🧹 Nitpick comments (19)
database/database.go (1)
122-131: Consider documenting or avoiding in-place config mutation.

The defaulting logic modifies the provided `config` in-place, which could lead to unexpected behavior if the caller reuses the same `Config` instance for multiple database initializations. Consider either:

- Documenting this mutation behavior clearly in the function documentation
- Creating a copy of the config before applying defaults

Example with copying:

```diff
 func New(
 	config *Config,
 ) (*Database, error) {
 	if config == nil {
 		config = DefaultConfig
+	} else {
+		// Create a copy to avoid mutating the caller's config
+		configCopy := *config
+		config = &configCopy
 	}
 	// Apply defaults for empty fields
```

cmd/dingo/main.go (1)
109-117: Consider extracting exit behavior for testability.

Using `os.Exit(0)` directly makes unit testing this logic difficult. Consider extracting the plugin listing to a separate function that returns whether to exit, allowing tests to verify the listing behavior without actually exiting. Example refactor:

```go
func listPluginsIfRequested(blobPlugin, metadataPlugin string) bool {
	if blobPlugin == "list" {
		fmt.Println("Available blob plugins:")
		blobPlugins := plugin.GetPlugins(plugin.PluginTypeBlob)
		for _, p := range blobPlugins {
			fmt.Printf("  %s: %s\n", p.Name, p.Description)
		}
		return true
	}
	if metadataPlugin == "list" {
		fmt.Println("Available metadata plugins:")
		metadataPlugins := plugin.GetPlugins(plugin.PluginTypeMetadata)
		for _, p := range metadataPlugins {
			fmt.Printf("  %s: %s\n", p.Name, p.Description)
		}
		return true
	}
	return false
}
```

database/plugin/blob/aws/plugin.go (1)
85-114: Consider returning errors instead of panicking.

While panicking on configuration errors is acceptable during initialization, consider returning errors instead to provide better error handling flexibility and testability. This would allow callers to handle configuration issues gracefully. Example refactor:

```go
func NewFromCmdlineOptions() (plugin.Plugin, error) {
	cmdlineOptionsMutex.RLock()
	bucket := cmdlineOptions.bucket
	region := cmdlineOptions.region
	accessKey := cmdlineOptions.accessKey
	secretKey := cmdlineOptions.secretKey
	prefix := cmdlineOptions.prefix
	cmdlineOptionsMutex.RUnlock()
	// Validate AWS credentials
	if (accessKey != "" && secretKey == "") ||
		(accessKey == "" && secretKey != "") {
		return nil, fmt.Errorf(
			"AWS access key and secret key must both be provided or both be empty",
		)
	}
	opts := []BlobStoreS3OptionFunc{
		WithBucket(bucket),
		WithRegion(region),
		WithAccessKey(accessKey),
		WithSecretKey(secretKey),
		WithPrefix(prefix),
	}
	return NewWithOptions(opts...)
}
```

Note: This would require updating the `plugin.PluginEntry.NewFromOptionsFunc` signature to return `(Plugin, error)` instead of just `Plugin`.

database/plugin/blob/badger/options_test.go (1)
17-22: Avoid using a nil writer with `slog.NewTextHandler` in tests.

`logger := slog.New(slog.NewTextHandler(nil, nil))` risks a panic if the logger is ever used, because the handler holds a nil `io.Writer`. Using `io.Discard` is safer and idiomatic even in tests. You can adjust the imports and logger construction like this:

```diff
-import (
-	"log/slog"
-	"testing"
-
-	"github.com/prometheus/client_golang/prometheus"
-)
+import (
+	"io"
+	"log/slog"
+	"testing"
+
+	"github.com/prometheus/client_golang/prometheus"
+)
@@
 func TestWithLogger(t *testing.T) {
 	b := &BlobStoreBadger{}
-	logger := slog.New(slog.NewTextHandler(nil, nil))
+	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
 	option := WithLogger(logger)
```

Also applies to: 63-72
database/plugin/blob/gcs/options.go (1)
23-60: Consider adding unit tests mirroring the Badger/SQLite option tests.

To keep parity with the `badger` and `sqlite` plugins and catch future refactors, consider a small `options_test.go` that verifies:

- `WithLogger` sets a non-nil `*GcsLogger` (including nil-in / discard behavior).
- `WithPromRegistry`, `WithBucket`, `WithCredentialsFile`, and `WithProjectID` assign their respective fields.

This would make behavior changes in these helpers immediately visible.
database/plugin/blob/store.go (2)
37-66: `New` ignores the `dataDir`, `logger`, and `promRegistry` parameters.

The new plugin-based `New` only uses `pluginName`; `dataDir`, `logger`, and `promRegistry` are accepted but never applied to the plugin instance. If callers previously relied on these parameters to configure the underlying store (e.g., badger data dir, logging, metrics), that behavior is now silently lost. Consider:

- Either threading these values into the plugin via an options mechanism, or
- Explicitly deprecating/changing the `New` signature so callers aren't misled into thinking those arguments still matter.

It's also worth double-checking all call sites to confirm no one expects these parameters to take effect anymore.
36-36: Update the stale comment about always returning a Badger plugin.

The comment `// For now, this always returns a badger plugin` no longer reflects reality now that `pluginName` is resolved via the registry. Updating it to describe "returns the started blob plugin selected by name" will avoid confusion for future readers.
25-32: Clarify how `logger` and `promRegistry` in `cmdlineOptions` are populated.

`NewFromCmdlineOptions` forwards `logger` and `promRegistry` via `WithLogger` and `WithPromRegistry`, but this file only exposes `data-dir` as a `PluginOption` with `Dest: &(cmdlineOptions.dataDir)`. Without a visible setter, `logger` and `promRegistry` will remain nil unless they're assigned elsewhere. If they are set from shared wiring (e.g., a central place that initializes all plugin cmdline options), it might be worth adding a brief comment here. If not, consider either:

- Adding appropriate hooks/options to populate them, or
- Dropping them from `cmdlineOptions` and `NewFromCmdlineOptions` to reduce unused state.

Also applies to: 55-66
database/plugin/blob/badger/plugin.go (2)
30-47: `cmdlineOptions` mutex and fields appear only partially wired.

`cmdlineOptionsMutex` is used for setting defaults and for read access in `NewFromCmdlineOptions`, but any writes via `PluginOption.Dest` will bypass it, and `promRegistry` is never read. Either remove the mutex/unused field or consistently guard writes and plumb `promRegistry` into `New(...)` via `WithPromRegistry` so metrics work when the plugin is constructed through this path.
95-107: Panic on construction failure may be too aggressive for config errors.

`NewFromCmdlineOptions` panics if `New(opts...)` returns an error. That turns misconfiguration (e.g., an invalid data dir) into a hard crash with no chance to recover or emit a nicer message. Consider changing the plugin factory signature to surface an `error` instead of panicking, or at least wrapping the panic with a clearer message about which plugin and options failed.
32-43: `accessKey`/`secretKey` fields are unused in `NewWithOptions`.

`BlobStoreS3` now carries `region`, `accessKey`, and `secretKey`, but `NewWithOptions` only uses `bucket` and `region`. Any options that populate `accessKey`/`secretKey` (if present in `aws/options.go` or the plugin) currently have no effect, and credentials still come solely from `config.LoadDefaultConfig`. Either wire these through to the AWS config (e.g., via custom credentials providers) or drop the fields/options to avoid a misleading configuration surface.

Also applies to: 111-156
58-66: Lowercasing the full S3 `dataDir` may corrupt key prefixes.

`path := strings.TrimPrefix(strings.ToLower(dataDir), prefix)` lowercases both the bucket and the key prefix. While buckets are case-insensitive, S3 object keys are case-sensitive, so this will transform keys like `MyPrefix/File` into `myprefix/file`. This looks like pre-existing behavior but is surprising and potentially breaking; consider only normalizing the scheme part and leaving the remainder case-preserving.
26-38: validateCredentials does two os.Stat calls and could be simplified
`validateCredentials` first calls `os.Stat` to check for existence and then calls `os.Stat` again to check permissions. You can avoid the second syscall (and a small race window) by doing a single `os.Stat` and then checking both `err` and `info.Mode().Perm()` from that result.
40-83: promRegistry in cmdlineOptions is unused and not exposed via plugin options
`cmdlineOptions` carries a `promRegistry prometheus.Registerer`, but there is no `PluginOption` that populates it, and `NewFromCmdlineOptions` never passes a `WithPromRegistry` option to `NewWithOptions`. As a result, metrics for the GCS blob plugin won't be registered with any registry when constructed via the plugin system. Either remove `promRegistry` here or plumb it through properly so observability behaves as expected.
85-105: Panic in NewFromCmdlineOptions on validation or construction errors
`NewFromCmdlineOptions` panics both on an invalid credentials file and on `NewWithOptions` failure. For configuration problems this produces a hard crash rather than a controlled error. If the plugin framework can be evolved to let `NewFromOptionsFunc` return an error, it would be preferable to surface configuration failures that way instead of panicking.
170-205: Nested `config:` YAML layout changes how defaults apply.

When a `config` section is present in the YAML, `LoadConfig` does:

```go
if tempCfg.Config != nil {
	globalConfig = tempCfg.Config
} else {
	// legacy path: unmarshal whole file into globalConfig
	err = yaml.Unmarshal(buf, globalConfig)
}
```

In the new (`config:`) path, `tempCfg.Config` starts from a zero value, so any fields omitted in the YAML will not inherit the pre-initialized defaults from `globalConfig` (e.g., ports, bind addresses). In the legacy path, unmarshalling into `globalConfig` preserves those defaults for unspecified fields.

If this behavioral change is intentional (the new layout requires all important fields to be set explicitly), it's fine but should be clearly documented. If you want the new layout to keep default values for unspecified fields, consider unmarshalling into the existing `globalConfig` instead of replacing the pointer, for example:

```diff
 if tempCfg.Config != nil {
-	globalConfig = tempCfg.Config
+	// overlay values from tempCfg.Config onto the existing defaults
+	if err := yaml.Unmarshal(buf, struct {
+		Config *Config `yaml:"config,omitempty"`
+	}{Config: globalConfig}); err != nil {
+		return nil, fmt.Errorf("error parsing config file: %w", err)
+	}
 } else {
 	// legacy behavior
 	err = yaml.Unmarshal(buf, globalConfig)
 	...
 }
```

(or any equivalent approach that preserves the pre-seeded defaults).

Also ensure that plugin-related sections processed via `plugin.ProcessConfig` remain consistent with whichever layout you settle on.
1-74: Option pattern for S3 blob store looks clean and consistent.

The `BlobStoreS3OptionFunc` pattern and the individual `With*` helpers are straightforward and match the rest of the plugin system. Using `NewS3Logger` inside `WithLogger` ensures a safe default even when callers pass `nil`. Just ensure downstream code never logs `accessKey`/`secretKey` values to avoid leaking credentials.

database/plugin/blob/badger/database.go (2)
42-93: Constructor and option wiring look good; confirm nil-logger handling.

The option-based `New` looks solid: defaults for `blockCacheSize`/`indexCacheSize`, a clean in-memory vs on-disk split with `db.dataDir`, and Badger options correctly parameterized with the struct state. This relies on `NewBadgerLogger(db.logger)` gracefully handling a nil `db.logger` when no `WithLogger` option is provided. If `NewBadgerLogger` doesn't already normalize `nil` to a safe default (like a discard logger), you should either:

- Initialize `db.logger` to a safe default before constructing `badgerOpts`, or
- Update `NewBadgerLogger` to handle nil the way `NewS3Logger` does for the S3 plugin.

That keeps the constructor robust even when callers don't pass a logger.
137-147: `Start`/`Stop` correctly hook into the existing lifecycle.

`Start` as a no-op and `Stop` delegating to `Close` meshes cleanly with the existing `init` and `Close` logic and satisfies the plugin interface without changing behavior. If you ever revisit GC, you might also consider tying the background `blobGc` ticker to `Close`/`Stop` so that the goroutine can shut down explicitly, but that's not a blocker for this PR.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (33)
- PLUGIN_DEVELOPMENT.md (1 hunks)
- README.md (1 hunks)
- chain/chain_test.go (1 hunks)
- cmd/dingo/main.go (2 hunks)
- config.go (0 hunks)
- database/database.go (1 hunks)
- database/database_test.go (1 hunks)
- database/plugin/blob/aws/database.go (3 hunks)
- database/plugin/blob/aws/options.go (1 hunks)
- database/plugin/blob/aws/plugin.go (1 hunks)
- database/plugin/blob/badger/database.go (3 hunks)
- database/plugin/blob/badger/options.go (1 hunks)
- database/plugin/blob/badger/options_test.go (1 hunks)
- database/plugin/blob/badger/plugin.go (1 hunks)
- database/plugin/blob/blob.go (1 hunks)
- database/plugin/blob/gcs/database.go (3 hunks)
- database/plugin/blob/gcs/options.go (1 hunks)
- database/plugin/blob/gcs/plugin.go (1 hunks)
- database/plugin/blob/store.go (2 hunks)
- database/plugin/metadata/sqlite/database.go (4 hunks)
- database/plugin/metadata/sqlite/options.go (1 hunks)
- database/plugin/metadata/sqlite/options_test.go (1 hunks)
- database/plugin/metadata/sqlite/plugin.go (1 hunks)
- database/plugin/metadata/store.go (2 hunks)
- database/plugin/register.go (2 hunks)
- database/plugin/register_test.go (1 hunks)
- dingo.yaml.example (1 hunks)
- internal/config/config.go (6 hunks)
- internal/config/config_test.go (0 hunks)
- internal/node/load.go (1 hunks)
- internal/node/node.go (0 hunks)
- node.go (1 hunks)
- peergov/peergov.go (2 hunks)
💤 Files with no reviewable changes (3)
- internal/node/node.go
- config.go
- internal/config/config_test.go
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
- database/plugin/blob/badger/options_test.go
- chain/chain_test.go
📚 Learning: 2025-11-22T22:57:02.415Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 1036
File: peergov/peergov.go:78-105
Timestamp: 2025-11-22T22:57:02.415Z
Learning: In the blinklabs-io/dingo repository, metric names in peergov/peergov.go (specifically cardano_node_metrics_peerSelection_ActivePeers, cardano_node_metrics_peerSelection_EstablishedPeers, and cardano_node_metrics_peerSelection_KnownPeers) use PascalCase to match the Haskell node implementation for ecosystem tooling compatibility between the Haskell node, Amaru, and Dingo.
Applied to files:
peergov/peergov.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: chain.Chain.AddBlock expects a github.com/blinklabs-io/gouroboros/ledger.Block. In tests, construct a database/models.Block only as a container and call its Decode() to obtain the gouroboros ledger.Block (which also satisfies github.com/blinklabs-io/gouroboros/ledger/common.Block) before inserting into the chain.
Applied to files:
chain/chain_test.go
🧬 Code graph analysis (15)
- database/plugin/register_test.go (2)
  - database/plugin/register.go (6): `PluginEntry` (41-47), `PluginTypeBlob` (27-27), `Register` (51-53), `PluginTypeMetadata` (26-26), `GetPlugins` (100-108), `GetPlugin` (110-119)
  - database/plugin/plugin.go (1): `Plugin` (17-20)
- node.go (1)
  - database/plugin/log.go (1): `Logger` (18-27)
- database/plugin/metadata/sqlite/options_test.go (2)
  - database/plugin/metadata/sqlite/options.go (2): `WithLogger` (26-30), `WithPromRegistry` (33-39)
  - config.go (1): `WithLogger` (156-160)
- database/plugin/blob/store.go (1)
  - database/plugin/register.go (2): `GetPlugin` (110-119), `PluginTypeBlob` (27-27)
- database/plugin/blob/badger/options_test.go (1)
  - database/plugin/blob/badger/options.go (6): `WithDataDir` (42-46), `WithBlockCacheSize` (49-53), `WithIndexCacheSize` (56-60), `WithLogger` (26-30), `WithPromRegistry` (33-39), `WithGc` (63-67)
- database/plugin/blob/gcs/options.go (2)
  - database/plugin/blob/gcs/database.go (1): `BlobStoreGCS` (31-41)
  - database/plugin/blob/gcs/logger.go (1): `NewGcsLogger` (28-34)
- cmd/dingo/main.go (2)
  - internal/config/config.go (3): `DefaultBlobPlugin` (48-48), `DefaultMetadataPlugin` (49-49), `LoadConfig` (144-235)
  - database/plugin/register.go (3): `GetPlugins` (100-108), `PluginTypeBlob` (27-27), `PluginTypeMetadata` (26-26)
- database/plugin/metadata/store.go (1)
  - database/plugin/register.go (2): `GetPlugin` (110-119), `PluginTypeMetadata` (26-26)
- database/plugin/blob/gcs/plugin.go (3)
  - database/plugin/register.go (2): `Register` (51-53), `PluginEntry` (41-47)
  - database/plugin/blob/gcs/options.go (4): `BlobStoreGCSOptionFunc` (23-23), `WithBucket` (42-46), `WithCredentialsFile` (49-53), `WithProjectID` (56-60)
  - database/plugin/blob/gcs/database.go (1): `NewWithOptions` (96-138)
- peergov/peergov.go (1)
  - peergov/peer.go (1): `PeerSourceInboundConn` (33-33)
- database/plugin/metadata/sqlite/database.go (1)
  - database/plugin/metadata/sqlite/options.go (4): `WithDataDir` (42-46), `WithLogger` (26-30), `WithPromRegistry` (33-39), `SqliteOptionFunc` (23-23)
- internal/node/load.go (1)
  - database/plugin/log.go (1): `Logger` (18-27)
- database/plugin/blob/aws/database.go (3)
  - database/plugin/blob/aws/options.go (1): `BlobStoreS3OptionFunc` (23-23)
  - database/plugin/blob/aws/logger.go (1): `NewS3Logger` (28-33)
  - database/plugin/blob/store.go (1): `New` (37-67)
- database/plugin/blob/gcs/database.go (3)
  - database/plugin/blob/gcs/logger.go (2): `GcsLogger` (24-26), `NewGcsLogger` (28-34)
  - database/plugin/blob/gcs/options.go (1): `BlobStoreGCSOptionFunc` (23-23)
  - database/plugin/blob/store.go (1): `New` (37-67)
- database/plugin/blob/aws/options.go (3)
  - database/plugin/blob/aws/database.go (1): `BlobStoreS3` (33-44)
  - config.go (1): `WithLogger` (156-160)
  - database/plugin/blob/aws/logger.go (1): `NewS3Logger` (28-33)
🪛 markdownlint-cli2 (0.18.1)
PLUGIN_DEVELOPMENT.md
115-115: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: nilaway
- GitHub Check: docker (ubuntu-latest, amd64)
- GitHub Check: Analyze (go)
🔇 Additional comments (24)
database/plugin/register.go (1)
66-81: LGTM! Environment variable processing is well-structured.

The `ProcessEnvVars` function correctly constructs the environment variable prefix pattern `DINGO_DATABASE_{TYPE}_{PLUGIN}_` and processes options for each registered plugin. The implementation aligns with the documented configuration precedence (env > YAML > defaults).
65-136: Excellent documentation for the plugin system!

The "Database Plugins" section clearly explains available plugins, selection methods, configuration options, and includes practical examples. The documentation covers command-line flags, environment variables, and YAML configuration patterns, making it easy for users to adopt the new plugin architecture.
peergov/peergov.go (1)
223-224: LGTM! Formatting improvements for readability.

The changes split multi-condition if statements and slice operations across multiple lines, improving code readability without altering the logic.
Also applies to: 232-234, 263-264, 272-274
database/plugin/register_test.go (3)
27-63: Well-structured registration test with proper state isolation.

The test correctly saves and restores `pluginEntries` state, validates plugin registration, and verifies all expected fields (Type, Name). Good use of defer for cleanup.
65-103: Comprehensive test coverage for plugin retrieval by type.

The test properly validates that `GetPlugins` correctly filters plugins by type, returning the expected count for both blob and metadata plugins.
105-138: Thorough validation of plugin retrieval by type and name.

The test correctly verifies both successful retrieval (returning the expected plugin instance) and non-existent plugin lookup (returning nil).
node.go (1)
73-77: LGTM! Database configuration updated for plugin architecture.

The removal of `BlobCacheSize` is correct as cache configuration is now handled by individual plugin options. The database initialization will populate `BlobPlugin` and `MetadataPlugin` from defaults, aligning with the new plugin-based approach.
109-115: LGTM! Test configuration updated for plugin system.

The database config correctly uses `BlobPlugin: "badger"` and `MetadataPlugin: "sqlite"` instead of the deprecated `BlobCacheSize`. This aligns the test with the new plugin-based architecture while maintaining the same default storage backends.
18-18: No issues found; the AWS plugin package exists and is properly configured.

The verification confirms that the `database/plugin/blob/aws` package exists, contains Go files, has a proper `init()` function, and calls `plugin.Register()` correctly. The blank import at line 18 will not cause a build failure.
103-118: Plugin listing logic looks good.

The pre-run logic correctly lists available plugins and exits early before loading configuration, which provides a good UX. The conditional override logic (lines 126-131) appropriately preserves config file values when CLI flags use defaults.
database/plugin/metadata/sqlite/options_test.go (1)
24-57: LGTM! Clean option setter tests.

The tests appropriately validate each option setter function, using pointer equality for logger and registry checks, and string equality for dataDir. The test structure aligns well with the option pattern being used throughout the plugin system.
internal/node/load.go (1)
55-61: LGTM! Database config updated for plugin system.

The database configuration correctly uses the new plugin-based fields (`BlobPlugin` and `MetadataPlugin`) instead of the removed `BlobCacheSize`. The change aligns with the broader plugin architecture migration.
12-56: LGTM! Clear plugin-based configuration structure.

The restructured database configuration provides a clear, hierarchical approach to plugin selection and configuration. The per-plugin option blocks (badger, gcs, s3, sqlite) are well-documented with comments explaining each option.
database/database_test.go (1)
29-35: LGTM! Test config properly updated for plugin system.

The test configuration correctly specifies the blob and metadata plugins explicitly, replacing the removed `BlobCacheSize` field. The in-memory database setup (empty `DataDir`) remains appropriate for testing.
37-83: LGTM! Plugin registration well-structured.

The plugin registration in `init()` correctly defines all required metadata including type, name, description, and options. The option definitions use consistent kebab-case naming and proper `Dest` pointer assignments for configuration binding.
94-100: Credential validation logic is correct.

The validation ensures AWS credentials are either both provided or both omitted, which is the correct behavior. The XOR logic properly catches the two invalid states where only one credential is provided.
database/plugin/blob/badger/options_test.go (1)
24-131: Good functional coverage for Badger option helpers.

Each option is exercised individually and in combination, and assertions directly check the underlying `BlobStoreBadger` fields. This gives solid protection against regressions in the option wiring.
database/plugin/blob/gcs/options.go (1)
23-60: GCS option setters correctly target `BlobStoreGCS` fields.

Each `WithX` function assigns the expected field, and `WithLogger` delegates to `NewGcsLogger`, which centralizes logger handling. The options look consistent with the struct definition and plugin pattern.
23-46: SQLite option helpers are clear and consistent.

The `SqliteOptionFunc` type and the three `WithX` helpers assign fields directly on `MetadataStoreSqlite` and match the constructor pattern described in the PR. With tests already covering these, there's nothing blocking here.
35-72: SQLite metadata plugin registration and construction look sound.

The plugin is registered with the correct `PluginTypeMetadata`, a clear description, and a `NewFromOptionsFunc` that constructs via `NewWithOptions` using the functional options. The pattern aligns with the blob plugins and supports a CLI-configurable `data-dir`. Using a panic on `NewWithOptions` error at init-time is acceptable in this bootstrap context.
23-67: Functional options for Badger look clean and idiomatic.

The `BlobStoreBadgerOptionFunc` pattern and the `With*` helpers are straightforward and map 1:1 to struct fields, which should make future extension and testing easy.
195-204: Plugin Start/Stop lifecycle wiring looks consistent.

Having `Start` as a no-op (the constructor fully initializes the DB) and `Stop` delegate to `Close` cleanly satisfies the plugin interface without changing existing init/teardown behavior.
47-50: Centralized plugin defaults are wired correctly.

Using `DefaultBlobPlugin`/`DefaultMetadataPlugin` and seeding `globalConfig` with those values keeps the default selection in one place and matches the new plugin-based configuration model.

Also applies to: 139-141
80-99: `ParseCmdlineArgs` is not called anywhere; the review comment targets dead code.

The review identifies a potential bug in `ParseCmdlineArgs`, but this function is never invoked in the codebase. The only occurrence of `ParseCmdlineArgs` is its definition at line 80; no call site exists in tests, main.go, or any other file.

The actual configuration flow in `cmd/dingo/main.go` (lines 120-130) correctly implements the intended precedence (CLI > YAML/env > defaults):

- `config.LoadConfig(configFile)` loads YAML/env values
- A manual override applies only if the flag differs from the default:

```go
if blobPlugin != config.DefaultBlobPlugin {
	cfg.BlobPlugin = blobPlugin
}
```

This preserves YAML/env values unless the user explicitly provides a different CLI flag. The suggested fix would correct dead code without affecting runtime behavior.
Likely an incorrect or invalid review comment.
Force-pushed from 441675e to ddddc4f
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
database/plugin/blob/aws/database.go (1)
56-75: Fix scheme prefix handling for mixed-case `dataDir` (critical correctness issue).

Right now:

- The prefix check is case-insensitive: `strings.HasPrefix(strings.ToLower(dataDir), "s3://")`.
- But the trim is case-sensitive: `path := strings.TrimPrefix(dataDir, prefix)`.

For an input like `"S3://mybucket/myprefix"`, the check passes, but `TrimPrefix` fails to remove `"s3://"` (due to case), so `path` stays `"S3://mybucket/myprefix"`, and `bucket` becomes `"S3:"` after `SplitN`, which will break S3 access. You likely want either:

- A strictly case-sensitive scheme (simpler and correct):

```diff
 const prefix = "s3://"
-if !strings.HasPrefix(strings.ToLower(dataDir), prefix) {
+if !strings.HasPrefix(dataDir, prefix) {
 	cancel()
 	return nil, errors.New(
 		"s3 blob: expected dataDir='s3://<bucket>[/prefix]'",
 	)
 }
 path := strings.TrimPrefix(dataDir, prefix)
```

- Or a fully case-insensitive scheme, where you trim based on a normalized form but still preserve the original path for keys, which would require a slightly different approach.

As written, mixed-case schemes pass validation but produce an invalid bucket; this should be corrected before release.
🧹 Nitpick comments (8)
integration/benchmark_test.go (2)
5-5: Replace deprecated `ioutil` with the `os` package.

The `ioutil` package has been deprecated since Go 1.16. Use `os.ReadFile` instead of `ioutil.ReadFile` on line 73. Apply this diff:

```diff
-	"io/ioutil"
```

And update line 73:

```diff
-	data, err := ioutil.ReadFile(chunkFile)
+	data, err := os.ReadFile(chunkFile)
```
66-96: Consider removing the unused legacy function.

This function is marked as "legacy" but is still defined. If it's not being used by any benchmarks, consider removing it to reduce maintenance burden.
database/plugin/blob/gcs/plugin_test.go (1)
22-63: LGTM: Comprehensive credential validation test coverage.

The table-driven test provides thorough coverage of credential validation scenarios with clear error message assertions. Note: the use of `/dev/null` (line 31) may have limited portability to Windows, though this is a common pattern in Go tests and typically handled by the Go runtime.
25-35: Strengthen the `TestWithLogger` assertion to check identity, not just non-nil.

Right now the test only verifies `b.logger != nil`, so it would still pass if `WithLogger` swapped in a different logger instance. To mirror the other option tests and tighten regression coverage, consider asserting equality with the provided logger:

```diff
 func TestWithLogger(t *testing.T) {
 	b := &BlobStoreGCS{}
 	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
 	option := WithLogger(logger)
 	option(b)
-	if b.logger == nil {
-		t.Errorf("Expected logger to be set")
-	}
+	if b.logger != logger {
+		t.Errorf("Expected logger to be set to provided logger")
+	}
 }
```

database/database.go (1)
27-40: Consider copying `Config` before applying defaults instead of mutating in place
`New` now fills in empty `BlobPlugin`, `MetadataPlugin`, and `DataDir` by mutating the provided `*Config`:

```go
// Note: This function modifies the provided config in-place by applying default values to empty fields.
if config.BlobPlugin == "" {
	config.BlobPlugin = DefaultConfig.BlobPlugin
}
...
```

Even though this is documented, it can still be surprising for callers that:
- Reuse the same `*Config` across multiple `New` calls, or
To avoid side effects while keeping behavior the same, you could shallow‑copy the config first and only mutate the copy:
```diff
 func New(
 	config *Config,
 ) (*Database, error) {
 	if config == nil {
 		config = DefaultConfig
 	}
-	// Apply defaults for empty fields
-	if config.BlobPlugin == "" {
-		config.BlobPlugin = DefaultConfig.BlobPlugin
-	}
-	if config.MetadataPlugin == "" {
-		config.MetadataPlugin = DefaultConfig.MetadataPlugin
-	}
-	if config.DataDir == "" {
-		config.DataDir = DefaultConfig.DataDir
-	}
+	// Work on a copy to avoid mutating the caller's config
+	cfg := *config
+	if cfg.BlobPlugin == "" {
+		cfg.BlobPlugin = DefaultConfig.BlobPlugin
+	}
+	if cfg.MetadataPlugin == "" {
+		cfg.MetadataPlugin = DefaultConfig.MetadataPlugin
+	}
+	if cfg.DataDir == "" {
+		cfg.DataDir = DefaultConfig.DataDir
+	}
 	blobDb, err := blob.New(
-		config.BlobPlugin,
+		cfg.BlobPlugin,
 	)
 	...
 	db := &Database{
-		logger: config.Logger,
-		config: config,
+		logger: cfg.Logger,
+		config: &cfg,
 	}
```

This keeps the new defaulting semantics while making
`New` safer for shared configs.

Also applies to: 115-132
database/plugin/metadata/store.go (1)
149-176: Plugin-based `New` logic is solid; update stale comment

The `New` implementation correctly mirrors the blob store path (lookup, `Start`, type-assert to `MetadataStore`) with clear error messages. The leading comment "For now, this always returns a sqlite plugin" is now inaccurate since any registered metadata plugin can be returned; consider updating or removing it.
23-107: Badger plugin wiring looks good; double-check `uint` option plumbing

The overall shape here looks solid: defaults are initialized once, options are protected by an RWMutex, and `NewFromCmdlineOptions` cleanly snapshots config into option funcs before constructing the plugin.

Two minor points to verify:
- `PluginOptionTypeUint` is used with `Dest` fields of type `*uint64` (`blockCacheSize`, `indexCacheSize`). Please confirm the option-processing code fully supports `*uint64` destinations (and not just `*uint`), to avoid type mismatches or silent truncation when parsing CLI/env values.
- `NewFromCmdlineOptions` panics on `New` error. Given the current `NewFromOptionsFunc` signature doesn't allow returning an error, this may be the only practical choice, but it does mean a misconfigured Badger plugin will crash the process at plugin construction time. If you later extend the plugin API, you might want to revisit this and surface construction errors via `Start()` or an explicit error channel instead.
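To make the `*uint64` concern concrete, here is a minimal sketch of string-to-`uint64` destination plumbing with `strconv.ParseUint`; the names are illustrative, not the option processor's actual code:

```go
package main

import (
	"fmt"
	"strconv"
)

// setUintDest parses a CLI/env string into a *uint64 destination,
// mirroring the kind of conversion the option processor must do.
// ParseUint with bitSize 64 avoids truncation for large cache sizes.
func setUintDest(dest *uint64, raw string) error {
	v, err := strconv.ParseUint(raw, 10, 64)
	if err != nil {
		return fmt.Errorf("invalid uint value %q: %w", raw, err)
	}
	*dest = v
	return nil
}

func main() {
	var blockCacheSize uint64
	if err := setUintDest(&blockCacheSize, "1073741824"); err != nil {
		panic(err)
	}
	fmt.Println(blockCacheSize)
}
```

A processor that only handles `*uint` destinations would need an extra, lossy conversion on 32-bit platforms, which is exactly the mismatch worth checking for.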
80-120: Consider refactoring os.Exit for testability.

The `ParseCmdlineArgs` method calls `os.Exit(0)` directly when listing plugins (lines 108, 116), which makes it difficult to test this functionality and prevents graceful handling in library contexts.

Consider returning a sentinel error or boolean flag indicating "list requested" and let the caller decide whether to exit:
```diff
-func (c *Config) ParseCmdlineArgs(programName string, args []string) error {
+func (c *Config) ParseCmdlineArgs(programName string, args []string) (listRequested bool, err error) {
 	// ... setup code ...

 	// Handle plugin listing
 	if c.BlobPlugin == "list" {
 		fmt.Println("Available blob plugins:")
 		blobPlugins := plugin.GetPlugins(plugin.PluginTypeBlob)
 		for _, p := range blobPlugins {
 			fmt.Printf("  %s: %s\n", p.Name, p.Description)
 		}
-		os.Exit(0)
+		return true, nil
 	}

 	if c.MetadataPlugin == "list" {
 		fmt.Println("Available metadata plugins:")
 		metadataPlugins := plugin.GetPlugins(plugin.PluginTypeMetadata)
 		for _, p := range metadataPlugins {
 			fmt.Printf("  %s: %s\n", p.Name, p.Description)
 		}
-		os.Exit(0)
+		return true, nil
 	}

-	return nil
+	return false, nil
 }
```

Then in
`cmd/dingo/main.go`:

```go
listRequested, err := cfg.ParseCmdlineArgs(...)
if err != nil {
	// handle error
}
if listRequested {
	os.Exit(0)
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (39)
- PLUGIN_DEVELOPMENT.md (1 hunks)
- README.md (1 hunks)
- chain/chain_test.go (1 hunks)
- cmd/dingo/main.go (3 hunks)
- config.go (0 hunks)
- database/database.go (1 hunks)
- database/database_test.go (1 hunks)
- database/plugin/blob/aws/database.go (4 hunks)
- database/plugin/blob/aws/options.go (1 hunks)
- database/plugin/blob/aws/plugin.go (1 hunks)
- database/plugin/blob/aws/plugin_test.go (1 hunks)
- database/plugin/blob/badger/database.go (3 hunks)
- database/plugin/blob/badger/options.go (1 hunks)
- database/plugin/blob/badger/options_test.go (1 hunks)
- database/plugin/blob/badger/plugin.go (1 hunks)
- database/plugin/blob/blob.go (1 hunks)
- database/plugin/blob/gcs/database.go (3 hunks)
- database/plugin/blob/gcs/options.go (1 hunks)
- database/plugin/blob/gcs/options_test.go (1 hunks)
- database/plugin/blob/gcs/plugin.go (1 hunks)
- database/plugin/blob/gcs/plugin_test.go (1 hunks)
- database/plugin/blob/store.go (2 hunks)
- database/plugin/metadata/sqlite/database.go (4 hunks)
- database/plugin/metadata/sqlite/options.go (1 hunks)
- database/plugin/metadata/sqlite/options_test.go (1 hunks)
- database/plugin/metadata/sqlite/plugin.go (1 hunks)
- database/plugin/metadata/store.go (2 hunks)
- database/plugin/register.go (2 hunks)
- database/plugin/register_test.go (1 hunks)
- dingo.yaml.example (1 hunks)
- go.mod (1 hunks)
- integration/benchmark_test.go (1 hunks)
- integration/integration_test.go (1 hunks)
- internal/config/config.go (6 hunks)
- internal/config/config_test.go (0 hunks)
- internal/node/load.go (1 hunks)
- internal/node/node.go (0 hunks)
- node.go (1 hunks)
- peergov/peergov.go (2 hunks)
💤 Files with no reviewable changes (3)
- internal/config/config_test.go
- config.go
- internal/node/node.go
✅ Files skipped from review due to trivial changes (1)
- PLUGIN_DEVELOPMENT.md
🚧 Files skipped from review as they are similar to previous changes (8)
- cmd/dingo/main.go
- database/plugin/blob/gcs/plugin.go
- database/plugin/metadata/sqlite/options_test.go
- database/plugin/blob/badger/options_test.go
- database/plugin/register_test.go
- chain/chain_test.go
- README.md
- internal/node/load.go
🧰 Additional context used
🧠 Learnings (8)
📚 Learning: 2025-11-22T22:57:02.415Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 1036
File: peergov/peergov.go:78-105
Timestamp: 2025-11-22T22:57:02.415Z
Learning: In the blinklabs-io/dingo repository, metric names in peergov/peergov.go (specifically cardano_node_metrics_peerSelection_ActivePeers, cardano_node_metrics_peerSelection_EstablishedPeers, and cardano_node_metrics_peerSelection_KnownPeers) use PascalCase to match the Haskell node implementation for ecosystem tooling compatibility between the Haskell node, Amaru, and Dingo.
Applied to files:
peergov/peergov.go
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
integration/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Applied to files:
integration/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Applied to files:
integration/benchmark_test.go
📚 Learning: 2025-10-22T20:18:45.125Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.
Applied to files:
integration/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: utxorpc tests must pass github.com/blinklabs-io/gouroboros/ledger/common.Block to chain.Chain.AddBlock. They should create database/models.Block only as a storage row, then call models.Block.Decode() to obtain a ledger/common.Block for AddBlock and any ledger/protocol-facing logic.
Applied to files:
integration/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: chain.Chain.AddBlock expects a github.com/blinklabs-io/gouroboros/ledger.Block. In tests, construct a database/models.Block only as a container and call its Decode() to obtain the gouroboros ledger.Block (which also satisfies github.com/blinklabs-io/gouroboros/ledger/common.Block) before inserting into the chain.
Applied to files:
integration/benchmark_test.go
📚 Learning: 2025-11-20T19:58:38.889Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T19:58:38.889Z
Learning: In Go, test dependencies (such as github.com/stretchr/testify) are correctly marked as direct dependencies in go.mod, not as indirect dependencies, even when they are only used in test files. This is a standard practice in Go's module system.
Applied to files:
go.mod
🧬 Code graph analysis (15)
peergov/peergov.go (1)
peergov/peer.go (1)
PeerSourceInboundConn(33-33)
database/plugin/blob/aws/options.go (3)
database/plugin/blob/aws/database.go (1)
BlobStoreS3 (33-42)
config.go (1)
WithLogger (156-160)
database/plugin/blob/aws/logger.go (1)
NewS3Logger(28-33)
database/plugin/blob/gcs/options.go (3)
database/plugin/blob/gcs/database.go (1)
BlobStoreGCS (32-41)
config.go (1)
WithLogger (156-160)
database/plugin/blob/gcs/logger.go (1)
NewGcsLogger(28-34)
database/plugin/blob/badger/plugin.go (3)
database/plugin/register.go (2)
PluginEntry (41-47), PluginTypeBlob (27-27)
database/plugin/option.go (4)
PluginOption (34-42), PluginOptionTypeString (28-28), PluginOptionTypeUint (31-31), PluginOptionTypeBool (29-29)
database/plugin/blob/badger/options.go (5)
BlobStoreBadgerOptionFunc (23-23), WithDataDir (42-46), WithBlockCacheSize (49-53), WithIndexCacheSize (56-60), WithGc (63-67)
database/plugin/metadata/sqlite/database.go (1)
database/plugin/metadata/sqlite/options.go (4)
WithDataDir (42-46), WithLogger (26-30), WithPromRegistry (33-39), SqliteOptionFunc (23-23)
internal/config/config.go (2)
database/database.go (1)
Config (34-40)
database/plugin/register.go (6)
PopulateCmdlineOptions (55-64), GetPlugins (100-108), PluginTypeBlob (27-27), PluginTypeMetadata (26-26), ProcessConfig (83-98), ProcessEnvVars (66-81)
node.go (1)
database/plugin/log.go (1)
Logger(18-27)
database/plugin/blob/aws/plugin_test.go (1)
database/plugin/blob/aws/plugin.go (1)
NewFromCmdlineOptions(67-84)
database/plugin/blob/badger/database.go (3)
database/plugin/blob/store.go (1)
New (35-61)
database/plugin/blob/badger/options.go (4)
BlobStoreBadgerOptionFunc (23-23), WithLogger (26-30), WithBlockCacheSize (49-53), WithIndexCacheSize (56-60)
database/plugin/blob/badger/logger.go (1)
NewBadgerLogger(28-35)
integration/benchmark_test.go (8)
database/database.go (2)
New (117-156), Config (34-40)
database/plugin/blob/aws/database.go (1)
New (45-107)
database/plugin/blob/gcs/database.go (1)
New (44-93)
database/plugin/blob/store.go (1)
New (35-61)
database/plugin/metadata/store.go (1)
New (150-176)
node.go (1)
New (49-62)
internal/config/config.go (1)
Config (58-78)
config.go (1)
Config(33-54)
database/plugin/metadata/store.go (3)
database/plugin/blob/store.go (1)
New (35-61)
database/plugin/metadata/sqlite/database.go (1)
New (48-58)
database/plugin/register.go (1)
GetPlugin(110-119)
database/plugin/blob/gcs/database.go (3)
database/plugin/blob/gcs/logger.go (2)
GcsLogger (24-26), NewGcsLogger (28-34)
database/plugin/blob/gcs/options.go (2)
BlobStoreGCSOptionFunc (23-23), WithCredentialsFile (49-53)
database/plugin/blob/store.go (1)
New(35-61)
database/database.go (4)
database/plugin/blob/aws/database.go (1)
New (45-107)
database/plugin/blob/gcs/database.go (1)
New (44-93)
database/plugin/blob/store.go (1)
New (35-61)
database/immutable/immutable.go (1)
New(40-48)
database/plugin/blob/store.go (3)
database/plugin/metadata/sqlite/database.go (1)
New (48-58)
database/plugin/metadata/store.go (1)
New (150-176)
database/plugin/register.go (2)
GetPlugin (110-119), PluginTypeBlob (27-27)
database/database_test.go (1)
database/plugin/log.go (1)
Logger(18-27)
🪛 GitHub Actions: golangci-lint
database/plugin/blob/gcs/database.go
[error] 27-27: golangci-lint: File is not properly formatted (gci).
🪛 GitHub Check: lint
database/plugin/blob/gcs/database.go
[failure] 27-27:
File is not properly formatted (gci)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: nilaway
- GitHub Check: Analyze (go)
🔇 Additional comments (25)
peergov/peergov.go (2)
223-224: LGTM! Formatting improves readability.

The multi-line formatting of the conditional check makes the code more readable without altering logic. The change is consistently applied in both the local roots and public roots handling sections.
Also applies to: 263-264
232-234: LGTM! Formatting enhances code clarity.

The multi-line formatting of the slice removal operation makes the code easier to read and understand. The change is consistently applied in both peer deduplication sections.
Also applies to: 272-274
integration/benchmark_test.go (3)
19-64: LGTM!

The block loading logic correctly handles reading from the immutable DB, includes proper cleanup with defer, and gracefully handles insufficient test data by duplicating blocks.
236-300: LGTM!

The benchmark implementation is well-structured with proper setup, cleanup, timer reset, and allocation reporting. Transaction handling and error management are appropriate for benchmark code.
302-366: LGTM!

The benchmark implementation follows the same sound patterns as `benchmarkStorageBackend`, appropriately scaled to test 200 blocks for the test load scenario.

go.mod (1)
33-33: LGTM: Dependency correctly promoted to direct.

The promotion of `google.golang.org/api` from indirect to direct dependency is appropriate given the direct usage in GCS and cloud storage plugin implementations introduced in this PR.

database/plugin/register.go (1)
66-81: LGTM: Environment variable naming standardized.

The uppercase conversion for plugin type and name in the environment variable prefix aligns with standard environment variable conventions and the documented breaking changes in this PR.
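To illustrate the convention, here is a minimal sketch of upper-cased prefix construction; the exact `DINGO_…` format is an assumption for illustration, not the registry's verbatim output:

```go
package main

import (
	"fmt"
	"strings"
)

// envPrefix builds an environment-variable prefix from a plugin
// type and name, upper-casing both as described above.
func envPrefix(pluginType, pluginName string) string {
	return fmt.Sprintf(
		"DINGO_%s_%s_",
		strings.ToUpper(pluginType),
		strings.ToUpper(pluginName),
	)
}

func main() {
	fmt.Println(envPrefix("blob", "badger")) // DINGO_BLOB_BADGER_
}
```

Upper-casing both components means a config key like `blob.badger.data-dir` and an env var like `DINGO_BLOB_BADGER_DATA_DIR` stay mechanically derivable from each other.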
dingo.yaml.example (1)
12-56: LGTM: Configuration restructuring is well-documented.

The migration to a plugin-based configuration structure is clear, well-documented, and provides comprehensive examples for each plugin type (badger, gcs, s3, sqlite). The inline comments effectively guide users through the new configuration format.
database/plugin/blob/badger/options.go (1)
23-67: LGTM: Clean functional options implementation.

The functional options pattern is well-implemented, providing type-safe, composable configuration for the Badger blob store. The approach follows Go best practices and enables flexible plugin initialization.
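For reference, the pattern being praised can be sketched generically; the type and option names below are illustrative, not the actual Badger option funcs from database/plugin/blob/badger:

```go
package main

import "fmt"

// cfgStore stands in for a configurable backend.
type cfgStore struct {
	dataDir        string
	blockCacheSize uint64
}

type cfgOption func(*cfgStore)

func withDataDir(dir string) cfgOption {
	return func(s *cfgStore) { s.dataDir = dir }
}

func withBlockCacheSize(n uint64) cfgOption {
	return func(s *cfgStore) { s.blockCacheSize = n }
}

// newStore sets defaults first, then applies caller options so
// they can override the defaults.
func newStore(opts ...cfgOption) *cfgStore {
	s := &cfgStore{blockCacheSize: 256}
	for _, o := range opts {
		o(s)
	}
	return s
}

func main() {
	s := newStore(withDataDir("/tmp/db"), withBlockCacheSize(512))
	fmt.Println(s.dataDir, s.blockCacheSize)
}
```

The defaults-then-options ordering is what makes the constructor composable: callers only specify what they want to change.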
node.go (1)
73-77: LGTM: Database configuration aligned with plugin system.

The removal of `BlobCacheSize` from the database configuration correctly reflects the migration to plugin-based configuration, where cache settings are now handled via per-plugin options.

database/plugin/blob/blob.go (1)
18-18: LGTM: Plugin registration via init side effect.

The blank import correctly triggers the AWS plugin's `init()` function for registration, consistent with the existing plugin registration pattern.

database/database_test.go (1)
29-35: Test config correctly updated for plugin-based storage

Explicitly setting `BlobPlugin: "badger"` and `MetadataPlugin: "sqlite"` in `dbConfig` aligns this test with the new plugin-based configuration model and mirrors the declared defaults. This keeps the test resilient to future changes in `DefaultConfig` while making the dependency on specific plugins clear.

database/plugin/blob/store.go (1)
17-21: Blob `New` implementation via plugin registry looks solid and matches metadata pattern

The new `New` function cleanly wires blob stores through the plugin registry:
- Clear error when the named plugin is not found.
- Properly calls `Start()` and wraps startup errors with context.
- Verifies the plugin implements `BlobStore` before returning.

This mirrors the metadata store factory and keeps the blob layer decoupled from specific backends. No issues from my side.
Also applies to: 34-61
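The lookup → `Start()` → interface-assert flow described above can be sketched with stand-in types (illustrative only, not dingo's actual registry code):

```go
package main

import (
	"errors"
	"fmt"
)

// lifecyclePlugin and blobStore stand in for the registry's
// plugin and BlobStore interfaces.
type lifecyclePlugin interface {
	Start() error
}

type blobStore interface {
	lifecyclePlugin
	Get(key string) ([]byte, error)
}

type fakeBadger struct{ started bool }

func (f *fakeBadger) Start() error               { f.started = true; return nil }
func (f *fakeBadger) Get(string) ([]byte, error) { return []byte("v"), nil }

var registry = map[string]lifecyclePlugin{"badger": &fakeBadger{}}

// newBlobStore mirrors the factory shape: look up by name, start,
// then assert the narrower interface before returning.
func newBlobStore(name string) (blobStore, error) {
	p, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown blob plugin: %s", name)
	}
	if err := p.Start(); err != nil {
		return nil, fmt.Errorf("failed to start plugin %s: %w", name, err)
	}
	bs, ok := p.(blobStore)
	if !ok {
		return nil, errors.New("plugin does not implement BlobStore")
	}
	return bs, nil
}

func main() {
	bs, err := newBlobStore("badger")
	if err != nil {
		panic(err)
	}
	v, _ := bs.Get("k")
	fmt.Println(string(v))
}
```

The interface assertion is what keeps the registry generic while still giving callers a typed store.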
database/plugin/blob/gcs/options.go (1)
23-53: GCS option pattern looks consistent and safe

The option funcs are straightforward, correctly target the BlobStoreGCS fields, and `WithLogger` uses `NewGcsLogger` so a nil logger is handled safely. No changes needed.

database/plugin/metadata/sqlite/options.go (1)
23-46: SQLite option wiring is straightforward and matches the pattern

The `SqliteOptionFunc` pattern and the three option helpers cleanly set the expected fields and align with how `NewWithOptions` is described to work. Looks good as-is.

database/plugin/blob/aws/database.go (1)
109-154: Option‑based S3 constructor and plugin lifecycle look consistent
`NewWithOptions` applies the option funcs, sets a sane default logger, validates that a bucket is provided, loads AWS config, optionally overrides the region, and then initializes the client and metrics via `init()`. The `Start`/`Stop` no-ops are acceptable since initialization happens in the constructors and the S3 client doesn't require explicit teardown. This aligns well with the new plugin/option pattern.

Also applies to: 225-235
database/plugin/metadata/sqlite/database.go (2)
60-72: Logger initialization fix looks good.

The default logger is now correctly initialized AFTER applying options and uses `os.Stderr` instead of a nil writer, resolving the panic issue flagged in the previous review.
199-208: LGTM!

The plugin lifecycle methods are appropriately implemented: `Start()` is a no-op since initialization occurs in `New()`/`NewWithOptions()`, and `Stop()` properly delegates to `Close()`.

database/plugin/blob/gcs/database.go (2)
117-121: Credentials file is now properly wired.

The `credentialsFile` field is now correctly passed to the GCS client via `option.WithCredentialsFile()`, addressing the previous review concern.
176-188: LGTM!

The plugin lifecycle methods are correctly implemented with proper nil-checking in `Stop()`.

database/plugin/blob/aws/options.go (1)
23-60: LGTM!

The functional option pattern is correctly implemented. All options properly configure the `BlobStoreS3` instance, and `WithLogger` appropriately wraps the logger with `NewS3Logger` for safe nil handling.

database/plugin/blob/badger/database.go (2)
43-51: LGTM!

The option-based constructor correctly sets defaults first, then applies user-provided options, allowing options to override defaults as expected.
137-146: LGTM!

The plugin lifecycle methods follow the established pattern: `Start()` as a no-op and `Stop()` delegating to `Close()`.

internal/config/config.go (1)
170-212: LGTM!

The plugin configuration handling is well-structured: it uses `tempConfig` to separately parse plugin sections, maintains backward compatibility by falling back to direct unmarshalling, and properly processes plugin-specific configuration via `plugin.ProcessConfig()`.

integration/integration_test.go (1)
38-56: No idempotency issues found — plugins are safe for multiple Stop() calls.

The verification confirms both underlying implementations are safe:
- Badger's DB.Close is idempotent via sync.Once, so multiple calls are harmless.
- SQLite's db.Close() is safe to call multiple times; subsequent calls will return an error but won't crash.
The test pattern (explicit Stop() calls followed by deferred Stop() calls) is unconventional but poses no risk—both plugins will handle the duplicate calls safely.
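For readers unfamiliar with the idiom, the `sync.Once` idempotent-close pattern mentioned for Badger can be sketched like this (illustrative type, not Badger's or dingo's actual implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// store demonstrates making Close safe to call repeatedly:
// sync.Once guarantees the teardown body runs exactly once.
type store struct {
	closeOnce sync.Once
	closed    bool
}

func (s *store) Close() error {
	s.closeOnce.Do(func() {
		s.closed = true // real code would release resources here
	})
	return nil
}

func main() {
	s := &store{}
	s.Close()
	s.Close() // second call is a harmless no-op
	fmt.Println(s.closed)
}
```

This is why the explicit-then-deferred `Stop()` pattern in the test, while unconventional, cannot double-free anything.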
ddddc4f to
a304185
Compare
Actionable comments posted: 1
♻️ Duplicate comments (1)
internal/integration/benchmark_test.go (1)
90-125: Normalize GCS/S3 bucket URIs to avoid double-prefixing.

You've correctly added the `gcs://` and `s3://` prefixes so the cloud blob plugins receive proper URIs. However, when `DINGO_TEST_GCS_BUCKET` or `DINGO_TEST_S3_BUCKET` are already set to a URI (e.g., `gcs://my-bucket` or `s3://my-bucket/prefix`), concatenating the scheme again will produce invalid URIs like `gcs://gcs://my-bucket`.

A more robust pattern is to normalize the env value once and then pass it through unchanged:
```diff
-import (
-	"fmt"
-	"os"
-	"path/filepath"
-	"testing"
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
```

```diff
 if hasGCSCredentials() {
 	testBucket := os.Getenv("DINGO_TEST_GCS_BUCKET")
 	if testBucket == "" {
 		testBucket = "dingo-test-bucket"
 	}
+	if !strings.HasPrefix(testBucket, "gcs://") {
+		testBucket = "gcs://" + testBucket
+	}
 	backends = append(backends, struct {
 		name   string
 		config *database.Config
 	}{
 		name: "GCS",
 		config: &database.Config{
 			BlobPlugin:     "gcs",
-			DataDir:        "gcs://" + testBucket,
+			DataDir:        testBucket,
 			MetadataPlugin: "sqlite",
 		},
 	})
 }
```

```diff
 if hasS3Credentials() {
 	testBucket := os.Getenv("DINGO_TEST_S3_BUCKET")
 	if testBucket == "" {
 		testBucket = "dingo-test-bucket"
 	}
+	if !strings.HasPrefix(testBucket, "s3://") {
+		testBucket = "s3://" + testBucket
+	}
 	backends = append(backends, struct {
 		name   string
 		config *database.Config
 	}{
 		name: "S3",
 		config: &database.Config{
 			BlobPlugin:     "s3",
-			DataDir:        "s3://" + testBucket,
+			DataDir:        testBucket,
 			MetadataPlugin: "sqlite",
 		},
 	})
 }
```
BenchmarkTestLoad’s GCS/S3 sections.Also applies to: 160-194
🧹 Nitpick comments (5)
database/plugin/blob/gcs/options_test.go (1)
25-75: Tests cover the basics; consider integration testing as a future enhancement.

The unit tests verify that each option correctly sets its corresponding field. For more comprehensive coverage, consider adding tests that:
- Verify options work correctly when passed to `NewWithOptions`
- Test edge cases like nil/empty values
- Validate the interaction between multiple options
However, the current tests are sufficient for verifying the core option functionality.
internal/integration/integration_test.go (2)
40-58: Remove redundant `Stop()` calls.

The test calls `Stop()` explicitly (lines 52-58) after already deferring it (lines 44, 49). While this works if `Stop()` is idempotent, the explicit calls are redundant — the deferred calls will execute at function exit.

Apply this diff to remove the redundant calls:
```diff
 if err := badgerPlugin.Start(); err != nil {
 	t.Fatalf("failed to start badger plugin: %v", err)
 }
 defer badgerPlugin.Stop()

 if err := sqlitePlugin.Start(); err != nil {
 	t.Fatalf("failed to start sqlite plugin: %v", err)
 }
 defer sqlitePlugin.Stop()
-
-	// Test that we can stop plugins
-	if err := badgerPlugin.Stop(); err != nil {
-		t.Fatalf("failed to stop badger plugin: %v", err)
-	}
-
-	if err := sqlitePlugin.Stop(); err != nil {
-		t.Fatalf("failed to stop sqlite plugin: %v", err)
-	}
 }
```
161-196: Consider enhancing the test to actually load and validate the configuration.

The test creates a temporary config file but only validates that it exists. To provide more confidence in the integration between configuration and plugin instantiation, consider actually loading the config and verifying the plugin settings are applied correctly.
Example enhancement:
```go
// After line 189, add:
cfg, err := LoadConfig(tmpFile.Name())
if err != nil {
	t.Fatalf("failed to load config: %v", err)
}
if cfg.BlobPlugin != "badger" {
	t.Errorf("expected blob plugin 'badger', got %q", cfg.BlobPlugin)
}
if cfg.MetadataPlugin != "sqlite" {
	t.Errorf("expected metadata plugin 'sqlite', got %q", cfg.MetadataPlugin)
}
```

internal/config/config.go (1)
102-118: Consider using a sentinel error for the plugin list case.

Returning `errors.New("plugin list requested")` treats a successful operation (listing plugins) as an error. While this pattern is pragmatic for controlling program flow (similar to `flag.ErrHelp`), a predefined sentinel error would be clearer and easier for callers to distinguish from actual errors.

Consider defining a sentinel error:
```go
var ErrPluginListRequested = errors.New("plugin list requested")
```

Then return it consistently:
```diff
 if c.BlobPlugin == "list" {
 	fmt.Println("Available blob plugins:")
 	blobPlugins := plugin.GetPlugins(plugin.PluginTypeBlob)
 	for _, p := range blobPlugins {
 		fmt.Printf("  %s: %s\n", p.Name, p.Description)
 	}
-	return errors.New("plugin list requested")
+	return ErrPluginListRequested
 }

 if c.MetadataPlugin == "list" {
 	fmt.Println("Available metadata plugins:")
 	metadataPlugins := plugin.GetPlugins(plugin.PluginTypeMetadata)
 	for _, p := range metadataPlugins {
 		fmt.Printf("  %s: %s\n", p.Name, p.Description)
 	}
-	return errors.New("plugin list requested")
+	return ErrPluginListRequested
 }
```
65-88: Verify that "memory" vs "disk" semantics align with `database.New` defaults.

Both benchmarks define a "memory" backend as:
```go
config: &database.Config{
	BlobPlugin:     "badger",
	DataDir:        "",
	MetadataPlugin: "sqlite",
},
```

But the
`database.New` snippet provided shows:

```go
if configCopy.DataDir == "" {
	configCopy.DataDir = DefaultConfig.DataDir
}
```

If
`DefaultConfig.DataDir` is non-empty, then the "memory" backend will quietly fall back to that on-disk location rather than being truly ephemeral, which makes the "memory" label misleading and may skew benchmark comparisons.

Please double-check:
- What
`DefaultConfig.DataDir` is set to in this branch.
- in-memory badger (via plugin options), or
- using some specific temp dir, or
- just “default badger” vs. a “dedicated temp-disk badger” backend.
If the goal is a genuine in-memory variant, consider wiring the corresponding plugin option explicitly instead of relying on
`DataDir == ""`.

Based on relevant_code_snippets, where
`database.New` overwrites empty `DataDir`.

Also applies to: 134-157
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
- PLUGIN_DEVELOPMENT.md (1 hunks)
- README.md (1 hunks)
- chain/chain_test.go (1 hunks)
- cmd/dingo/main.go (3 hunks)
- config.go (0 hunks)
- database/database.go (1 hunks)
- database/database_test.go (1 hunks)
- database/plugin/blob/aws/database.go (4 hunks)
- database/plugin/blob/aws/options.go (1 hunks)
- database/plugin/blob/aws/plugin.go (1 hunks)
- database/plugin/blob/aws/plugin_test.go (1 hunks)
- database/plugin/blob/badger/database.go (3 hunks)
- database/plugin/blob/badger/options.go (1 hunks)
- database/plugin/blob/badger/options_test.go (1 hunks)
- database/plugin/blob/badger/plugin.go (1 hunks)
- database/plugin/blob/blob.go (1 hunks)
- database/plugin/blob/gcs/database.go (3 hunks)
- database/plugin/blob/gcs/options.go (1 hunks)
- database/plugin/blob/gcs/options_test.go (1 hunks)
- database/plugin/blob/gcs/plugin.go (1 hunks)
- database/plugin/blob/gcs/plugin_test.go (1 hunks)
- database/plugin/blob/store.go (2 hunks)
- database/plugin/metadata/sqlite/database.go (4 hunks)
- database/plugin/metadata/sqlite/options.go (1 hunks)
- database/plugin/metadata/sqlite/options_test.go (1 hunks)
- database/plugin/metadata/sqlite/plugin.go (1 hunks)
- database/plugin/metadata/store.go (2 hunks)
- database/plugin/register.go (2 hunks)
- database/plugin/register_test.go (1 hunks)
- dingo.yaml.example (1 hunks)
- go.mod (1 hunks)
- internal/config/config.go (6 hunks)
- internal/config/config_test.go (0 hunks)
- internal/integration/benchmark_test.go (1 hunks)
- internal/integration/integration_test.go (1 hunks)
- internal/node/load.go (1 hunks)
- internal/node/node.go (0 hunks)
- node.go (1 hunks)
💤 Files with no reviewable changes (3)
- internal/config/config_test.go
- config.go
- internal/node/node.go
🚧 Files skipped from review as they are similar to previous changes (20)
- go.mod
- database/database_test.go
- database/plugin/register_test.go
- database/plugin/blob/gcs/plugin_test.go
- node.go
- database/plugin/blob/badger/options_test.go
- README.md
- database/plugin/register.go
- database/plugin/metadata/sqlite/plugin.go
- dingo.yaml.example
- database/plugin/metadata/sqlite/options_test.go
- database/plugin/blob/badger/options.go
- PLUGIN_DEVELOPMENT.md
- database/plugin/blob/badger/database.go
- database/plugin/blob/aws/options.go
- database/plugin/blob/gcs/plugin.go
- database/plugin/blob/aws/plugin.go
- cmd/dingo/main.go
- database/plugin/metadata/store.go
- database/plugin/metadata/sqlite/options.go
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
internal/integration/benchmark_test.go
chain/chain_test.go
📚 Learning: 2025-11-20T02:07:07.073Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T02:07:07.073Z
Learning: In the blinklabs-io/dingo repository, range over int is valid Go syntax. The repository uses Go 1.24.0 (minimum Go 1.23), and range over int has been supported since Go 1.22. This syntax should not be flagged as invalid.
Applied to files:
database/plugin/blob/gcs/database.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: chain.Chain.AddBlock expects a github.com/blinklabs-io/gouroboros/ledger.Block. In tests, construct a database/models.Block only as a container and call its Decode() to obtain the gouroboros ledger.Block (which also satisfies github.com/blinklabs-io/gouroboros/ledger/common.Block) before inserting into the chain.
Applied to files:
chain/chain_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Applied to files:
chain/chain_test.go
📚 Learning: 2025-10-22T20:19:25.277Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.
Applied to files:
chain/chain_test.go
🧬 Code graph analysis (9)
internal/integration/benchmark_test.go (1)
  database/database.go (2): New (116-163), Config (34-40)
internal/node/load.go (1)
  database/plugin/log.go (1): Logger (18-27)
database/plugin/blob/gcs/options.go (4)
  database/plugin/blob/gcs/database.go (1): BlobStoreGCS (32-41)
  database/plugin/blob/aws/options.go (2): WithLogger (26-30), WithPromRegistry (33-39)
  config.go (1): WithLogger (156-160)
  database/plugin/blob/gcs/logger.go (1): NewGcsLogger (28-34)
internal/integration/integration_test.go (6)
  database/plugin/register.go (5): GetPlugins (100-108), PluginTypeBlob (27-27), PluginTypeMetadata (26-26), GetPlugin (110-119), PluginEntry (41-47)
  database/plugin/blob/aws/database.go (1): NewWithOptions (110-154)
  database/plugin/blob/gcs/database.go (1): NewWithOptions (96-147)
  database/plugin/metadata/sqlite/database.go (1): NewWithOptions (61-136)
  database/plugin/blob/aws/options.go (1): WithBucket (42-46)
  database/plugin/blob/gcs/options.go (1): WithBucket (42-46)
internal/config/config.go (2)
  database/database.go (2): Config (34-40), New (116-163)
  database/plugin/register.go (6): PopulateCmdlineOptions (55-64), GetPlugins (100-108), PluginTypeBlob (27-27), PluginTypeMetadata (26-26), ProcessConfig (83-98), ProcessEnvVars (66-81)
database/plugin/blob/gcs/database.go (3)
  database/plugin/blob/gcs/logger.go (2): GcsLogger (24-26), NewGcsLogger (28-34)
  database/plugin/blob/gcs/options.go (2): BlobStoreGCSOptionFunc (23-23), WithCredentialsFile (49-53)
  database/plugin/blob/store.go (1): New (35-61)
database/plugin/blob/aws/plugin_test.go (1)
  database/plugin/blob/aws/plugin.go (1): NewFromCmdlineOptions (67-84)
database/plugin/metadata/sqlite/database.go (4)
  database/plugin/blob/badger/options.go (3): WithDataDir (42-46), WithLogger (26-30), WithPromRegistry (33-39)
  database/plugin/metadata/sqlite/options.go (4): WithDataDir (42-46), WithLogger (26-30), WithPromRegistry (33-39), SqliteOptionFunc (23-23)
  config.go (1): WithLogger (156-160)
  database/immutable/immutable.go (1): New (40-48)
chain/chain_test.go (1)
  database/plugin/log.go (1): Logger (18-27)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: docker (ubuntu-latest, amd64)
- GitHub Check: go-test (1.24.x, macos-latest)
- GitHub Check: nilaway
- GitHub Check: Analyze (go)
🔇 Additional comments (26)
database/plugin/blob/gcs/options.go (1)
23-53: LGTM! Clean functional options implementation.
The option constructors follow the functional options pattern correctly and are consistent with the AWS S3 implementation. The WithLogger option appropriately wraps the logger using NewGcsLogger, and all options perform direct field assignment as expected.
database/plugin/blob/gcs/database.go (2)
95-147: Excellent! The credentialsFile is now properly wired into client creation.
The NewWithOptions implementation correctly addresses the previous review concern by conditionally adding option.WithCredentialsFile to the client options when a credentials file is provided (lines 119-124). The function validates required fields, applies sensible defaults, and properly initializes the GCS client with the configured options.
179-191: LGTM! Plugin lifecycle methods are correctly implemented.
Start() being a no-op is appropriate since initialization occurs during construction in NewWithOptions. Stop() properly closes the client if present, ensuring clean resource cleanup.
database/database.go (1)
122-139: LGTM! Config copy pattern prevents mutation.
The config copy pattern is well-implemented and prevents unintended mutation of the caller's config. Applying defaults only to empty fields is the correct approach.
chain/chain_test.go (1)
109-115: LGTM! Test configuration updated for plugin system.
The test configuration correctly uses the new plugin-based fields (BlobPlugin, MetadataPlugin) instead of the removed BlobCacheSize field.
database/plugin/blob/blob.go (1)
18-18: LGTM! Standard blank import for plugin registration.
The blank import ensures the AWS plugin's init() function executes for plugin registration.
database/plugin/blob/aws/plugin_test.go (1)
21-40: LGTM! Race condition fixed with proper mutex protection.
The test now correctly protects all mutations to the global cmdlineOptions variable with cmdlineOptionsMutex.Lock()/Unlock(), addressing the race condition flagged in the previous review.
internal/node/load.go (1)
55-61: LGTM! Database config wiring updated for plugin system.
The database configuration correctly uses the new plugin fields (BlobPlugin, MetadataPlugin) from the config, aligning with the plugin-based architecture.
database/plugin/blob/badger/plugin.go (1)
93-107: LGTM! Proper mutex usage for concurrent access.
The function correctly uses RLock() to read the shared cmdlineOptions and constructs the plugin safely. The panic on error is appropriate for init-time plugin construction.
database/plugin/blob/store.go (1)
34-61: LGTM! Well-structured plugin initialization with comprehensive error handling.
The registry-based lookup properly validates the plugin at each stage (lookup, start, type assertion) with clear error messages.
database/plugin/metadata/sqlite/database.go (2)
60-72: LGTM! Logger initialization is now safe.
The refactored option pattern correctly applies options first, then sets a safe default logger with a non-nil writer (os.Stderr). This addresses the previous critical issue where a nil writer would cause panics.
Note: There's a minor inconsistency where the default logger uses TextHandler with os.Stderr (line 71) while init() uses JSONHandler with io.Discard (line 142), but both are safe implementations.
199-208: LGTM! Plugin interface methods correctly implemented.
The Start() and Stop() methods properly implement the plugin.Plugin interface. Start() being a no-op is correct since initialization happens in New(), and Stop() appropriately delegates to Close().
database/plugin/blob/aws/database.go (4)
40-40: LGTM!
The addition of the region field enables region configuration via the options pattern, which aligns well with the plugin system's goal of supporting flexible configuration.
109-154: LGTM!
The NewWithOptions function correctly implements the option pattern with proper error handling and resource cleanup. The 30-second timeout context is appropriately managed: created before AWS config loading and canceled in init() or on error paths.
225-235: LGTM!
The no-op Start and Stop methods are appropriate for the S3 client, which is stateless and doesn't require explicit lifecycle management. The initialization happens during construction, and the S3 SDK client handles its own cleanup.
57-64: No breaking change in practice; the update correctly reflects AWS S3 constraints.
Amazon S3 only uses lowercase for bucket names, and buckets with uppercase letters in the name can't be accessed via the S3 API. The codebase search found all existing S3 URIs use the lowercase s3:// scheme. Removing the strings.ToLower() call doesn't break any real-world usage; it simplifies the code while remaining compliant with AWS constraints. The change is correct.
61-108: LGTM!
The test comprehensively validates that all plugins have descriptions and verifies specific expected descriptions for known plugins.
110-159: LGTM!
The test appropriately exercises the plugin lifecycle for plugins that can be instantiated with defaults, and correctly notes that cloud plugins require additional configuration.
198-216: LGTM!
The test appropriately validates that the plugin registry can retrieve different plugin entries, which is the core functionality needed for plugin switching.
218-343: LGTM!
The cloud plugin tests appropriately skip when credentials are unavailable and use environment variables for configuration. The credential detection helpers check standard locations, and the plugin entry finder is straightforward.
internal/config/config.go (4)
48-57: LGTM!
The default plugin constants and tempConfig struct properly support the new plugin-based configuration structure described in the PR.
69-70: LGTM!
The BlobPlugin and MetadataPlugin fields replace the hardcoded backend configuration and integrate properly with both YAML and environment variable processing.
140-141: LGTM!
The default plugin values align with the PR description and ensure sensible defaults for blob (badger) and metadata (sqlite) plugins.
170-228: LGTM!
The LoadConfig updates properly handle both the new plugin-based configuration structure and maintain backward compatibility with the previous flat structure. Plugin environment variables are processed in the correct order (after main environment processing).
203-267: Benchmark harness for key/value reads looks solid.
benchmarkStorageBackend correctly:
- Spins up a fresh DB per sub-benchmark, with a temporary on-disk path for badger "disk" backends.
- Pre-populates with a fixed number of blocks outside the timed section.
- Calls b.ResetTimer() and b.ReportAllocs() before the hot loop.
- Uses read-only transactions during the benchmark and bails out via b.Fatalf on any error.
Overall this is a clean, representative KV read benchmark against the plugin-backed db.Blob() API.
269-333: BenchmarkTestLoad helper is consistent and appropriate.
benchmarkTestLoad mirrors the structure of benchmarkStorageBackend with a higher block count (200), which matches the "load first 200 blocks" intent:
- Same temp-dir handling for badger disk backends.
- Same DB lifecycle and error handling.
- Pre-population outside the timed loop and transaction-based reads inside.
No issues from a correctness or benchmark-harness standpoint.
// loadBlockData loads real block data from testdata chunks for benchmarking
func loadBlockData(numBlocks int) ([][]byte, error) {
	var blocks [][]byte
	testdataDir := "../database/immutable/testdata"

	// Open immutable database to parse chunks
	imm, err := immutable.New(testdataDir)
	if err != nil {
		return nil, fmt.Errorf("failed to open immutable DB: %v", err)
	}

	// Create iterator from origin (slot 0) to get all blocks
	origin := ocommon.NewPoint(0, make([]byte, 32))
	iter, err := imm.BlocksFromPoint(origin)
	if err != nil {
		return nil, fmt.Errorf("failed to create block iterator: %v", err)
	}
	defer iter.Close()

	// Extract blocks
	for len(blocks) < numBlocks {
		block, err := iter.Next()
		if err != nil {
			return nil, fmt.Errorf("failed to read block: %v", err)
		}
		if block == nil {
			break
		}

		blocks = append(blocks, block.Cbor)
	}

	if len(blocks) < numBlocks {
		// If we don't have enough blocks, duplicate the ones we have
		for len(blocks) < numBlocks {
			for _, block := range blocks {
				if len(blocks) >= numBlocks {
					break
				}
				blocks = append(blocks, block)
			}
		}
	}

	return blocks[:numBlocks], nil
}
🧩 Analysis chain
Handle zero-block edge case and consider closing immutable DB handle.
loadBlockData assumes that BlocksFromPoint will always yield at least one block. If, for any reason, the immutable testdata is empty or the iterator returns no blocks, len(blocks) stays 0 and the duplication loop will spin forever:
if len(blocks) < numBlocks {
for len(blocks) < numBlocks {
for _, block := range blocks {
// ...
}
}
}With an empty blocks slice, the inner for _, block := range blocks never runs, so len(blocks) never changes.
Two concrete improvements:
- Fail fast when no blocks are read to avoid a potential infinite loop and surface a clear error:
// Extract blocks
for len(blocks) < numBlocks {
block, err := iter.Next()
if err != nil {
return nil, fmt.Errorf("failed to read block: %v", err)
}
if block == nil {
break
}
blocks = append(blocks, block.Cbor)
}
+
+ if len(blocks) == 0 {
+ return nil, fmt.Errorf("no blocks read from immutable testdata at %s", testdataDir)
+ }
- Close the immutable DB if it exposes a Close-like method. You already defer iter.Close(), but immutable.New(testdataDir) likely opens files as well. If imm has a Close method, adding defer imm.Close() right after successful construction would avoid leaking file descriptors across benchmarks. Please confirm the exact API of immutable.New in database/immutable before changing this.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find the immutable package and check if it has a Close method
rg -n "func\s+\(.*immutable\)" --type=go | head -20
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for the immutable package definition and its methods
fd -e go | xargs grep -l "package immutable" | head -5
Length of output: 158
🏁 Script executed:
#!/bin/bash
# Search for New function in immutable and check what type it returns
rg -nA10 "func New\(" --type=go | grep -A10 "database/immutable" | head -30
Length of output: 1247
🏁 Script executed:
#!/bin/bash
# Check the benchmark_test.go file to understand full context
cat internal/integration/benchmark_test.go
Length of output: 7742
---
🏁 Script executed:
```shell
#!/bin/bash
# Check if ImmutableDb has a Close method
rg -nA5 "func \(\*ImmutableDb\) Close" database/immutable/ --type=go
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check the ImmutableDb struct definition and all its methods
rg -nA50 "type ImmutableDb struct" database/immutable/immutable.go --type=go
Length of output: 1197
🏁 Script executed:
#!/bin/bash
# Check the BlocksFromPoint method to see what iterator it returns
rg -nA15 "func \(\*ImmutableDb\) BlocksFromPoint" database/immutable/ --type=go
Length of output: 0
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for BlocksFromPoint definition with broader pattern
rg -n "BlocksFromPoint" database/immutable/ --type=go -A10 | head -30
Length of output: 662
🏁 Script executed:
#!/bin/bash
# Check if testdata directory exists and has blocks
ls -la database/immutable/testdata 2>/dev/null || echo "testdata directory not found or empty"
Length of output: 39782
Fix potential infinite loop when immutable DB yields no blocks.
The duplication loop at lines 41–47 will spin forever if blocks is empty:
if len(blocks) < numBlocks {
for len(blocks) < numBlocks {
for _, block := range blocks {
// ...
}
}
With an empty slice, the inner for loop never executes, so len(blocks) never changes.
Add a check to fail fast and surface a clear error:
// Extract blocks
for len(blocks) < numBlocks {
block, err := iter.Next()
if err != nil {
return nil, fmt.Errorf("failed to read block: %v", err)
}
if block == nil {
break
}
blocks = append(blocks, block.Cbor)
}
+
+ if len(blocks) == 0 {
+ return nil, fmt.Errorf("no blocks read from immutable testdata at %s", testdataDir)
+ }🤖 Prompt for AI Agents
In internal/integration/benchmark_test.go around lines 18–63, the duplication
loop can hang when imm.BlocksFromPoint yields zero blocks because the inner
range over an empty slice never runs; modify the function to check if
len(blocks) == 0 after iterating the iterator and return a descriptive error
(e.g., "no blocks found in testdata") instead of entering the duplication loop,
and keep the existing duplication logic only when there is at least one block to
replicate so the function either returns the requested number of blocks or fails
fast with a clear error.
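The fail-fast fix described above can be isolated into a small helper. This is a minimal sketch, not the repository's actual code: fillBlocks is a hypothetical name, and the duplication logic mirrors the reviewed loadBlockData loop.

```go
package main

import (
	"errors"
	"fmt"
)

// fillBlocks pads blocks up to numBlocks by repeating existing entries.
// The empty-input guard is what prevents the duplication loop from
// spinning forever when the iterator yielded no blocks.
func fillBlocks(blocks [][]byte, numBlocks int) ([][]byte, error) {
	if len(blocks) == 0 {
		return nil, errors.New("no blocks found in testdata")
	}
	for len(blocks) < numBlocks {
		for _, block := range blocks {
			if len(blocks) >= numBlocks {
				break
			}
			blocks = append(blocks, block)
		}
	}
	return blocks[:numBlocks], nil
}

func main() {
	got, err := fillBlocks([][]byte{{0x01}, {0x02}}, 5)
	fmt.Println(len(got), err) // 5 <nil>

	_, err = fillBlocks(nil, 5)
	fmt.Println(err != nil) // true
}
```

With the guard in place, an empty testdata directory produces an immediate, descriptive error instead of a hung benchmark.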
a304185 to 64465cf Compare
promRegistry: promRegistry,
// Set defaults
blockCacheSize: 805306368, // 768MB
indexCacheSize: 268435456, // 256MB
Use the constants from badger/plugin.go
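The suggestion above, replacing the magic numbers with the named defaults, could look roughly like this. The constant names follow the DefaultBlockCacheSize/DefaultIndexCacheSize identifiers referenced elsewhere in this review; the struct body is illustrative, not the plugin's real definition.

```go
package main

import "fmt"

// Named defaults in the spirit of badger/plugin.go's DefaultBlockCacheSize
// and DefaultIndexCacheSize, so the constructor and the plugin registration
// cannot drift apart. Values match the reviewed diff.
const (
	DefaultBlockCacheSize int64 = 805306368 // 768 MiB
	DefaultIndexCacheSize int64 = 268435456 // 256 MiB
)

type blobStoreBadger struct {
	blockCacheSize int64
	indexCacheSize int64
}

func newBlobStore() *blobStoreBadger {
	return &blobStoreBadger{
		blockCacheSize: DefaultBlockCacheSize, // instead of a bare 805306368
		indexCacheSize: DefaultIndexCacheSize, // instead of a bare 268435456
	}
}

func main() {
	s := newBlobStore()
	fmt.Println(s.blockCacheSize/(1<<20), s.indexCacheSize/(1<<20)) // 768 256
}
```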
64465cf to 7804a5f Compare
Actionable comments posted: 2
♻️ Duplicate comments (2)
database/plugin/blob/aws/plugin.go (1)
23-30: Align cmdlineOptions synchronization and avoid panicking on configuration errors
Two issues here mirror prior feedback:
- cmdlineOptionsMutex only guards reads: NewFromCmdlineOptions uses RLock, but writes to cmdlineOptions via the Dest pointers in init are not protected by the same mutex. As-is, the RWMutex doesn't provide real race protection. Either remove the mutex entirely if all Dest writes happen single-threaded before any call to NewFromCmdlineOptions, or ensure any code mutating cmdlineOptions acquires Lock/Unlock on this mutex.
- NewFromCmdlineOptions panics on NewWithOptions error. Since NewWithOptions can fail due to user/config issues (e.g., AWS config, missing bucket), this turns misconfiguration into a hard crash. A more robust pattern is to keep NewWithOptions side-effect-light and move validation/remote calls to Start(), so errors surface via the plugin lifecycle instead of panics.
Also applies to: 67-84
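The symmetric locking the first point asks for can be sketched as follows. This is a standalone illustration, not the plugin's code: the options struct and accessor names are invented, but the RWMutex discipline (Lock for writers, RLock for readers) is the fix being requested.

```go
package main

import (
	"fmt"
	"sync"
)

type options struct{ bucket string }

var (
	cmdlineOptions      options
	cmdlineOptionsMutex sync.RWMutex
)

// setBucket is a writer: it must take the full write lock. Guarding only
// the readers with RLock, as the reviewed code does, leaves writes racy.
func setBucket(v string) {
	cmdlineOptionsMutex.Lock()
	defer cmdlineOptionsMutex.Unlock()
	cmdlineOptions.bucket = v
}

// currentBucket is a reader: RLock is sufficient and allows concurrency.
func currentBucket() string {
	cmdlineOptionsMutex.RLock()
	defer cmdlineOptionsMutex.RUnlock()
	return cmdlineOptions.bucket
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			setBucket(fmt.Sprintf("bucket-%d", i))
			_ = currentBucket()
		}(i)
	}
	wg.Wait()
	fmt.Println(currentBucket() != "") // true
}
```

Run with `go test -race` (or `go run -race`) to confirm the race detector stays quiet once writers hold the write lock.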
database/plugin/metadata/sqlite/plugin.go (1)
23-28: Fix cmdlineOptions RWMutex asymmetry and handle NewWithOptions errors gracefully
This sqlite metadata plugin has the same patterns as the AWS blob plugin:
- cmdlineOptionsMutex is only used with RLock/RUnlock in NewFromCmdlineOptions, while the data-dir field is written via Dest: &(cmdlineOptions.dataDir) without taking Lock. Either remove the mutex if initialization is guaranteed single-threaded, or ensure writers also use Lock/Unlock so the RWMutex actually guards concurrent access.
- NewFromCmdlineOptions panics if NewWithOptions returns an error. Since NewWithOptions can fail on configuration issues (e.g., bad data-dir, DB open errors), panicking makes misconfiguration unrecoverable. Prefer returning a plugin that performs validation in Start() (and returns an error there), or otherwise threading the error back to the caller instead of panicking.
Also applies to: 51-64
🧹 Nitpick comments (11)
PLUGIN_DEVELOPMENT.md (1)
96-110: Clarify how CLI flags interact with env/YAML precedence
You document precedence as "env > YAML > defaults" and separately describe CLI flags, but don't state where CLI fits into that order. It would help plugin authors if this section briefly said how CLI flags interact with env/YAML (e.g., whether flags override env/YAML or are just another way of setting the same cmdline option variables).
database/plugin/metadata/store.go (1)
149-175: Plugin-based metadata store factory is correct; logic mirrors blob store
The registry lookup, startup, and type assertion flow is solid: you handle "not found", Start() failures, and non-MetadataStore plugins with clear errors. This matches the blob store factory and keeps metadata wiring consistent.
If more plugin-backed store types are added later, consider extracting a small helper (e.g., startAndAssert(pluginType, name, targetInterfaceName)) to avoid repeating this pattern across blob/metadata.
internal/integration/benchmark_test.go (1)
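A generic version of the suggested helper could be sketched like this. Everything here is hypothetical: the Plugin interface, the registry map, and the BlobStore example stand in for the real database/plugin registry, and startAndAssert is the reviewer's proposed name rather than an existing function.

```go
package main

import (
	"errors"
	"fmt"
)

type Plugin interface {
	Start() error
	Stop() error
}

var registry = map[string]Plugin{}

// startAndAssert centralizes the lookup -> Start -> interface-assertion
// sequence that both the blob and metadata store factories repeat.
func startAndAssert[T any](name string) (T, error) {
	var zero T
	p, ok := registry[name]
	if !ok {
		return zero, fmt.Errorf("unknown plugin: %s", name)
	}
	if err := p.Start(); err != nil {
		return zero, fmt.Errorf("failed to start plugin %s: %w", name, err)
	}
	t, ok := any(p).(T)
	if !ok {
		return zero, errors.New("plugin does not implement requested interface")
	}
	return t, nil
}

// A toy blob store used to exercise the helper.
type BlobStore interface {
	Plugin
	Get(key string) ([]byte, error)
}

type memBlob struct{ data map[string][]byte }

func (m *memBlob) Start() error { m.data = map[string][]byte{"k": []byte("v")}; return nil }
func (m *memBlob) Stop() error  { return nil }
func (m *memBlob) Get(key string) ([]byte, error) { return m.data[key], nil }

func main() {
	registry["badger"] = &memBlob{}
	store, err := startAndAssert[BlobStore]("badger")
	if err != nil {
		panic(err)
	}
	v, _ := store.Get("k")
	fmt.Println(string(v)) // v
}
```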
18-75: Improve diagnostics in loadBlockData error paths
The zero-block guard avoids the potential infinite loop; nice. For easier debugging when immutable testdata is misconfigured, consider including testdataDir in the first two error messages so failures clearly point at the path:

- imm, err := immutable.New(testdataDir)
- if err != nil {
-     return nil, fmt.Errorf("failed to open immutable DB: %v", err)
- }
+ imm, err := immutable.New(testdataDir)
+ if err != nil {
+     return nil, fmt.Errorf("failed to open immutable DB at %s: %v", testdataDir, err)
+ }
@@
- iter, err := imm.BlocksFromPoint(origin)
- if err != nil {
-     return nil, fmt.Errorf("failed to create block iterator: %v", err)
- }
+ iter, err := imm.BlocksFromPoint(origin)
+ if err != nil {
+     return nil, fmt.Errorf("failed to create block iterator from %s: %v", testdataDir, err)
+ }
database/plugin/blob/gcs/plugin_test.go (1)
22-65: Make credential validation test paths OS-agnostic
Using hard-coded POSIX paths (/dev/null, /nonexistent/file.json) makes this test fragile on non-Unix platforms and tightly couples it to exact filesystem layouts.
Consider:
- Creating a temp file via t.TempDir()/os.CreateTemp for the "valid credentials file" case.
- Using a path inside t.TempDir() that you intentionally don't create for the "nonexistent credentials file" case (e.g., filepath.Join(t.TempDir(), "missing.json")).
This keeps the behavior under test the same while making the test portable and less environment-dependent.
database/plugin/blob/badger/database.go (1)
database/plugin/blob/badger/database.go (1)
101-114: Store and stop the GC ticker during Close to prevent unnecessary work on closed DB
initstarts a GC goroutine withblobGc(time.NewTicker(5 * time.Minute)), butClosedoes not stop the ticker or signal the goroutine. After DB close, the GC goroutine continues looping and callingRunValueLogGC, which returnsErrRejectedand gets logged as a warning repeatedly. This causes resource waste and log spam.Store the ticker on
BlobStoreBadgerand stop it inClose(andStop) to allow the GC goroutine to exit cleanly and prevent operations on a closed DB.database/database.go (1)
122-137: Prefer copying Config by value to avoid omissionsManually copying each field into
configCopyworks but is easy to miss whenConfiggrows. Consider:cfgVal := *config configCopy := &cfgValand then apply the plugin-name defaults on
configCopy. This keeps behavior the same while automatically carrying any future fields forward.Also applies to: 150-155
database/plugin/blob/aws/database.go (1)
109-154: Consider makingNewa thin wrapper aroundNewWithOptions
NewWithOptionsis now the canonical constructor (options, optional region override, AWS config load), whileNewduplicates much of the setup logic. To keep behavior in one place and reduce drift, consider:
- Parse
dataDirinNewintobucket/prefix.- Call
NewWithOptions(WithBucket(...), WithPrefix(...), WithLogger(logger), WithPromRegistry(promRegistry))instead of re-implementing client creation.This keeps legacy
Newworking while centralizing the construction and future changes inNewWithOptions.database/plugin/blob/gcs/database.go (1)
95-147: NewWithOptions correctly wirescredentialsFile; consider centralizing client creation
NewWithOptions now uses:

clientOpts = append(clientOpts, storage.WithDisabledClientMetrics())
if db.credentialsFile != "" {
	clientOpts = append(clientOpts, option.WithCredentialsFile(db.credentialsFile))
}

which makes the
credentials-fileoption effective and matches typical GCS client usage. To avoid future drift betweenNewandNewWithOptions, you could haveNewparse the legacydataDirintobucketNameand then callNewWithOptions(WithBucket(...), WithLogger(logger), WithPromRegistry(promRegistry))so there is a single path that constructs the client and bucket handle.Also applies to: 179-191
internal/integration/integration_test.go (1)
226-255: Avoid calling Stop() twice in cloud plugin tests
In both TestCloudPluginGCS and TestCloudPluginS3 you defer ...Stop() and then explicitly call Stop() again at the end of the test. This is harmless today but unnecessary and could hide issues if Stop() ever stops being idempotent. Keeping only the deferred call (or only the explicit call) would be cleaner.
TestCloudPluginGCSandTestCloudPluginS3youdefer ...Stop()and then explicitly callStop()again at the end of the test. This is harmless today but unnecessary and could hide issues ifStop()ever stops being idempotent. Keeping only the deferred call (or only the explicit call) would be cleaner.Also applies to: 257-285
database/plugin/metadata/sqlite/database.go (1)
199-203: Minor: Clarify comment reference.
The comment states "Database is already started in New()", but the initialization actually occurs in NewWithOptions. While New() does delegate to NewWithOptions, consider updating the comment for clarity:

 // Start implements the plugin.Plugin interface
 func (d *MetadataStoreSqlite) Start() error {
-	// Database is already started in New(), so this is a no-op
+	// Database is already started in NewWithOptions(), so this is a no-op
 	return nil
 }
217-236: Consider logging skipped configuration entries.Lines 227-234 silently skip configuration entries where the value is not
map[any]any. If a user provides malformed YAML (e.g.,badger: "some_string"instead ofbadger: {option: value}), the configuration will be ignored without warning. This could lead to confusion during troubleshooting.Consider adding logging or validation to alert users when configuration entries are skipped:
// Build plugin config map blobConfig := make(map[string]map[any]any) for k, v := range tempCfg.Database.Blob { if key, ok := k.(string); ok { if val, ok := v.(map[any]any); ok { blobConfig[key] = val + } else { + // Log skipped non-map config entries + fmt.Fprintf(os.Stderr, "warning: skipping blob config entry %q: expected map, got %T\n", key, v) } } }Apply similar logging for metadata configuration processing at lines 247-254.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
PLUGIN_DEVELOPMENT.md (1 hunks)
README.md (1 hunks)
chain/chain_test.go (1 hunks)
cmd/dingo/main.go (3 hunks)
config.go (0 hunks)
database/database.go (1 hunks)
database/database_test.go (1 hunks)
database/plugin/blob/aws/database.go (4 hunks)
database/plugin/blob/aws/options.go (1 hunks)
database/plugin/blob/aws/plugin.go (1 hunks)
database/plugin/blob/aws/plugin_test.go (1 hunks)
database/plugin/blob/badger/database.go (3 hunks)
database/plugin/blob/badger/options.go (1 hunks)
database/plugin/blob/badger/options_test.go (1 hunks)
database/plugin/blob/badger/plugin.go (1 hunks)
database/plugin/blob/blob.go (1 hunks)
database/plugin/blob/gcs/database.go (3 hunks)
database/plugin/blob/gcs/options.go (1 hunks)
database/plugin/blob/gcs/options_test.go (1 hunks)
database/plugin/blob/gcs/plugin.go (1 hunks)
database/plugin/blob/gcs/plugin_test.go (1 hunks)
database/plugin/blob/store.go (2 hunks)
database/plugin/metadata/sqlite/database.go (4 hunks)
database/plugin/metadata/sqlite/options.go (1 hunks)
database/plugin/metadata/sqlite/options_test.go (1 hunks)
database/plugin/metadata/sqlite/plugin.go (1 hunks)
database/plugin/metadata/store.go (2 hunks)
database/plugin/register.go (2 hunks)
database/plugin/register_test.go (1 hunks)
dingo.yaml.example (1 hunks)
go.mod (1 hunks)
internal/config/config.go (6 hunks)
internal/config/config_test.go (1 hunks)
internal/integration/benchmark_test.go (1 hunks)
internal/integration/integration_test.go (1 hunks)
internal/node/load.go (1 hunks)
internal/node/node.go (0 hunks)
node.go (1 hunks)
💤 Files with no reviewable changes (2)
- internal/node/node.go
- config.go
🚧 Files skipped from review as they are similar to previous changes (18)
- database/plugin/blob/aws/options.go
- README.md
- database/plugin/metadata/sqlite/options_test.go
- node.go
- database/database_test.go
- internal/config/config_test.go
- database/plugin/blob/gcs/options_test.go
- database/plugin/register_test.go
- database/plugin/register.go
- cmd/dingo/main.go
- go.mod
- database/plugin/blob/badger/plugin.go
- database/plugin/blob/aws/plugin_test.go
- database/plugin/blob/gcs/plugin.go
- database/plugin/blob/badger/options.go
- database/plugin/metadata/sqlite/options.go
- database/plugin/blob/gcs/options.go
- internal/node/load.go
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
database/plugin/blob/badger/options_test.go
chain/chain_test.go
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: chain.Chain.AddBlock expects a github.com/blinklabs-io/gouroboros/ledger.Block. In tests, construct a database/models.Block only as a container and call its Decode() to obtain the gouroboros ledger.Block (which also satisfies github.com/blinklabs-io/gouroboros/ledger/common.Block) before inserting into the chain.
Applied to files:
chain/chain_test.go
📚 Learning: 2025-11-20T02:07:07.073Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T02:07:07.073Z
Learning: In the blinklabs-io/dingo repository, range over int is valid Go syntax. The repository uses Go 1.24.0 (minimum Go 1.23), and range over int has been supported since Go 1.22. This syntax should not be flagged as invalid.
Applied to files:
database/plugin/blob/gcs/database.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.
Applied to files:
internal/integration/benchmark_test.go
🧬 Code graph analysis (10)
database/plugin/metadata/sqlite/database.go (1)
  database/plugin/metadata/sqlite/options.go (4): WithDataDir (42-46), WithLogger (26-30), WithPromRegistry (33-39), SqliteOptionFunc (23-23)
database/plugin/blob/aws/plugin.go (4)
  database/plugin/register.go (3): Register (51-53), PluginEntry (41-47), PluginTypeBlob (27-27)
  database/plugin/option.go (2): PluginOption (34-42), PluginOptionTypeString (28-28)
  database/plugin/blob/aws/options.go (4): BlobStoreS3OptionFunc (23-23), WithBucket (42-46), WithRegion (49-53), WithPrefix (56-60)
  database/plugin/blob/aws/database.go (1): NewWithOptions (110-154)
database/plugin/blob/store.go (2)
  database/plugin/metadata/store.go (1): New (150-176)
  database/plugin/register.go (2): GetPlugin (110-119), PluginTypeBlob (27-27)
chain/chain_test.go (1)
  database/plugin/log.go (1): Logger (18-27)
database/plugin/metadata/store.go (2)
  database/plugin/blob/store.go (1): New (35-61)
  database/plugin/register.go (2): GetPlugin (110-119), PluginTypeMetadata (26-26)
database/plugin/blob/badger/database.go (3)
  database/plugin/blob/badger/options.go (4): BlobStoreBadgerOptionFunc (23-23), WithLogger (26-30), WithBlockCacheSize (49-53), WithIndexCacheSize (56-60)
  database/plugin/blob/badger/plugin.go (2): DefaultBlockCacheSize (25-25), DefaultIndexCacheSize (26-26)
  database/plugin/blob/badger/logger.go (1): NewBadgerLogger (28-35)
database/plugin/blob/gcs/database.go (5)
  database/plugin/blob/gcs/logger.go (2): GcsLogger (24-26), NewGcsLogger (28-34)
  database/plugin/blob/aws/database.go (2): NewWithOptions (110-154), New (45-107)
  database/plugin/blob/gcs/options.go (2): BlobStoreGCSOptionFunc (23-23), WithCredentialsFile (49-53)
  database/plugin/blob/badger/database.go (1): New (43-99)
  database/plugin/blob/store.go (1): New (35-61)
internal/config/config.go (8)
  database/database.go (3): New (116-161), Config (34-40), Database (43-48)
  database/plugin/blob/aws/database.go (1): New (45-107)
  database/plugin/blob/gcs/database.go (1): New (44-93)
  database/plugin/blob/store.go (1): New (35-61)
  database/plugin/metadata/sqlite/database.go (1): New (48-58)
  database/plugin/metadata/store.go (1): New (150-176)
  database/immutable/immutable.go (1): New (40-48)
  database/plugin/register.go (6): PopulateCmdlineOptions (55-64), GetPlugins (100-108), PluginTypeBlob (27-27), PluginTypeMetadata (26-26), ProcessConfig (83-98), ProcessEnvVars (66-81)
database/database.go (4)
  internal/config/config.go (1): Config (69-89)
  config.go (1): Config (33-54)
  database/plugin/blob/store.go (1): New (35-61)
  database/plugin/metadata/sqlite/database.go (1): New (48-58)
internal/integration/benchmark_test.go (7)
  database/database.go (2): New (116-161), Config (34-40)
  database/plugin/blob/aws/database.go (1): New (45-107)
  database/plugin/blob/gcs/database.go (1): New (44-93)
  database/plugin/blob/store.go (1): New (35-61)
  database/plugin/metadata/store.go (1): New (150-176)
  internal/config/config.go (1): Config (69-89)
  config.go (1): Config (33-54)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: docker (ubuntu-latest, amd64)
- GitHub Check: nilaway
- GitHub Check: Analyze (go)
🔇 Additional comments (15)
database/plugin/blob/blob.go (1)
17-21: AWS blob plugin registration via blank import looks good
The added blank import ensures the AWS blob plugin is registered at init like the existing badger and gcs plugins; no additional wiring is needed here.
database/plugin/blob/store.go (1)
34-60: Blob plugin factory via registry looks correct and robust
The New implementation correctly handles missing plugins, startup failures, and type-assertion failures, and returns a started BlobStore instance when everything succeeds. This aligns with the metadata store pattern and the new plugin registry design.
internal/integration/benchmark_test.go (1)
77-345: Benchmark harness and backend selection look sound
The benchmark setup correctly:
- Exercises in-memory and on-disk badger backends, plus optional GCS/S3 backends gated on credentials.
- Uses the new plugin-based database.Config (BlobPlugin/DataDir/MetadataPlugin) consistently.
- Populates realistic block data and reuses it across iterations.
No issues spotted in the benchmark logic or resource cleanup.
database/plugin/blob/badger/options_test.go (1)
25-132: Option tests look good and cover key behaviors
These tests cleanly exercise each option and a combination case against BlobStoreBadger fields; they're straightforward and align with the option pattern introduced in options.go.
chain/chain_test.go (1)
109-115: dbConfig usage is aligned with the new plugin-based database config
Explicitly setting BlobPlugin: "badger" and MetadataPlugin: "sqlite" while leaving DataDir empty makes the plugin choice clear in tests and matches the new defaults; this looks good.
dingo.yaml.example (1)
12-56: Plugin-based database config structure looks consistent with implementation
The database.blob/database.metadata layout, plugin names (badger, gcs, s3, sqlite), and option keys (data-dir, block-cache-size, credentials-file, etc.) match the new plugin system and serve as a clear example config.
internal/integration/integration_test.go (1)
18-205: Good end-to-end coverage of plugin registration and config integration
These tests exercise plugin registration (GetPlugins/GetPlugin), descriptions, basic Start/Stop lifecycle, and loading plugin selections from YAML via config.LoadConfig, which should catch most regressions in the new plugin system wiring.
database/plugin/metadata/sqlite/database.go (2)
53-58: LGTM! Clean delegation pattern.
The New() function correctly delegates to NewWithOptions while maintaining backward compatibility. The functional options pattern is idiomatic and flexible.
60-72: Logger initialization fix confirmed.
The previous critical issue with nil writer panics has been correctly addressed. The default logger now safely writes to os.Stderr, and the initialization occurs after options are applied, allowing WithLogger to override if needed.
internal/config/config.go (6)
48-55: LGTM! Clear plugin defaults and sentinel error.
The default plugin constants align with the stated migration path (badger for blob, sqlite for metadata), and using a sentinel error for the list-requested flow is an appropriate pattern.
57-67: LGTM! Flexible configuration structure.
The tempConfig and databaseConfig types enable layered YAML loading and support both legacy flat configs and the new plugin-based structure. The use of map[any]any provides flexibility for YAML unmarshaling, though it requires careful type assertions downstream.
79-80: LGTM! Config struct updated for plugin system.
Replacing BadgerCacheSize with BlobPlugin and MetadataPlugin fields correctly reflects the shift to the plugin architecture. The YAML and envconfig tags are properly configured.
91-131: LGTM! Well-structured CLI argument parsing.
The method correctly integrates plugin selection flags, delegates plugin-specific option population to the registry, and handles the "list" special case appropriately. The use of flag.ExitOnError is standard for CLI applications.
150-151: LGTM! Defaults align with migration plan.
Setting BlobPlugin and MetadataPlugin to their respective defaults maintains backward compatibility while enabling the plugin system.
274-281: LGTM! Plugin environment variable processing.
The call to plugin.ProcessEnvVars() is properly positioned after standard environment processing, and error handling is appropriate.
Force-pushed from 7804a5f to 79efac9 (Compare)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/config/config.go (1)
69-89: Update documentation and/or code to clarify plugin selection environment variable naming
The inconsistency is confirmed. The code uses CARDANO_BLOB_PLUGIN and CARDANO_METADATA_PLUGIN for plugin selection (from envconfig.Process("cardano", ...)), but README.md documents DINGO_DATABASE_BLOB_PLUGIN and DINGO_DATABASE_METADATA_PLUGIN. The DINGO_DATABASE_* pattern is only used for plugin-specific options (via plugin.ProcessEnvVars()), not for plugin selection itself.
Choose one approach:
- Update README.md to document CARDANO_BLOB_PLUGIN and CARDANO_METADATA_PLUGIN for plugin selection, or
- Change envconfig.Process("cardano", ...) to use a "dingo_database" prefix and add explicit envconfig:"DINGO_DATABASE_BLOB_PLUGIN" tags on those fields to match the documented interface.
♻️ Duplicate comments (2)
database/plugin/metadata/sqlite/plugin.go (2)
23-25: Missing RWMutex protection for cmdlineOptions.
As confirmed in the previous review, the sqlite plugin lacks the cmdlineOptionsMutex that should protect concurrent access to cmdlineOptions. The dataDir field is written during flag parsing (line 41 via Dest pointer), but there's no Lock() wrapper protecting that write, while NewFromCmdlineOptions uses RLock() (would use if mutex existed). The badger plugin demonstrates the correct pattern with initCmdlineOptions() using Lock()/Unlock().
48-62: Panic on error converts config issues into crashes.
As confirmed in the previous review, NewWithOptions can fail with configuration errors (bad dataDir, gorm failures, etc.), and panicking here (lines 56-59) turns user misconfiguration into an unrecoverable crash. The comment at lines 57-58 is misleading: NewWithOptions does validate and can fail. The error should be returned through the plugin lifecycle instead.
Additionally, line 49 reads cmdlineOptions.dataDir without any lock protection (no RLock() since no mutex exists).
🧹 Nitpick comments (7)
internal/integration/benchmark_test.go (2)
85-221: Consider extracting backend configuration to reduce duplication.The backend configuration logic (lines 88-145 and 157-214) is duplicated between
BenchmarkStorageBackendsandBenchmarkTestLoad. Consider extracting a helper function that returns the backend slice:func getTestBackends(b *testing.B, diskDataDir string) []struct{ name string config *database.Config } { backends := []struct { name string config *database.Config }{ { name: "memory", config: &database.Config{ BlobPlugin: "badger", DataDir: "", MetadataPlugin: "sqlite", }, }, { name: "disk", config: &database.Config{ BlobPlugin: "badger", DataDir: diskDataDir, MetadataPlugin: "sqlite", }, }, } // Add cloud backends if credentials are available if hasGCSCredentials() { testBucket := os.Getenv("DINGO_TEST_GCS_BUCKET") if testBucket == "" { testBucket = "dingo-test-bucket" } backends = append(backends, struct { name string config *database.Config }{ name: "GCS", config: &database.Config{ BlobPlugin: "gcs", DataDir: "gcs://" + testBucket, MetadataPlugin: "sqlite", }, }) } if hasS3Credentials() { testBucket := os.Getenv("DINGO_TEST_S3_BUCKET") if testBucket == "" { testBucket = "dingo-test-bucket" } backends = append(backends, struct { name string config *database.Config }{ name: "S3", config: &database.Config{ BlobPlugin: "s3", DataDir: "s3://" + testBucket, MetadataPlugin: "sqlite", }, }) } return backends }Then use it in both benchmark functions:
backends := getTestBackends(b, b.TempDir())
223-287: Simplify temporary directory handling.
The current code creates temp directories in two places:
- Line 104 (in caller): config.DataDir = b.TempDir()
- Lines 232-240 (in helper): creates a new temp dir with os.MkdirTemp and overwrites config.DataDir
b.TempDir()from the caller:func benchmarkStorageBackend( b *testing.B, backendName string, config *database.Config, ) { - var tempDir string - var err error - // Create temporary directory for on-disk local backends - if config.BlobPlugin == "badger" && config.DataDir != "" { - tempDir, err = os.MkdirTemp( - "", - fmt.Sprintf("dingo-bench-%s-", backendName), - ) - if err != nil { - b.Fatalf("failed to create temp dir: %v", err) - } - defer os.RemoveAll(tempDir) - config.DataDir = filepath.Join(tempDir, "data") - } - // Create database with the specified backend - db, err := database.New(config) + db, err := database.New(config) if err != nil {The Badger backend will create any necessary subdirectories automatically. This approach:
- Eliminates the unused first temp directory
- Avoids mutating the config parameter
- Leverages Go's testing framework cleanup (b.TempDir() is auto-cleaned)
Apply the same change to benchmarkTestLoad (lines 297-307).
database/plugin/blob/aws/database.go (1)
171-203: Consider making the timeout configurable and ensure cleanup on error.
The Start() implementation is well-structured with proper validation and error handling. Two minor observations:
Hard-coded timeout (line 178): The 30-second timeout for AWS config loading is hard-coded. Consider making this configurable via a WithTimeout option or environment variable for environments with slower network/authentication.
Context cleanup on error (lines 198-201): If init() fails, the local cancel() is called (line 199), but d.startupCancel may still reference the same function without being nil'd out. While this isn't a functional issue (the plugin won't be used after Start() fails), for defensive programming consider:
if err := d.init(); err != nil {
	cancel()
+	d.startupCancel = nil
	return err
}
82-102: Consider returning errors instead of panicking.
Lines 90 and 99 panic on errors that may stem from user misconfiguration:
- validateCredentials can fail if the credentials file path is invalid or unreadable.
- NewWithOptions can fail if the bucket is empty or GCS client creation fails (per database.go:67-118).
Panicking here turns configuration errors into unrecoverable crashes. Consider deferring validation to the Start() method and returning errors through the plugin lifecycle to allow graceful error handling.
136-163: Unify Close and Stop and make close idempotent
Right now Close assumes d.client is non-nil and Stop duplicates close logic with a nil check. This can lead to inconsistent behavior or double-close if both are called. Consider making Close safe/idempotent and delegating Stop to it:
func (d *BlobStoreGCS) Close() error {
-	return d.client.Close()
+	if d.client == nil {
+		return nil
+	}
+	err := d.client.Close()
+	d.client = nil
+	return err
}
@@
func (d *BlobStoreGCS) Stop() error {
-	if d.client != nil {
-		return d.client.Close()
-	}
-	return nil
+	return d.Close()
}
This keeps the lifecycle behavior in one place and avoids surprises if callers mix Stop and Close.
101-137: Ensure GC goroutine exits cleanly when the store is stopped
init starts blobGc with go d.blobGc(d.gcTicker), and Close/Stop call d.gcTicker.Stop(), but blobGc does for range t.C. Since time.Ticker.Stop() does not close t.C, the GC goroutine will block forever after shutdown. It's only one goroutine per instance, but it's effectively leaked across restarts/tests.
Consider adding an explicit stop signal, e.g.:
type BlobStoreBadger struct {
@@
-	gcTicker *time.Ticker
+	gcTicker *time.Ticker
+	gcStopCh chan struct{}
@@
func (d *BlobStoreBadger) init() error {
@@
	if d.gcEnabled {
-		d.gcTicker = time.NewTicker(5 * time.Minute)
-		go d.blobGc(d.gcTicker)
+		d.gcTicker = time.NewTicker(5 * time.Minute)
+		d.gcStopCh = make(chan struct{})
+		go d.blobGc(d.gcTicker, d.gcStopCh)
	}
@@
-func (d *BlobStoreBadger) blobGc(t *time.Ticker) {
-	for range t.C {
+func (d *BlobStoreBadger) blobGc(t *time.Ticker, stop <-chan struct{}) {
+	for {
+		select {
+		case <-t.C:
	again:
		err := d.DB().RunValueLogGC(0.5)
@@
-	}
+		case <-stop:
+			return
+		}
	}
}
@@
func (d *BlobStoreBadger) Close() error {
-	// Stop GC ticker if it exists
-	if d.gcTicker != nil {
-		d.gcTicker.Stop()
-	}
+	if d.gcTicker != nil {
+		d.gcTicker.Stop()
+		if d.gcStopCh != nil {
+			close(d.gcStopCh)
+			d.gcStopCh = nil
+		}
+		d.gcTicker = nil
+	}
This way the GC goroutine terminates when the plugin is stopped, which is friendlier for long-running processes and tests that may create/destroy stores multiple times.
Also applies to: 149-157
internal/config/config.go (1)
57-67: Clarify/adjust precedence when both top-level and database.* plugin configs are present
LoadConfig builds pluginConfig from:
- top-level blob/metadata sections (via tempCfg.Blob/tempCfg.Metadata), and
- database.blob/database.metadata if present, where the latter reconstructs a map and then overwrites any existing entry:
if tempCfg.Blob != nil {
	pluginConfig["blob"] = tempCfg.Blob
}
...
if tempCfg.Database != nil {
	if tempCfg.Database.Blob != nil {
		...
		pluginConfig["blob"] = blobConfig
	}
}
If a user writes:
database:
  blob:
    plugin: gcs
blob:
  gcs:
    bucket: my-bucket
then database.blob.plugin sets BlobPlugin = "gcs", but because database.blob has no per-plugin maps, blobConfig ends up empty and replaces the non-empty top-level blob section; plugin options like bucket are silently dropped.
If the intended rule is "database.* sections fully replace top-level sections", this is fine but should be clearly documented. If instead you want database.blob/database.metadata to override or extend top-level blocks, consider merging instead of wholesale replacement, e.g.:
- Start from any existing pluginConfig["blob"], then overlay entries from database.blob.
- Only replace when database.blob actually contains per-plugin maps, not when it only sets plugin.
Right now this corner case can surprise users migrating configs that mix the two layouts.
Also applies to: 176-207, 208-273
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (40)
PLUGIN_DEVELOPMENT.md(1 hunks)README.md(1 hunks)chain/chain_test.go(1 hunks)cmd/dingo/main.go(3 hunks)config.go(0 hunks)database/database.go(1 hunks)database/database_test.go(1 hunks)database/plugin/blob/aws/database.go(4 hunks)database/plugin/blob/aws/options.go(1 hunks)database/plugin/blob/aws/plugin.go(1 hunks)database/plugin/blob/aws/plugin_test.go(1 hunks)database/plugin/blob/badger/database.go(4 hunks)database/plugin/blob/badger/options.go(1 hunks)database/plugin/blob/badger/options_test.go(1 hunks)database/plugin/blob/badger/plugin.go(1 hunks)database/plugin/blob/blob.go(1 hunks)database/plugin/blob/gcs/database.go(4 hunks)database/plugin/blob/gcs/options.go(1 hunks)database/plugin/blob/gcs/options_test.go(1 hunks)database/plugin/blob/gcs/plugin.go(1 hunks)database/plugin/blob/gcs/plugin_test.go(1 hunks)database/plugin/blob/store.go(2 hunks)database/plugin/metadata/sqlite/database.go(2 hunks)database/plugin/metadata/sqlite/options.go(1 hunks)database/plugin/metadata/sqlite/options_test.go(1 hunks)database/plugin/metadata/sqlite/plugin.go(1 hunks)database/plugin/metadata/store.go(2 hunks)database/plugin/option.go(5 hunks)database/plugin/plugin.go(1 hunks)database/plugin/register.go(3 hunks)database/plugin/register_test.go(1 hunks)dingo.yaml.example(1 hunks)go.mod(2 hunks)internal/config/config.go(6 hunks)internal/config/config_test.go(1 hunks)internal/integration/benchmark_test.go(1 hunks)internal/integration/integration_test.go(1 hunks)internal/node/load.go(1 hunks)internal/node/node.go(0 hunks)node.go(1 hunks)
💤 Files with no reviewable changes (2)
- config.go
- internal/node/node.go
🚧 Files skipped from review as they are similar to previous changes (16)
- internal/node/load.go
- database/plugin/metadata/sqlite/options_test.go
- node.go
- database/plugin/register_test.go
- database/plugin/blob/badger/options_test.go
- chain/chain_test.go
- cmd/dingo/main.go
- database/plugin/register.go
- database/plugin/blob/gcs/options.go
- database/database_test.go
- README.md
- database/plugin/blob/aws/plugin_test.go
- database/plugin/blob/gcs/plugin_test.go
- PLUGIN_DEVELOPMENT.md
- database/plugin/blob/aws/options.go
- internal/config/config_test.go
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-11-20T19:58:38.889Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T19:58:38.889Z
Learning: In Go, test dependencies (such as github.com/stretchr/testify) are correctly marked as direct dependencies in go.mod, not as indirect dependencies, even when they are only used in test files. This is a standard practice in Go's module system.
Applied to files:
go.mod
📚 Learning: 2025-11-20T02:07:07.073Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T02:07:07.073Z
Learning: In the blinklabs-io/dingo repository, range over int is valid Go syntax. The repository uses Go 1.24.0 (minimum Go 1.23), and range over int has been supported since Go 1.22. This syntax should not be flagged as invalid.
Applied to files:
database/plugin/blob/gcs/database.go
🧬 Code graph analysis (15)
database/database.go (2)
  database/plugin/blob/store.go (1): New (35-52)
  database/plugin/metadata/store.go (1): New (154-171)
database/plugin/plugin.go (1)
  database/plugin/register.go (3): PluginType (24-24), GetPlugin (111-120), PluginTypeName (31-40)
database/plugin/metadata/sqlite/options.go (4)
  database/plugin/metadata/sqlite/database.go (1): MetadataStoreSqlite (37-45)
  database/plugin/blob/aws/options.go (2): WithLogger (26-30), WithPromRegistry (33-39)
  config.go (1): WithLogger (156-160)
  database/plugin/log.go (1): Logger (18-27)
database/plugin/metadata/sqlite/database.go (4)
  database/plugin/metadata/sqlite/options.go (1): SqliteOptionFunc (23-23)
  database/plugin/metadata/store.go (1): New (154-171)
  database/plugin/metadata/sqlite/commit_timestamp.go (2): CommitTimestamp (29-32), CommitTimestamp (34-36)
  database/models/models.go (1): MigrateModels (18-51)
database/plugin/blob/store.go (2)
  database/plugin/plugin.go (1): StartPlugin (25-47)
  database/plugin/register.go (1): PluginTypeBlob (28-28)
database/plugin/metadata/store.go (3)
  database/plugin/blob/store.go (1): New (35-52)
  database/plugin/plugin.go (1): StartPlugin (25-47)
  database/plugin/register.go (1): PluginTypeMetadata (27-27)
internal/integration/integration_test.go (7)
  database/plugin/register.go (3): GetPlugins (101-109), GetPlugin (111-120), PluginEntry (42-48)
  internal/config/config.go (1): LoadConfig (156-304)
  database/plugin/blob/aws/database.go (1): NewWithOptions (85-100)
  database/plugin/blob/gcs/database.go (1): NewWithOptions (68-119)
  database/plugin/metadata/sqlite/database.go (1): NewWithOptions (61-76)
  database/plugin/blob/aws/options.go (1): WithBucket (42-46)
  database/plugin/blob/gcs/options.go (1): WithBucket (42-46)
database/plugin/blob/gcs/plugin.go (5)
  database/plugin/register.go (2): Register (52-54), PluginEntry (42-48)
  database/plugin/option.go (2): PluginOption (35-43), PluginOptionTypeString (29-29)
  database/plugin/plugin.go (1): Plugin (19-22)
  database/plugin/blob/gcs/options.go (3): BlobStoreGCSOptionFunc (23-23), WithBucket (42-46), WithCredentialsFile (49-53)
  database/plugin/blob/gcs/database.go (1): NewWithOptions (68-119)
database/plugin/blob/aws/plugin.go (5)
  database/plugin/register.go (2): Register (52-54), PluginEntry (42-48)
  database/plugin/blob/gcs/plugin.go (1): NewFromCmdlineOptions (82-102)
  database/plugin/option.go (2): PluginOption (35-43), PluginOptionTypeString (29-29)
  database/plugin/blob/aws/options.go (4): BlobStoreS3OptionFunc (23-23), WithBucket (42-46), WithRegion (49-53), WithPrefix (56-60)
  database/plugin/blob/aws/database.go (1): NewWithOptions (85-100)
internal/integration/benchmark_test.go (7)
  database/database.go (2): New (116-156), Config (34-40)
  database/plugin/blob/aws/database.go (1): New (45-82)
  database/plugin/blob/badger/database.go (1): New (44-99)
  database/plugin/blob/gcs/database.go (1): New (44-65)
  database/plugin/blob/store.go (1): New (35-52)
  internal/config/config.go (1): Config (69-89)
  config.go (1): Config (33-54)
database/plugin/blob/badger/options.go (2)
  database/plugin/blob/badger/database.go (1): BlobStoreBadger (32-41)
  config.go (1): WithLogger (156-160)
internal/config/config.go (4)
  database/plugin/blob/gcs/database.go (1): New (44-65)
  database/plugin/blob/store.go (1): New (35-52)
  database/plugin/metadata/store.go (1): New (154-171)
  database/plugin/register.go (5): GetPlugins (101-109), PluginTypeBlob (28-28), PluginTypeMetadata (27-27), ProcessConfig (84-99), ProcessEnvVars (67-82)
database/plugin/blob/gcs/options_test.go (2)
  database/plugin/blob/gcs/database.go (2): BlobStoreGCS (32-41), New (44-65)
  database/plugin/blob/gcs/options.go (4): WithLogger (26-30), WithPromRegistry (33-39), WithBucket (42-46), WithCredentialsFile (49-53)
database/plugin/blob/aws/database.go (2)
  database/plugin/blob/aws/options.go (5): WithBucket (42-46), WithPrefix (56-60), WithLogger (26-30), WithPromRegistry (33-39), BlobStoreS3OptionFunc (23-23)
  database/plugin/blob/aws/logger.go (1): NewS3Logger (28-33)
database/plugin/blob/badger/database.go (4)
  database/plugin/blob/badger/options.go (4): BlobStoreBadgerOptionFunc (23-23), WithLogger (26-30), WithBlockCacheSize (49-53), WithIndexCacheSize (56-60)
  database/plugin/blob/badger/plugin.go (2): DefaultBlockCacheSize (25-25), DefaultIndexCacheSize (26-26)
  config.go (1): WithLogger (156-160)
  database/plugin/blob/badger/logger.go (1): NewBadgerLogger (28-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: docker (ubuntu-latest, amd64)
- GitHub Check: nilaway
- GitHub Check: Analyze (go)
🔇 Additional comments (36)
internal/integration/benchmark_test.go (2)
1-16: LGTM!The imports are well-organized, with blank imports correctly used for plugin registration side effects.
18-83: LGTM!The function correctly handles edge cases:
- Returns an error if no blocks are found (lines 66-68)
- Duplicates blocks when testdata has fewer than needed
- Properly closes the iterator
database/database.go (1)
133-150: Plugin configuration is working as designed via command-line flags and defaults.
The plugins receive DataDir through command-line flag options parsed into the global cmdlineOptions struct. The Logger and PromRegistry are intentionally set to nil when plugins initialize (via NewFromCmdlineOptions()), after which plugins use sensible defaults: discarding logs and foregoing metrics registration. This is a valid loose-coupling design where plugins are instantiated minimally and manage their own initialization.
18-18: LGTM!The blank import correctly triggers plugin registration for the AWS S3 blob plugin via its
init()function, following the established pattern used for other plugins.database/plugin/blob/aws/database.go (3)
84-100: LGTM! Clean option-based constructor.
NewWithOptionscorrectly defers all validation toStart(), keeping the constructor simple and side-effect-free. The default logger setup ensures the instance is always usable.
45-82: LGTM! Proper dataDir parsing and validation.The
New()function correctly parses thes3://bucket/prefixformat, validates the input, and delegates toNewWithOptions. The validation appropriately catches malformed inputs before any AWS operations.
205-209: LGTM! Appropriate no-op for S3 client.The
Stop()implementation is correct—the AWS SDK v2 S3 client doesn't require explicit resource cleanup.database/plugin/blob/aws/plugin.go (1)
74-79: <!-- [scratchpad_start]
[task verification_1/1][observations]
- AWS NewWithOptions (lines 85-100 in database/plugin/blob/aws/database.go) has signature returning (*BlobStoreS3, error)
- Function body (lines 86-100) only applies options and sets defaults; explicitly returns db, nil
- Comment at line 98 states: "AWS config loading and validation moved to Start()"
- GCS version shows validation CAN happen in NewWithOptions (bucket name check, client creation at lines 84-107)
- SQLite version also defers validation, returns db, nil
- No error return paths in current AWS NewWithOptions implementation
[analysis]
- AWS NewWithOptions currently never returns an error in practice (always returns nil)
- The panic in plugin.go lines 74-79 is unreachable dead code in current state
- Error return type exists but is unused, suggesting future-proofing or consistency with other implementations
- Review comment's premise (can NewWithOptions fail?) is technically false for current code
- However, code structure is sound: function signature allows for future validation without changing callers
[pending]
- None; verification is complete
[done]
Script execution confirms NewWithOptions in AWS implementation always returns nil error. Panic is defensive/dead code. Review comment's refactoring suggestion for message clarity is still valid but predicated on impossible error condition.
[scratchpad_end]
Clarify or remove the unreachable panic in
NewWithOptionserror handling.Verification confirms that
NewWithOptionsin the AWS plugin (lines 85-100 ofdatabase/plugin/blob/aws/database.go) never returns an error—it always returns(db, nil). The panic at lines 74-79 ofplugin.gois unreachable dead code.If the error return type is intentional for future extensibility, either:
- Remove the panic and assume no error occurs, or
- Update the panic message to reflect that the error return is reserved for future use
If you want defensive clarity, the suggested message improvement is still helpful:
- // NewWithOptions should not fail for configuration issues - // Validation happens in Start() - panic(fmt.Sprintf("unexpected error creating AWS plugin: %v", err)) + // NewWithOptions should never fail; if it does, it's a programming error + // All user configuration validation happens in Start() + panic(fmt.Sprintf("programming error in AWS plugin initialization: %v", err))go.mod (1)
24-35: Directly requiringpflagandgoogle.golang.org/apiis correctThese are used directly in the codebase (CLI/options and cloud integrations), so promoting them to direct requirements keeps
go.modaccurate without changing behavior.database/plugin/plugin.go (1)
17-47: StartPlugin helper cleanly centralizes plugin lookup and startupThe helper wraps registry lookup and Start() with clear, contextual errors and keeps call sites simple. This looks good and aligns with the registry-based design.
database/plugin/blob/store.go (1)
17-52: BlobNew(pluginName)wiring via registry looks correctUsing
plugin.StartPluginplus aBlobStoretype assertion cleanly enforces the expected interface and surfaces configuration/registration mistakes with clear errors. This is a solid refactor from hard‑coded wiring.dingo.yaml.example (1)
12-56: Plugin-based database config layout looks consistentThe
database.blob/database.metadatalayout and per‑plugin option blocks line up with the new plugin system and option names. Just ensure these documented defaults (especiallygc: truefor badger) stay in sync with the corresponding option defaults indatabase/plugin/blob/badger/plugin.goso users don’t see surprises when omitting explicit config.database/plugin/blob/gcs/options_test.go (1)
1-75: Option tests cover the essential GCS fieldsEach option is exercised against a fresh
BlobStoreGCSand validated for the expected field mutation; this is sufficient and straightforward coverage for the option helpers.database/plugin/metadata/sqlite/options.go (3)
23-30: LGTM!The functional options pattern is correctly implemented.
SqliteOptionFuncandWithLoggerfollow the established pattern seen in other plugins.
32-39: LGTM!
WithPromRegistrycorrectly implements the functional options pattern and aligns with the existing codebase conventions.
41-46: LGTM!
WithDataDiris correctly implemented following the functional options pattern.database/plugin/blob/gcs/plugin.go (2)
25-44: LGTM!The
validateCredentialsfunction correctly handles both ADC fallback (empty string) and validates file existence and readability. The permission check logic at line 37 is correct.
46-52: LGTM!The
cmdlineOptionsstruct andcmdlineOptionsMutexare properly defined. This follows the correct pattern for protecting shared state.database/plugin/metadata/sqlite/database.go (4)
47-58: LGTM!

The `New()` function correctly wraps `NewWithOptions` using the functional options pattern, maintaining backward compatibility while adopting the new plugin architecture.

60-76: LGTM! Past issue resolved.

The default logger initialization (line 71) now correctly uses `os.Stderr` instead of `nil`, preventing the panic identified in the previous review. Database initialization is appropriately deferred to `Start()`, keeping `NewWithOptions` side-effect-free.

139-203: LGTM!

The `Start()` method correctly implements deferred initialization with proper error handling. Database setup errors are returned rather than causing panics, allowing graceful error propagation through the plugin lifecycle. The separation of in-memory vs. persistent storage paths is clear and well-structured.

205-208: LGTM!

The `Stop()` method correctly delegates to `Close()`, implementing the plugin lifecycle interface while reusing existing cleanup logic.

database/plugin/metadata/store.go (1)

153-171: LGTM!

The `New()` function correctly implements the plugin-based approach. The simplified signature (only `pluginName`) delegates configuration to the plugin system, and the type assertion ensures interface compliance. This aligns with the documented breaking changes in the PR.

database/plugin/option.go (3)
45-88: LGTM!

The switch from standard `flag` to `pflag` is correctly implemented. The widening of `PluginOptionTypeUint` from `uint` to `uint64` is applied consistently across the flag binding (lines 74-79), with matching type assertions and defaults.

127-132: LGTM!

The environment variable processing for `PluginOptionTypeUint` correctly uses `uint64` throughout, consistent with the `AddToFlagSet` changes.

171-182: LGTM!

`ProcessConfig` uint handling correctly supports both `int` (from YAML parsing) and `uint64` types, with appropriate negative value validation. The error message accurately reflects accepted types.

internal/integration/integration_test.go (6)
3-14: LGTM!

The imports are correct. Each package is imported once: `aws` and `gcs` as named imports (used at lines 237, 264), and `badger` and `sqlite` as blank imports for registration side effects. No duplicate imports present.

16-51: LGTM!

The integration test correctly verifies plugin registration, retrieval, and basic lifecycle (Start/Stop). The use of `defer` for cleanup ensures plugins are stopped even if assertions fail.

53-100: LGTM!

The test validates that all plugins have non-empty descriptions and verifies specific description text for badger and sqlite plugins, ensuring consistent metadata across the plugin system.

102-202: LGTM!

Both tests provide valuable coverage:

- `TestPluginLifecycleEndToEnd` validates the complete registration→instantiation→lifecycle flow.
- `TestPluginConfigurationIntegration` verifies end-to-end config loading and plugin selection from YAML.

The tests appropriately exercise the plugin system's core functionality.

204-276: LGTM!

The cloud plugin tests appropriately:

- Skip when credentials are unavailable (avoiding false failures).
- Use `NewWithOptions` directly with test configuration.
- Verify basic lifecycle for GCS and S3 backends.

The `TestPluginSwitching` test validates that the registry supports multiple plugins of the same type.

278-341: LGTM!

The helper functions provide appropriate credential detection for GCS and S3, checking standard locations and environment variables. The `findPluginEntry` helper is a straightforward utility for test assertions.

database/plugin/blob/badger/options.go (1)
23-67: LGTM!

All six option constructors (`WithLogger`, `WithPromRegistry`, `WithDataDir`, `WithBlockCacheSize`, `WithIndexCacheSize`, `WithGc`) correctly implement the functional options pattern. Each sets the appropriate field on `BlobStoreBadger`, and types align with the struct definition.

database/plugin/blob/gcs/database.go (1)

31-41: Option-based constructor and credentials wiring look solid

The New → NewWithOptions flow, bucket validation, default logger, and use of `credentialsFile` via `option.WithCredentialsFile` all look correct and align with the plugin/option pattern used elsewhere. I don't see functional issues in this construction path.

Also applies to: 60-119

database/plugin/blob/badger/database.go (1)

31-40: New option-based constructor and GC defaults look correct

The switch to `New(opts ...BlobStoreBadgerOptionFunc)` with defaults for `gcEnabled`, `blockCacheSize`, and `indexCacheSize` looks good and ensures `WithGc(false)` can actually disable GC while keeping it enabled by default. Directory handling and in-memory vs on-disk options are also wired sensibly.

Also applies to: 44-99
internal/config/config.go (1)

48-56: Review comment is incorrect: ParseCmdlineArgs is unused dead code

The review criticizes `ParseCmdlineArgs` in `internal/config/config.go`, but that function is never called. The actual CLI argument handling uses Cobra in `cmd/dingo/main.go` (lines 119-121, 146-150), which implements precedence differently:

```go
// Load config first
cfg, err := config.LoadConfig(configFile)

// Then override only if flag was NOT the default
if blobPlugin != config.DefaultBlobPlugin {
	cfg.BlobPlugin = blobPlugin
}
```

`ParseCmdlineArgs` appears to be dead code from earlier development. The current implementation correctly handles precedence (defaults < YAML/env < CLI), though it uses a flag-value-comparison check rather than `flags.Changed()`.

Recommendation: Remove or deprecate `ParseCmdlineArgs` since it's not used. The existing Cobra-based approach in `cmd/dingo/main.go` is the active code path.

Likely an incorrect or invalid review comment.
```go
// Register plugin
func init() {
	plugin.Register(
		plugin.PluginEntry{
			Type:               plugin.PluginTypeBlob,
			Name:               "gcs",
			Description:        "Google Cloud Storage blob store",
			NewFromOptionsFunc: NewFromCmdlineOptions,
			Options: []plugin.PluginOption{
				{
					Name:         "bucket",
					Type:         plugin.PluginOptionTypeString,
					Description:  "GCS bucket name",
					DefaultValue: "",
					Dest:         &(cmdlineOptions.bucket),
				},
				{
					Name:         "credentials-file",
					Type:         plugin.PluginOptionTypeString,
					Description:  "Path to GCS service account credentials file",
					DefaultValue: "",
					Dest:         &(cmdlineOptions.credentialsFile),
				},
			},
		},
	)
}
```
🧩 Analysis chain
Verify mutex protection during flag parsing.
While cmdlineOptionsMutex is defined, the Dest pointers (lines 68, 75) allow direct writes to cmdlineOptions fields during flag parsing. These writes need to be protected with Lock()/Unlock(), similar to the badger plugin's initCmdlineOptions() pattern. Verify that flag parsing happens under the mutex lock to prevent races with NewFromCmdlineOptions reads.
🏁 Script executed:
#!/bin/bash
# Search for initCmdlineOptions or Lock() calls in GCS plugin files
rg -n "cmdlineOptionsMutex\.(Lock|Unlock)" database/plugin/blob/gcs/ --type go -C 3

Length of output: 0
I need to gather more information to verify the mutex protection concern. Let me search for additional context.
🏁 Script executed:
#!/bin/bash
# Search for flag parsing code and initCmdlineOptions in GCS plugin
rg -n "flag\." database/plugin/blob/gcs/ --type go -C 3

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for initCmdlineOptions function in GCS plugin
rg -n "initCmdlineOptions\|init\s*\(" database/plugin/blob/gcs/ --type go -C 5

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check badger plugin for comparison - look at mutex handling pattern
rg -n "cmdlineOptionsMutex\.(Lock|Unlock)" database/plugin/blob/badger/ --type go -C 3

Length of output: 593
🏁 Script executed:
#!/bin/bash
# Search for NewFromCmdlineOptions in GCS to understand when cmdlineOptions is read
rg -n "NewFromCmdlineOptions" database/plugin/blob/gcs/ --type go -A 10

Length of output: 1474
I need to understand the plugin framework's flag parsing mechanism. Let me search for where Dest pointers are written.
🏁 Script executed:
#!/bin/bash
# Search for where plugin system processes Dest pointers
find database/plugin -name "*.go" -type f | head -20

Length of output: 779
🏁 Script executed:
#!/bin/bash
# Look for the plugin package/registry implementation
rg -n "type PluginOption\|Dest" database/plugin/ --type go -B 2 -A 2 | head -50

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for where Dest is actually written to (flag assignments)
rg -n "\.Dest\s*=" database/plugin/ --type go -C 3

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if there's an initCmdlineOptions pattern in OTHER blob plugins
fd "plugin.go" database/plugin/blob --exec grep -l "initCmdlineOptions" {} \;

Length of output: 37
🏁 Script executed:
#!/bin/bash
# Check AWS plugin for comparison - does it have initCmdlineOptions?
rg -n "initCmdlineOptions\|cmdlineOptionsMutex" database/plugin/blob/aws/ --type go

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Find cmdlineOptions definition and structure in GCS
rg -n "var cmdlineOptions\|type.*Options.*struct" database/plugin/blob/gcs/ --type go -A 10

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search plugin registry code (likely at database/plugin level)
rg -n "type PluginOption\|\.Dest" database/plugin --type go --max-depth 2 -A 3 -B 1

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if there's plugin registration framework code
find database/plugin -maxdepth 1 -name "*.go" -type f

Length of output: 135
🏁 Script executed:
#!/bin/bash
# Examine plugin option.go to understand PluginOption structure
cat database/plugin/option.go

Length of output: 4781
🏁 Script executed:
#!/bin/bash
# Examine plugin register.go to see how Dest pointers are handled
cat database/plugin/register.go

Length of output: 2805
🏁 Script executed:
#!/bin/bash
# Find cmdlineOptions definition in GCS with simpler grep
cat database/plugin/blob/gcs/plugin.go | grep -n "cmdlineOptions\|var \|const " | head -20

Length of output: 334
🏁 Script executed:
#!/bin/bash
# Find where PopulateCmdlineOptions, ProcessEnvVars, ProcessConfig are called
rg -n "PopulateCmdlineOptions\|ProcessEnvVars\|ProcessConfig" --type go -B 2 -A 2 | grep -v "^database/plugin/" | head -40

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if these calls happen in main/init or runtime code
rg -n "PopulateCmdlineOptions\|ProcessEnvVars\|ProcessConfig" --type go | grep -v "^database/plugin/option\|^database/plugin/register"

Length of output: 0
🏁 Script executed:
#!/bin/bash
# See the full GCS cmdlineOptions structure definition
cat database/plugin/blob/gcs/plugin.go | head -80

Length of output: 2109
Add mutex protection for all writes to cmdlineOptions fields.
The race condition is confirmed. While NewFromCmdlineOptions() protects reads with RLock(), the Dest pointers allow writes through multiple unprotected paths:
- `option.ProcessEnvVars()` writes directly without lock
- `option.ProcessConfig()` writes directly without lock
- `pflag` writes via `StringVar()` during flag parsing without lock
This creates a classic read-write race. The badger plugin demonstrates the correct pattern: implement initCmdlineOptions() that wraps all write operations with cmdlineOptionsMutex.Lock(), similar to badger's lines 40-45.
79efac9 to
a604363
Compare
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
database/plugin/blob/badger/database.go (1)
44-100: Synchronize GC goroutine shutdown with `Close()` to prevent race with `DB.Close()`

The current code has a real concurrency issue. When `Close()` is called, it stops the ticker and closes `gcStopCh`, but it does not wait for the `blobGc` goroutine to finish. If `blobGc` is in the middle of calling `RunValueLogGC()`, the goroutine may still be executing when `db.Close()` is called. BadgerDB's documentation confirms that `RunValueLogGC` returns `ErrRejected` if the DB has been closed, and Close cannot be called concurrently with GC.

To fix this:

- Add `gcWg sync.WaitGroup` to the `BlobStoreBadger` struct
- Call `gcWg.Add(1)` before spawning `go d.blobGc(...)` at line ~118
- Call `gcWg.Done()` before returning from `blobGc()`
- In `Close()` (line 157), after closing `gcStopCh`, call `gcWg.Wait()` before `db.Close()`

This ensures the GC goroutine exits completely before the database is closed.
♻️ Duplicate comments (3)
database/plugin/blob/gcs/plugin.go (1)
54-80: Mutex protection for cmdlineOptions writes still needed.

The past review correctly identified that `Dest` pointers (lines 68, 75) allow unprotected writes to `cmdlineOptions` during flag parsing, env var processing, and config processing, while `NewFromCmdlineOptions` uses `RLock()` for reads. This creates a read-write race condition. The badger plugin demonstrates the correct pattern with `initCmdlineOptions()` using `Lock()`/`Unlock()` to protect all writes.

database/plugin/metadata/sqlite/plugin.go (2)

31-36: initCmdlineOptions provides partial protection only.

While `initCmdlineOptions` correctly uses `Lock()`/`Unlock()` for initialization (lines 32-34), this only protects the initial default assignment. The `Dest` pointer at line 53 still allows unprotected writes during flag parsing, environment variable processing, and config file processing, creating a race with the `RLock()` read in `NewFromCmdlineOptions` (lines 61-63).

60-76: Panic on configuration errors makes the system fragile.

As noted in the past review, `NewFromCmdlineOptions` panics when `NewWithOptions` fails (line 73). Since `NewWithOptions` can fail due to various configuration issues (invalid `dataDir`, `gorm.Open()` failures, filesystem errors per database.go:78-106), this converts user misconfiguration into an unrecoverable process crash rather than allowing errors to propagate through the plugin lifecycle.
🧹 Nitpick comments (5)
database/plugin/blob/aws/plugin.go (1)
81-85: Reconsider panic for constructor errors.

While the comment explains that validation happens in `Start()`, the panic at line 84 will crash the process if `NewWithOptions()` returns an error. If `NewWithOptions()` can fail due to internal errors (not just validation), consider whether those failures should surface as regular errors through the plugin lifecycle instead.

However, if `NewWithOptions()` is truly side-effect-free and any failure is unexpected (e.g., a programming error), the panic might be appropriate.

database/plugin/blob/badger/plugin.go (1)

94-108: Consider graceful error handling instead of panic.

`NewFromCmdlineOptions` panics if `New()` fails (line 105). Configuration errors (invalid data directory, disk issues, insufficient permissions) will crash the process rather than allowing graceful degradation or error reporting through the plugin lifecycle. Consider returning the error or moving validation to `Start()`, consistent with the pattern suggested in the sqlite plugin review.

Example approach:

```diff
 func NewFromCmdlineOptions() plugin.Plugin {
 	cmdlineOptionsMutex.RLock()
 	opts := []BlobStoreBadgerOptionFunc{
 		WithDataDir(cmdlineOptions.dataDir),
 		WithBlockCacheSize(cmdlineOptions.blockCacheSize),
 		WithIndexCacheSize(cmdlineOptions.indexCacheSize),
 		WithGc(cmdlineOptions.gcEnabled),
 	}
 	cmdlineOptionsMutex.RUnlock()
 	p, err := New(opts...)
 	if err != nil {
-		panic(err)
+		// Return a plugin that defers the error to Start()
+		return &errorPlugin{err: err}
 	}
 	return p
 }
```

Alternatively, ensure `New()` never fails during construction, moving all I/O and validation to the `Start()` method.

database/plugin/blob/gcs/plugin.go (1)

82-98: Consider graceful error handling instead of panic.

Similar to the badger plugin, `NewFromCmdlineOptions` panics if `NewWithOptions()` fails (line 95). Configuration errors (missing bucket, invalid credentials) will crash the process. Consider returning errors through the plugin lifecycle or deferring validation to `Start()` for more resilient error handling.
16-151: Good end-to-end plugin coverage; optionally check `Stop` errors

The integration tests do a nice job exercising registration, descriptions, lifecycle, and config wiring for blob/metadata plugins, plus basic GCS/S3 startup when credentials are available.

One small improvement: in `TestPluginSystemIntegration`, `TestPluginLifecycleEndToEnd`, `TestCloudPluginGCS`, and `TestCloudPluginS3`, the deferred `Stop()` calls ignore errors. If a plugin ever fails to shut down cleanly, those failures will be silent. You could wrap the defers to surface issues without much noise:

```go
defer func() {
	if err := badgerPlugin.Stop(); err != nil {
		t.Errorf("failed to stop badger plugin: %v", err)
	}
}()
```

Also applies to: 224-276, 331-341

internal/config/config.go (1)

49-57: Plugin-aware config loading looks correct; minor note on dual `list` flags

The new `tempConfig`/`databaseConfig` structs, plugin defaults, and `plugin.ProcessConfig`/`plugin.ProcessEnvVars` integration give you a clean path from YAML and env into plugin options while preserving backward compatibility when no `config:` section is present. The merging logic for `database.blob`/`database.metadata` into per-plugin maps is sensible and uses `maps.Copy` correctly.

A tiny edge case in `ParseCmdlineArgs`: if a user passes both `--blob=list` and `--metadata=list`, only the first check that matches will print (blob or metadata), and the function returns `ErrPluginListRequested` immediately. If you ever care about that UX, you could print both lists before returning. Otherwise, the current behavior is reasonable.

Also applies to: 186-297, 305-312
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
- PLUGIN_DEVELOPMENT.md (1 hunks)
- README.md (1 hunks)
- chain/chain_test.go (1 hunks)
- cmd/dingo/main.go (3 hunks)
- config.go (0 hunks)
- database/database.go (1 hunks)
- database/database_test.go (1 hunks)
- database/plugin/blob/aws/commit_timestamp.go (1 hunks)
- database/plugin/blob/aws/database.go (4 hunks)
- database/plugin/blob/aws/options.go (1 hunks)
- database/plugin/blob/aws/plugin.go (1 hunks)
- database/plugin/blob/aws/plugin_test.go (1 hunks)
- database/plugin/blob/badger/database.go (3 hunks)
- database/plugin/blob/badger/options.go (1 hunks)
- database/plugin/blob/badger/options_test.go (1 hunks)
- database/plugin/blob/badger/plugin.go (1 hunks)
- database/plugin/blob/blob.go (1 hunks)
- database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
- database/plugin/blob/gcs/database.go (4 hunks)
- database/plugin/blob/gcs/options.go (1 hunks)
- database/plugin/blob/gcs/options_test.go (1 hunks)
- database/plugin/blob/gcs/plugin.go (1 hunks)
- database/plugin/blob/gcs/plugin_test.go (1 hunks)
- database/plugin/blob/store.go (2 hunks)
- database/plugin/metadata/sqlite/database.go (2 hunks)
- database/plugin/metadata/sqlite/options.go (1 hunks)
- database/plugin/metadata/sqlite/options_test.go (1 hunks)
- database/plugin/metadata/sqlite/plugin.go (1 hunks)
- database/plugin/metadata/sqlite/transaction.go (1 hunks)
- database/plugin/metadata/store.go (2 hunks)
- database/plugin/option.go (5 hunks)
- database/plugin/plugin.go (1 hunks)
- database/plugin/register.go (3 hunks)
- database/plugin/register_test.go (1 hunks)
- dingo.yaml.example (1 hunks)
- go.mod (2 hunks)
- internal/config/config.go (4 hunks)
- internal/config/config_test.go (1 hunks)
- internal/integration/benchmark_test.go (1 hunks)
- internal/integration/integration_test.go (1 hunks)
- internal/node/load.go (1 hunks)
- internal/node/node.go (0 hunks)
- node.go (1 hunks)
💤 Files with no reviewable changes (2)
- config.go
- internal/node/node.go
✅ Files skipped from review due to trivial changes (1)
- database/plugin/blob/gcs/commit_timestamp.go
🚧 Files skipped from review as they are similar to previous changes (14)
- PLUGIN_DEVELOPMENT.md
- database/plugin/blob/aws/plugin_test.go
- database/plugin/blob/aws/options.go
- cmd/dingo/main.go
- node.go
- database/plugin/blob/gcs/options_test.go
- database/plugin/register_test.go
- database/plugin/blob/gcs/options.go
- database/plugin/blob/badger/options.go
- database/plugin/metadata/sqlite/options_test.go
- internal/node/load.go
- dingo.yaml.example
- README.md
- database/plugin/metadata/sqlite/options.go
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-11-20T19:58:38.889Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T19:58:38.889Z
Learning: In Go, test dependencies (such as github.com/stretchr/testify) are correctly marked as direct dependencies in go.mod, not as indirect dependencies, even when they are only used in test files. This is a standard practice in Go's module system.
Applied to files:
go.mod
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: chain.Chain.AddBlock expects a github.com/blinklabs-io/gouroboros/ledger.Block. In tests, construct a database/models.Block only as a container and call its Decode() to obtain the gouroboros ledger.Block (which also satisfies github.com/blinklabs-io/gouroboros/ledger/common.Block) before inserting into the chain.
Applied to files:
chain/chain_test.go
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
chain/chain_test.go
internal/integration/benchmark_test.go
database/plugin/blob/badger/options_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Applied to files:
chain/chain_test.go
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-11-20T02:07:07.073Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T02:07:07.073Z
Learning: In the blinklabs-io/dingo repository, range over int is valid Go syntax. The repository uses Go 1.24.0 (minimum Go 1.23), and range over int has been supported since Go 1.22. This syntax should not be flagged as invalid.
Applied to files:
database/plugin/blob/gcs/database.go
🧬 Code graph analysis (11)
database/plugin/blob/store.go (2)
database/plugin/plugin.go (1)
StartPlugin (25-47)

database/plugin/register.go (1)

PluginTypeBlob (28-28)
database/plugin/metadata/sqlite/database.go (3)
database/plugin/metadata/sqlite/options.go (4)
WithDataDir (42-46)
WithLogger (26-30)
WithPromRegistry (33-39)
SqliteOptionFunc (23-23)

database/plugin/metadata/store.go (1)

New (154-171)

database/models/models.go (1)

MigrateModels (18-51)
database/plugin/plugin.go (1)
database/plugin/register.go (3)
PluginType (24-24)
GetPlugin (111-120)
PluginTypeName (31-40)
chain/chain_test.go (1)
database/plugin/log.go (1)
Logger(18-27)
internal/integration/benchmark_test.go (8)
database/database.go (2)
New (119-162)
Config (34-40)

database/plugin/blob/aws/database.go (1)

New (46-83)

database/plugin/blob/badger/database.go (1)

New (45-100)

database/plugin/blob/gcs/database.go (1)

New (44-65)

database/plugin/blob/store.go (1)

New (35-52)

database/plugin/metadata/store.go (1)

New (154-171)

internal/config/config.go (1)

Config (79-99)

config.go (1)

Config (33-54)
database/plugin/metadata/sqlite/plugin.go (5)
database/plugin/blob/badger/plugin.go (1)
NewFromCmdlineOptions (94-108)

database/plugin/blob/gcs/plugin.go (1)

NewFromCmdlineOptions (82-98)

database/plugin/option.go (1)

PluginOption (35-43)

database/plugin/metadata/sqlite/options.go (2)

SqliteOptionFunc (23-23)
WithDataDir (42-46)

database/plugin/metadata/sqlite/database.go (1)

NewWithOptions (61-76)
database/plugin/blob/badger/options_test.go (2)
database/plugin/blob/badger/database.go (1)
BlobStoreBadger (32-42)

database/plugin/blob/badger/options.go (6)

WithDataDir (42-46)
WithBlockCacheSize (49-53)
WithIndexCacheSize (56-60)
WithLogger (26-30)
WithPromRegistry (33-39)
WithGc (63-67)
database/plugin/metadata/store.go (4)
database/plugin/blob/store.go (1)
New (35-52)

database/plugin/metadata/sqlite/database.go (1)

New (48-58)

database/plugin/plugin.go (1)

StartPlugin (25-47)

database/plugin/register.go (1)

PluginTypeMetadata (27-27)
internal/config/config_test.go (1)
internal/config/config.go (1)
LoadConfig(166-328)
database/database.go (4)
database/plugin/blob/badger/database.go (1)
New (45-100)

database/plugin/blob/store.go (1)

New (35-52)

database/plugin/metadata/store.go (1)

New (154-171)

database/immutable/immutable.go (1)

New (40-48)
database/plugin/blob/badger/plugin.go (5)
database/plugin/register.go (3)
Register (52-54)
PluginEntry (42-48)
PluginTypeBlob (28-28)

database/plugin/blob/gcs/plugin.go (1)

NewFromCmdlineOptions (82-98)

database/plugin/metadata/sqlite/plugin.go (1)

NewFromCmdlineOptions (60-76)

database/plugin/plugin.go (1)

Plugin (19-22)

database/plugin/blob/store.go (1)

New (35-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: nilaway
- GitHub Check: docker (ubuntu-latest, amd64)
- GitHub Check: Analyze (go)
🔇 Additional comments (20)
database/plugin/metadata/sqlite/transaction.go (1)
108-108: LGTM! Modern Go idiom.

The change from `interface{}` to `any` modernizes the code to use the Go 1.18+ type alias, improving readability with no functional impact.

database/plugin/blob/aws/commit_timestamp.go (1)

41-44: LGTM! Formatting improvement enhances readability.

The multi-line formatting of the `logger.Errorf` call improves code readability with no functional impact.
database/database.go (1)
125-138: LGTM: Config copying and defaulting logic is well-implemented.

The defensive copy of the config (lines 125-127) prevents mutation of the caller's config, which is good practice. The plugin defaulting logic (lines 129-134) is clear and correct. The expanded documentation at lines 135-138 directly addresses the DataDir defaulting concern from the previous review, making the behavior explicit and predictable.
database/plugin/blob/blob.go (1)
18-18: LGTM: AWS plugin registration follows established pattern.

The blank import for the AWS S3 plugin is consistent with the existing badger and gcs plugin imports, enabling automatic registration at init time.
internal/config/config_test.go (1)
261-301: LGTM: Well-structured test for database plugin configuration.

The test properly validates the new database section loading, including plugin name extraction and per-plugin configuration. The test follows good practices with proper temp file cleanup and clear assertions.
go.mod (1)
24-24: LGTM: Dependency promotion is correct.

Both `github.com/spf13/pflag` and `google.golang.org/api` are now used directly in the codebase (pflag in `database/plugin/register.go` for CLI flag handling, and google.golang.org/api in the GCS plugin). Promoting them from indirect to direct dependencies aligns with Go module conventions and the retrieved learnings.

Also applies to: 34-34
database/plugin/blob/gcs/plugin_test.go (1)
24-82: LGTM: Comprehensive credential validation test.

The test properly covers all three credential scenarios (valid file, nonexistent file, empty path for ADC). The table-driven approach is well-structured, and the use of `t.TempDir()` ensures proper cleanup.

database/plugin/register.go (1)

56-65: LGTM: pflag migration is correctly implemented.

The migration from standard library `flag` to `spf13/pflag` is clean and maintains the same functionality. The updated signature aligns with the broader plugin system changes and the new CLI option handling.

chain/chain_test.go (1)

109-115: LGTM: Test config correctly updated for plugin system.

The database config has been properly updated to use the new plugin-based architecture, specifying `BlobPlugin: "badger"` and `MetadataPlugin: "sqlite"` instead of the removed `BlobCacheSize` field. These plugin names match the defaults defined in `internal/config/config.go`.
24-31: Disregard this review comment: the premise is factually incorrect.

The review claims that "Badger and GCS plugins wrap their flag writes in `init()` with `Lock()`/`Unlock()`". However, verification shows this is false:

- Badger: uses `Lock()`/`Unlock()` in `initCmdlineOptions()` during `init()` (lines 41-42)
- GCS: does NOT use `Lock()`/`Unlock()` during `init()` or anywhere else; only `RLock()`/`RUnlock()` when reading in `NewFromCmdlineOptions()` (lines 83-86)
- AWS: follows the same pattern as GCS; no write-side locking, only read-side locking (lines 69-73)
GCS and AWS are already consistent with each other. Only Badger differs by protecting initialization writes. The review's assertion that GCS uses write-side locking contradicts the actual code.
Likely an incorrect or invalid review comment.
database/plugin/plugin.go (1)
24-47: LGTM! Clean plugin lifecycle management.

The `StartPlugin` function provides clear separation of concerns: registry lookup, existence check, and lifecycle management. Error messages include plugin type and name context, and the use of `%w` properly preserves the error chain for debugging.

database/database_test.go (1)
29-35: LGTM! Test configuration correctly migrated to plugin system.

The test config now uses `BlobPlugin` and `MetadataPlugin` fields with appropriate defaults ("badger" and "sqlite"), consistent with the plugin-based architecture introduced in this PR.

database/plugin/blob/badger/options_test.go (1)
25-132: LGTM! Comprehensive test coverage for option functions.

The test suite thoroughly validates all option setters with clear assertions. The inclusion of `TestOptionsCombination` (lines 107-132) is particularly valuable, ensuring that multiple options can be composed correctly, a key property of the functional options pattern.

database/plugin/blob/badger/plugin.go (1)
39-46: GC default correctly aligned.

The initialization properly sets `gcEnabled = true` (line 45), which aligns with the `DefaultValue: true` in the plugin option definition (line 86). This resolves the previous concern about mismatched defaults between code and configuration.

database/plugin/blob/store.go (1)
34-52: LGTM! Clean registry-based plugin initialization.

The refactored `New` function properly delegates to the plugin system, with clear error messages for both missing plugins and type assertion failures. This aligns well with the metadata store's equivalent implementation.

database/plugin/metadata/store.go (1)
153-171: LGTM! Consistent plugin-based initialization.

The refactored `New` function mirrors the blob store's implementation, providing consistent plugin lifecycle management across storage backends. Type assertion and error handling are appropriate.

database/plugin/blob/aws/database.go (1)
45-83: S3 option-based construction and timeout-aware `Start` look good

The refactor to `NewWithOptions` + `Start` is clean:

- `New` now only parses/validates the `s3://bucket[/prefix]` URI and forwards to options.
- `NewWithOptions` has no side effects beyond applying options and setting a safe default logger.
- `Start` validates `bucket`, applies a configurable (or default 30s) timeout for `config.LoadDefaultConfig`, supports an explicit `region` override, and runs shared initialization via `init()`.

This matches the plugin lifecycle pattern and should behave predictably across backends.
Also applies to: 85-101, 172-217
internal/integration/benchmark_test.go (1)
18-83: Benchmarks and block loader look solid; zero-block edge case handled
`loadBlockData` correctly locates the immutable testdata, iterates blocks, and now fails fast when none are found, avoiding the prior potential infinite loop. Backend setup (including `gcs://` / `s3://` URIs) and the `BenchmarkStorageBackends`/`BenchmarkTestLoad` loops are straightforward and correctly use `b.Loop()` and transactions around realistic block data.

Also applies to: 85-150, 174-237, 239-302
database/plugin/metadata/sqlite/database.go (1)
60-76: Close sqlite DB on `Start` error now that plugin startup owns the lifecycle.

`Start` opens the sqlite DB and assigns `d.db` at line 186, then calls `d.init()` and `AutoMigrate()`. On any failure after `d.db` is assigned (at lines 187, 193, or 198) the method returns an error but leaves `d.db` open. Since `StartPlugin` discards the plugin instance on `Start()` failure, `Stop()` will never be called to clean up.

Close the DB on error paths:
d.db = metadataDb if err := d.init(); err != nil { - // MetadataStoreSqlite is available for recovery, so return error but keep instance - return err + _ = d.Close() + return err } // Create table schemas d.logger.Debug(fmt.Sprintf("creating table: %#v", &CommitTimestamp{})) if err := d.db.AutoMigrate(&CommitTimestamp{}); err != nil { + _ = d.Close() return err } for _, model := range models.MigrateModels { d.logger.Debug(fmt.Sprintf("creating table: %#v", model)) if err := d.db.AutoMigrate(model); err != nil { + _ = d.Close() return err } }Remove the stale "available for recovery" comment at line 188.
Likely an incorrect or invalid review comment.
database/plugin/option.go (1)
45-88: Verification complete: uint64 migration is correctly implemented across the codebase.

After comprehensive verification, all `PluginOptionTypeUint` usages are correctly configured:

- Badger plugin (lines 67, 75 in `database/plugin/blob/badger/plugin.go`): both uint options use `uint64` defaults and point to `uint64` struct fields (`blockCacheSize` and `indexCacheSize`)
- No other plugins define uint options, reducing surface area for type mismatches
- Core handlers (`AddToFlagSet`, `ProcessEnvVars`, `ProcessConfig`) all correctly handle `uint64` types

The type assertion panics you flagged as a risk are not present in the current codebase.
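The option-handling discussed above (typed `Dest` pointers plus int-to-uint64 coercion of values decoded from config files) can be sketched roughly as follows. The `pluginOption` type and its field names here are illustrative stand-ins, not the actual dingo API:

```go
package main

import "fmt"

// pluginOption is a simplified stand-in for a registered option with a typed
// destination pointer. Only the uint case is shown.
type pluginOption struct {
	Name string
	Dest *uint64 // in the real system this is `any`, asserted per option type
}

// processConfigValue mirrors the coercion rule: accept int (as YAML decoders
// typically produce for small numbers) or uint64, and reject negative ints.
func (p *pluginOption) processConfigValue(raw any) error {
	switch v := raw.(type) {
	case int:
		if v < 0 {
			return fmt.Errorf("invalid value for option %q: negative value", p.Name)
		}
		*p.Dest = uint64(v)
	case uint64:
		*p.Dest = v
	default:
		return fmt.Errorf("invalid value for option %q: got %T", p.Name, raw)
	}
	return nil
}

func main() {
	var blockCacheSize uint64
	opt := &pluginOption{Name: "block-cache", Dest: &blockCacheSize}
	if err := opt.processConfigValue(805306368); err != nil {
		panic(err)
	}
	fmt.Println(blockCacheSize) // 805306368
	// A negative config value is rejected rather than wrapped around.
	fmt.Println(opt.processConfigValue(-1) != nil) // true
}
```

Keeping the destination strongly typed per option avoids the class of type-assertion panics the earlier review worried about.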
a604363 to
c4aab25
Compare
Actionable comments posted: 3
♻️ Duplicate comments (1)
database/plugin/blob/gcs/plugin.go (1)
61-88: Mutex protection during flag parsing remains a concern.

While `initCmdlineOptions()` now has proper locking, the `Dest` pointers (lines 76, 83) still allow external writes to `cmdlineOptions` fields during flag parsing via `pflag` and `option.ProcessEnvVars`/`ProcessConfig`. These writes occur without lock protection, creating a race with `NewFromCmdlineOptions` reads.

The badger plugin addresses this by calling all option processing under the lock. Verify that the plugin framework serializes these writes before `NewFromCmdlineOptions` is called, or add locking around the write paths.

#!/bin/bash
# Check how other plugins handle this - specifically badger's pattern
rg -n "ProcessEnvVars|ProcessConfig|PopulateCmdlineOptions" --type go -B2 -A5 | head -60
🧹 Nitpick comments (8)
database/plugin/option.go (1)
172-181: Consider documenting the uint64/int type coercion behavior.

The code accepts both `int` and `uint64` values from config files, converting non-negative `int` values to `uint64`. While this provides flexibility, it could lead to subtle issues if users provide negative values or if there are precision concerns.

Consider adding a comment explaining this coercion:
case PluginOptionTypeUint: switch value := optionData.(type) { + // Accept int values from YAML/config and convert to uint64 if non-negative case int: if value < 0 { return fmt.Errorf("invalid value for option '%s': negative value: %T", p.Name, optionData) } *(p.Dest.(*uint64)) = uint64(value) case uint64: *(p.Dest.(*uint64)) = value default: return fmt.Errorf("invalid value for option '%s': expected uint64 or int and got %T", p.Name, optionData) }database/plugin/blob/aws/database.go (1)
179-183: Consider making the default timeout configurable.

The hardcoded 30-second default timeout for AWS config loading might be insufficient for slow networks or regions with high latency. Consider exposing this as a configurable option or increasing the default.

If timeout configurability is desired, ensure there's a `WithTimeout` option function in `options.go` and document the default:

// Use configured timeout or default to 60 seconds for better reliability
timeout := d.timeout
if timeout == 0 {
    timeout = 60 * time.Second
}

cmd/dingo/main.go (1)
76-117: Handle `--blob list` and `--metadata list` simultaneously in `listPlugins`.

Right now `listPlugins` returns after the first "list" match, so `dingo --blob list --metadata list` will only show blob plugins. You could accumulate both sections into a single buffer and return once, matching the combined output style of `listAllPlugins`.

internal/integration/integration_test.go (2)
116-126: Magic number for plugin count assertion.

The test asserts `len(blobPlugins) < 3`, but this assumes exactly 3 blob plugins are always registered. This could become fragile as plugins are added or removed.

Consider asserting on specific expected plugins rather than a count:

// Check for expected plugins by name instead of count
expectedBlobs := []string{"badger", "gcs", "s3"}
for _, name := range expectedBlobs {
    if findPluginEntry(blobPlugins, name) == nil {
        t.Errorf("expected blob plugin %q not found", name)
    }
}
347-357: findPluginEntry returns a pointer to a copy of the loop variable.

The function returns `&p` where `p` is the loop variable. In Go, this is safe because `p` is a copy of each slice element, and the return happens immediately, but it's a subtle pattern.

For clarity, consider returning by value or indexing into the slice:

func findPluginEntry(
    plugins []plugin.PluginEntry,
    name string,
) *plugin.PluginEntry {
-   for _, p := range plugins {
+   for i := range plugins {
+       p := &plugins[i]
        if p.Name == name {
-           return &p
+           return p
        }
    }
    return nil
}

database/plugin/blob/badger/database.go (1)
149-158: Move database initialization and GC startup from New() to Start() for consistency with the plugin lifecycle pattern.

SQLite explicitly defers database initialization to `Start()` (see the comment at `database/plugin/metadata/sqlite/database.go:74`), while Badger performs initialization in `New()`. This inconsistency could lead to resource leaks if `GetPlugin()` is called without a corresponding `Start()`, though current codebase usage always calls `Start()` explicitly.

The pattern is correct in SQLite: heavy initialization (database open, GC startup) belongs in `Start()`, not `New()`. Consider moving Badger's `badger.Open()`, `gcTicker` creation, and GC goroutine launch from `init()` (called in `New()`) into the `Start()` method to align with the plugin lifecycle contract.
123-139: Allow listing both blob and metadata plugins in a single invocation.

Current control flow returns after the first "list" check, so `dingo --blob list --metadata list` will only show blob plugins and exit before listing metadata:

if c.BlobPlugin == "list" {
    // prints blob list
    return ErrPluginListRequested
}
if c.MetadataPlugin == "list" {
    // prints metadata list
    return ErrPluginListRequested
}

If someone sets both flags, it's more user-friendly to print both lists and then return once:
- if c.BlobPlugin == "list" { - fmt.Println("Available blob plugins:") - ... - return ErrPluginListRequested - } - if c.MetadataPlugin == "list" { - fmt.Println("Available metadata plugins:") - ... - return ErrPluginListRequested - } + listed := false + if c.BlobPlugin == "list" { + fmt.Println("Available blob plugins:") + for _, p := range plugin.GetPlugins(plugin.PluginTypeBlob) { + fmt.Printf(" %s: %s\n", p.Name, p.Description) + } + listed = true + } + if c.MetadataPlugin == "list" { + fmt.Println("Available metadata plugins:") + for _, p := range plugin.GetPlugins(plugin.PluginTypeMetadata) { + fmt.Printf(" %s: %s\n", p.Name, p.Description) + } + listed = true + } + if listed { + return ErrPluginListRequested + }Not critical, but it smooths the UX when both lists are requested.
58-66: Shallow map conversion may leave nested plugin config maps in mixed types.

`convertToAnyMap` only converts the top-level `map[string]interface{}` to `map[any]any`, and the database section logic similarly only normalizes one level deep:

func convertToAnyMap(m map[string]interface{}) map[any]any {
    result := make(map[any]any)
    for k, v := range m {
        result[k] = v
    }
    return result
}
...
} else if val, ok := v.(map[string]any); ok {
    blobConfig[key] = convertToAnyMap(val)
}

If a plugin's YAML options include nested maps, those inner maps will still be `map[string]any` (or `map[any]any` depending on how `yaml.Unmarshal` populated them), so plugins that assume fully-normalized `map[any]any` might hit type-assertion friction.

If plugin option handlers care about consistent types, consider either:

- Normalizing recursively (walk nested maps and convert them), or
- Standardizing on `map[string]any` end-to-end for plugin config and updating `plugin.ProcessConfig` / plugin option parsing accordingly.

If existing plugin code already tolerates the mixed map types, this can stay as-is, but it's worth double-checking to avoid subtle panics later.
Also applies to: 238-280
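A recursive normalization along the lines of the first option above might look like the following sketch. The function names are hypothetical, not the actual `internal/config` API:

```go
package main

import "fmt"

// normalizeMap recursively converts nested map[string]any values to
// map[any]any, so plugin config sections have one consistent map type
// regardless of how the YAML decoder populated them.
func normalizeMap(m map[string]any) map[any]any {
	result := make(map[any]any, len(m))
	for k, v := range m {
		switch vv := v.(type) {
		case map[string]any:
			result[k] = normalizeMap(vv)
		case map[any]any:
			result[k] = normalizeAnyMap(vv)
		default:
			result[k] = v
		}
	}
	return result
}

// normalizeAnyMap handles maps the decoder already produced as map[any]any,
// still recursing into any map[string]any values nested inside them.
func normalizeAnyMap(m map[any]any) map[any]any {
	result := make(map[any]any, len(m))
	for k, v := range m {
		if vv, ok := v.(map[string]any); ok {
			result[k] = normalizeMap(vv)
		} else {
			result[k] = v
		}
	}
	return result
}

func main() {
	cfg := map[string]any{
		"badger": map[string]any{
			"nested": map[string]any{"gc": true},
		},
	}
	out := normalizeMap(cfg)
	inner := out["badger"].(map[any]any)["nested"].(map[any]any)
	fmt.Println(inner["gc"]) // true
}
```

With this shape, plugin option handlers can assert `map[any]any` at any depth without caring how the YAML layer decoded intermediate levels.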
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
- PLUGIN_DEVELOPMENT.md (1 hunks)
- README.md (1 hunks)
- chain/chain_test.go (1 hunks)
- cmd/dingo/main.go (3 hunks)
- config.go (0 hunks)
- database/database.go (1 hunks)
- database/database_test.go (1 hunks)
- database/plugin/blob/aws/commit_timestamp.go (1 hunks)
- database/plugin/blob/aws/database.go (4 hunks)
- database/plugin/blob/aws/options.go (1 hunks)
- database/plugin/blob/aws/plugin.go (1 hunks)
- database/plugin/blob/aws/plugin_test.go (1 hunks)
- database/plugin/blob/badger/database.go (3 hunks)
- database/plugin/blob/badger/options.go (1 hunks)
- database/plugin/blob/badger/options_test.go (1 hunks)
- database/plugin/blob/badger/plugin.go (1 hunks)
- database/plugin/blob/blob.go (1 hunks)
- database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
- database/plugin/blob/gcs/database.go (4 hunks)
- database/plugin/blob/gcs/options.go (1 hunks)
- database/plugin/blob/gcs/options_test.go (1 hunks)
- database/plugin/blob/gcs/plugin.go (1 hunks)
- database/plugin/blob/gcs/plugin_test.go (1 hunks)
- database/plugin/blob/store.go (2 hunks)
- database/plugin/metadata/sqlite/database.go (2 hunks)
- database/plugin/metadata/sqlite/options.go (1 hunks)
- database/plugin/metadata/sqlite/options_test.go (1 hunks)
- database/plugin/metadata/sqlite/plugin.go (1 hunks)
- database/plugin/metadata/sqlite/transaction.go (1 hunks)
- database/plugin/metadata/store.go (2 hunks)
- database/plugin/option.go (5 hunks)
- database/plugin/plugin.go (1 hunks)
- database/plugin/register.go (3 hunks)
- database/plugin/register_test.go (1 hunks)
- dingo.yaml.example (1 hunks)
- go.mod (2 hunks)
- internal/config/config.go (4 hunks)
- internal/config/config_test.go (1 hunks)
- internal/integration/benchmark_test.go (1 hunks)
- internal/integration/integration_test.go (1 hunks)
- internal/node/load.go (1 hunks)
- internal/node/node.go (0 hunks)
- node.go (1 hunks)
💤 Files with no reviewable changes (2)
- config.go
- internal/node/node.go
🚧 Files skipped from review as they are similar to previous changes (19)
- database/plugin/blob/gcs/plugin_test.go
- node.go
- database/plugin/register.go
- database/plugin/metadata/sqlite/options.go
- chain/chain_test.go
- database/plugin/register_test.go
- go.mod
- database/plugin/blob/aws/plugin.go
- database/plugin/blob/blob.go
- database/plugin/blob/gcs/options_test.go
- database/plugin/blob/gcs/commit_timestamp.go
- database/plugin/blob/gcs/database.go
- database/plugin/blob/aws/plugin_test.go
- database/plugin/blob/badger/options_test.go
- database/plugin/metadata/store.go
- internal/config/config_test.go
- database/plugin/blob/gcs/options.go
- database/plugin/blob/aws/options.go
- README.md
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Applied to files:
internal/integration/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.
Applied to files:
internal/integration/benchmark_test.go
🧬 Code graph analysis (10)
database/plugin/metadata/sqlite/plugin.go (7)
database/plugin/register.go (2)
Register(52-54)PluginEntry(42-48)database/plugin/blob/aws/plugin.go (1)
NewFromCmdlineOptions(67-85)database/plugin/blob/badger/plugin.go (1)
NewFromCmdlineOptions(94-109)database/plugin/blob/gcs/plugin.go (1)
NewFromCmdlineOptions(90-106)database/plugin/option.go (2)
PluginOption(35-43)PluginOptionTypeString(29-29)database/plugin/metadata/sqlite/options.go (2)
SqliteOptionFunc(23-23)WithDataDir(42-46)database/plugin/metadata/sqlite/database.go (1)
NewWithOptions(61-76)
cmd/dingo/main.go (2)
database/plugin/register.go (4)
GetPlugins(101-109)PluginTypeBlob(28-28)PluginTypeMetadata(27-27)PopulateCmdlineOptions(56-65)internal/config/config.go (4)
DefaultBlobPlugin(50-50)DefaultMetadataPlugin(51-51)LoadConfig(166-328)WithContext(37-39)
internal/node/load.go (1)
database/plugin/log.go (1)
Logger(18-27)
database/database.go (3)
database/plugin/blob/store.go (1)
New(35-52)database/plugin/metadata/store.go (1)
New(154-171)database/immutable/immutable.go (1)
New(40-48)
internal/config/config.go (5)
database/database.go (3)
New(119-162)Config(34-40)Database(43-48)database/plugin/blob/badger/database.go (1)
New(47-102)database/plugin/blob/store.go (1)
New(35-52)config.go (1)
Config(33-54)database/plugin/register.go (2)
GetPlugins(101-109)PluginTypeBlob(28-28)
database/plugin/blob/gcs/plugin.go (7)
database/plugin/register.go (2)
Register(52-54)PluginEntry(42-48)database/plugin/blob/aws/plugin.go (1)
NewFromCmdlineOptions(67-85)database/plugin/plugin.go (2)
Plugin(19-22)NewErrorPlugin(38-40)database/plugin/blob/gcs/options.go (3)
BlobStoreGCSOptionFunc(23-23)WithBucket(42-46)WithCredentialsFile(49-53)database/plugin/blob/aws/options.go (1)
WithBucket(43-47)database/plugin/blob/aws/database.go (1)
NewWithOptions(86-101)database/plugin/blob/gcs/database.go (1)
NewWithOptions(68-82)
database/plugin/metadata/sqlite/database.go (9)
database/plugin/blob/aws/options.go (2)
WithLogger(27-31)WithPromRegistry(34-40)database/plugin/blob/gcs/options.go (2)
WithLogger(26-30)WithPromRegistry(33-39)config.go (2)
WithLogger(156-160)Config(33-54)database/plugin/blob/badger/database.go (1)
New(47-102)database/plugin/metadata/store.go (1)
New(154-171)node.go (1)
New(49-62)database/immutable/immutable.go (1)
New(40-48)database/plugin/metadata/sqlite/commit_timestamp.go (2)
CommitTimestamp(29-32)CommitTimestamp(34-36)database/models/models.go (1)
MigrateModels(18-51)
database/plugin/blob/badger/database.go (4)
database/plugin/blob/badger/options.go (4)
BlobStoreBadgerOptionFunc(23-23)WithLogger(26-30)WithBlockCacheSize(49-53)WithIndexCacheSize(56-60)database/plugin/blob/badger/plugin.go (2)
DefaultBlockCacheSize(25-25)DefaultIndexCacheSize(26-26)config.go (1)
WithLogger(156-160)database/plugin/blob/badger/logger.go (1)
NewBadgerLogger(28-35)
internal/integration/benchmark_test.go (3)
database/database.go (2)
New(119-162)Config(34-40)internal/config/config.go (1)
Config(79-99)config.go (1)
Config(33-54)
database/plugin/plugin.go (1)
database/plugin/register.go (3)
PluginType(24-24)GetPlugin(111-120)PluginTypeName(31-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: docker (ubuntu-latest, amd64)
- GitHub Check: nilaway
- GitHub Check: lint
- GitHub Check: go-test (1.24.x, macos-latest)
- GitHub Check: go-test (1.24.x, ubuntu-latest)
- GitHub Check: Analyze (go)
🔇 Additional comments (32)
database/plugin/blob/aws/commit_timestamp.go (1)
41-44: LGTM - Formatting improvement.

The multi-line formatting improves readability without affecting behavior.
database/plugin/metadata/sqlite/options_test.go (1)
24-57: LGTM - Comprehensive option tests.

The test coverage for option setters is thorough and follows standard testing patterns.
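For context, the functional-options shape these option tests exercise looks roughly like this minimal sketch. The `store` type and the `WithCacheSize`/`WithGc` names are illustrative, not the actual plugin API:

```go
package main

import "fmt"

// store is a stand-in for a backend with configurable settings.
type store struct {
	cacheSize uint64
	gcEnabled bool
}

// optionFunc mutates a store; each WithX helper returns one.
type optionFunc func(*store)

func WithCacheSize(n uint64) optionFunc {
	return func(s *store) { s.cacheSize = n }
}

func WithGc(enabled bool) optionFunc {
	return func(s *store) { s.gcEnabled = enabled }
}

// newStore applies defaults first, then lets options override them.
func newStore(opts ...optionFunc) *store {
	s := &store{cacheSize: 1 << 20, gcEnabled: true} // defaults
	for _, o := range opts {
		o(s)
	}
	return s
}

func main() {
	s := newStore(WithCacheSize(4096), WithGc(false))
	fmt.Println(s.cacheSize, s.gcEnabled) // 4096 false
}
```

The key property the tests verify is composition: any subset of options can be passed in any order, and each one only touches its own field.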
database/plugin/blob/aws/database.go (2)
85-101: LGTM - Clean option-based constructor.

The `NewWithOptions` constructor follows the established plugin pattern with proper option application and sensible defaults.

172-211: LGTM - Proper plugin lifecycle implementation.

The `Start` method correctly:
- Validates required fields (bucket)
- Loads AWS config with timeout
- Supports region override
- Handles errors appropriately
database/plugin/metadata/sqlite/transaction.go (1)
108-108: LGTM - Modernization to use the `any` type alias.

The change from `interface{}` to `any` is a stylistic improvement with no behavioral impact, aligning with modern Go conventions.

PLUGIN_DEVELOPMENT.md (1)
1-279: LGTM - Excellent plugin development documentation.

This comprehensive guide provides clear instructions for plugin development, covering:
- Plugin types and lifecycle
- Registration and options
- Configuration precedence
- Step-by-step development guide
- Testing strategies
- Best practices
The previous markdown linting issue has been addressed.
database/database.go (2)
115-138: LGTM - Clear documentation of defaulting behavior.

The updated documentation clearly explains the DataDir defaulting behavior, addressing the previous review concern. The logic for config copying and applying defaults is clean and prevents unintended mutations.

139-150: LGTM - Simplified plugin initialization.

The new plugin-based initialization correctly delegates to `blob.New` and `metadata.New` with just plugin names, leveraging the plugin registry for construction. This is a clean separation of concerns.

database/database_test.go (1)
29-35: LGTM - Test config updated for plugin system.

The test configuration correctly specifies plugin names and maintains the in-memory testing approach with an empty DataDir.
database/plugin/option.go (1)
74-79: uint64 migration is complete and correct.

Verification confirms that all plugins using `PluginOptionTypeUint` have been properly updated to uint64:

- badger plugin: the `cmdlineOptions` struct correctly defines `blockCacheSize` and `indexCacheSize` as `uint64` (lines 32-33), with matching `DefaultValue` and `Dest` assignments
- gcs plugin: uses only string option types
- aws plugin: uses only string option types
- sqlite plugin: defines no uint options
No breaking changes remain outstanding.
database/plugin/plugin.go (1)
17-65: ErrorPlugin and StartPlugin wiring looks correct and robust.

`ErrorPlugin` cleanly defers constructor errors to `Start()`, and `StartPlugin` gives good context for missing or failing plugins while keeping the lifecycle centralized. No changes needed here.

cmd/dingo/main.go (1)
148-183: CLI plugin flags and config override behavior look sound.

Using persistent `--blob`/`-b` and `--metadata`/`-m` flags with defaults from `config.DefaultBlobPlugin`/`DefaultMetadataPlugin`, and only overriding `cfg.BlobPlugin`/`cfg.MetadataPlugin` when flags differ from defaults, preserves env/YAML precedence while still allowing explicit CLI override. The pre-run listing before config load is also a nice touch.

Also applies to: 187-191
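The override-only-when-changed precedence rule can be illustrated with a small sketch. The `resolvePlugin` helper below is hypothetical; the real code compares the Cobra flag value against `config.DefaultBlobPlugin` inline:

```go
package main

import "fmt"

// resolvePlugin applies the precedence described above: an explicit CLI flag
// (detected as "differs from the compiled-in default") wins over the config
// file value, which in turn wins over the default.
func resolvePlugin(configValue, flagValue, defaultValue string) string {
	if flagValue != defaultValue {
		return flagValue // explicit CLI override
	}
	if configValue != "" {
		return configValue // YAML/env value preserved
	}
	return defaultValue
}

func main() {
	// YAML says "s3", flag left at default "badger": YAML wins.
	fmt.Println(resolvePlugin("s3", "badger", "badger")) // s3
	// Flag explicitly set to "gcs": flag wins over YAML.
	fmt.Println(resolvePlugin("s3", "gcs", "badger")) // gcs
	// Nothing set anywhere: default wins.
	fmt.Println(resolvePlugin("", "badger", "badger")) // badger
}
```

One known wrinkle of this scheme: explicitly passing the flag with its default value is indistinguishable from not passing it at all, so the config value still wins in that case (Cobra's `Changed` flag state would be needed to distinguish the two).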
internal/node/load.go (1)
55-61: Database config now correctly delegates to plugin-based blob/metadata selection.

Passing `cfg.BlobPlugin` and `cfg.MetadataPlugin` into `database.Config` aligns this loader with the new plugin system while keeping logger and data directory wiring intact. Looks good.

database/plugin/blob/badger/plugin.go (1)
23-92: Badger blob plugin registration and defaults look consistent.

The Badger plugin registration, cache-size defaults, GC default (`gcEnabled = true` + `DefaultValue: true`), and `NewFromCmdlineOptions` pattern are all coherent and aligned with the other blob plugins. No changes needed here.

Also applies to: 94-109
database/plugin/blob/store.go (1)
34-52: Centralized blob plugin startup via `plugin.StartPlugin` looks good.

`New` now consistently starts the selected blob plugin and verifies it implements `BlobStore` before returning it. The failure cases (missing plugin or wrong type) are clearly surfaced via errors. This is a clean abstraction boundary for blob storage.

database/plugin/metadata/sqlite/plugin.go (1)
23-35: SQLite metadata plugin registration and option handling are consistent with the new plugin pattern.

The sqlite metadata plugin correctly exposes a `data-dir` option, centralizes defaults via `initCmdlineOptions`, and uses `NewFromCmdlineOptions` + `NewErrorPlugin` to defer initialization errors to `Start()`. This aligns well with the other plugin implementations.

Also applies to: 39-55, 59-74
database/plugin/blob/badger/options.go (1)
23-67: Badger option functions are well-structured and side-effect free.

The `BlobStoreBadgerOptionFunc` pattern and individual `WithX` helpers are straightforward field setters, making Badger configuration composable and easy to extend. No changes needed.

database/plugin/blob/gcs/plugin.go (3)
54-59: LGTM!

The `initCmdlineOptions` function now properly acquires the mutex lock before any modifications, addressing the initialization pattern. The empty body is appropriate since GCS has no default values to set.

90-106: LGTM!

The `NewFromCmdlineOptions` implementation correctly:

- Uses `RLock`/`RUnlock` to protect reads from `cmdlineOptions`
- Builds options using the functional option pattern matching the AWS plugin
- Handles errors by returning `plugin.NewErrorPlugin(err)` to defer errors to `Start()`

26-44: Potential nil dereference in error path.

On line 37, if `os.Stat` returns a non-nil error that isn't `os.IsNotExist` (e.g., permission denied), the code still attempts to access `info.Mode()`. However, when `os.Stat` fails, `info` may be nil, causing a panic.

Consider restructuring the condition:

- if err != nil || info.Mode().Perm()&0o400 == 0 {
+ if err != nil {
+     return fmt.Errorf(
+         "GCS credentials file is not readable: %s",
+         credentialsFile,
+     )
+ }
+ if info.Mode().Perm()&0o400 == 0 {
      return fmt.Errorf(
          "GCS credentials file is not readable: %s",
          credentialsFile,
      )
  }

Likely an incorrect or invalid review comment.
database/plugin/metadata/sqlite/database.go (3)
53-76: LGTM! Default logger issue resolved.

The previous issue with `slog.NewTextHandler(nil, nil)` causing panics has been properly fixed. The code now:

- Constructs `MetadataStoreSqlite{}` without a logger
- Applies all options first
- Sets a safe default logger with `os.Stderr` as the writer (line 71)

This ensures no nil-writer panics occur.

139-203: Start() implementation looks solid.

The initialization logic properly:

- Handles both in-memory and disk-backed configurations
- Creates the data directory if needed, with appropriate error handling
- Configures WAL mode and performance pragmas for disk-backed stores
- Runs `init()` for tracing and vacuum scheduling
- Performs schema migrations via `AutoMigrate`

One minor observation: the `sync(OFF)` pragma (line 172) prioritizes performance over durability. This is acceptable for metadata that can be rebuilt, but ensure this trade-off is documented.

205-208: LGTM!

`Stop()` correctly delegates to `Close()`, maintaining the existing cleanup logic while satisfying the `plugin.Plugin` interface.

database/plugin/blob/badger/database.go (3)
46-56: LGTM! Option-based constructor with proper defaults.

The constructor now:

- Sets `gcEnabled: true` as the default (addressing the past review about honoring `WithGc`)
- Uses the `DefaultBlockCacheSize` and `DefaultIndexCacheSize` constants from `plugin.go`
- Applies options after defaults, allowing overrides

116-147: GC lifecycle implementation is well-structured.

The ticker-based GC with proper shutdown handling:

- Uses a `WaitGroup` to ensure goroutine completion
- Handles the stop signal via channel select
- Re-runs GC on success via `goto again` (unconventional but correct)
- Logs non-rewrite errors appropriately
160-175: LGTM! Proper GC cleanup in Close().

The cleanup sequence is correct:

- Stop the ticker
- Close the stop channel (signaling the goroutine)
- Wait for the GC goroutine via `WaitGroup`
- Close the database

The nil checks prevent panics on double-close or when GC is disabled.
internal/integration/integration_test.go (2)
3-14: LGTM! Imports are correctly structured.

The duplicate import issue from the previous review has been resolved:

- `aws` and `gcs` are named imports (required for `NewWithOptions` calls)
- `badger` and `sqlite` use blank imports for side-effect registration

16-59: Remove this review comment: GetPlugin correctly returns fresh instances.

The implementation shows `GetPlugin` calls `plugin.NewFromOptionsFunc()` on each invocation, which creates a fresh instance every time. This means the test correctly obtains independent plugin instances, and calling `Start()` on different instances in different tests presents no conflict.

Likely an incorrect or invalid review comment.
internal/integration/benchmark_test.go (3)
18-83: LGTM! Block loading with proper safeguards.

The `loadBlockData` function correctly:

- Handles immutable DB opening with proper error handling
- Uses a deferred `iter.Close()` for cleanup
- Checks for zero blocks at lines 66-68 (addressing the past review concern)
- Duplicates blocks when needed to reach the requested count

112-147: LGTM! Cloud backend URIs correctly prefixed.

The GCS and S3 configurations now correctly use:

- `"gcs://" + testBucket` for GCS (line 125)
- `"s3://" + testBucket` for S3 (line 143)

This addresses the previous review concerns about missing URI prefixes.

211-236: Go version requirements are satisfied; no action needed.

The project's `go.mod` specifies Go 1.24.0, which supports both features used:

- `for i := range 10` (range-over-integer): supported since Go 1.22
- `b.Loop()` (benchmark loop API): supported since Go 1.24

internal/config/config.go (1)
101-114: ParseCmdlineArgs is unused dead code; the actual codebase uses Cobra and properly handles config precedence.

The review identifies a potential issue in `ParseCmdlineArgs` (internal/config/config.go:101-165), but this function is never called anywhere in the codebase. The application uses the Cobra framework for CLI argument parsing instead, as shown in cmd/dingo/main.go.

The actual implementation in main.go (lines 160-200) correctly handles config precedence:

- `LoadConfig` is called first, loading YAML/env values into cfg
- Cobra flags are retrieved via `cmd.Root().PersistentFlags().GetString()`
- Config is only overridden if the flag differs from the default:

if blobPlugin != config.DefaultBlobPlugin {
    cfg.BlobPlugin = blobPlugin
}

This preserves YAML/env values when flags are not provided. The concern raised would only manifest if ParseCmdlineArgs were actually being invoked, but since it's unused, the codebase functions as intended.
Likely an incorrect or invalid review comment.
# Database configuration
database:
  # Blob storage plugin configuration
  blob:
    # Plugin to use for blob storage (badger, gcs, s3)
    plugin: "badger"
    # Configuration options for each plugin
    badger:
      # Data directory for BadgerDB storage
      data-dir: ".dingo/badger"
      # Block cache size in bytes (default: 805306368 ~768MB)
      block-cache-size: 805306368
      # Index cache size in bytes (default: 268435456 ~256MB)
      index-cache-size: 268435456
      # Enable garbage collection (default: true)
      gc: true
Badger cache YAML keys don’t match plugin option names (block-cache vs block-cache-size)
In `database.blob.badger`, the example uses:

- `block-cache-size`
- `index-cache-size`

but the Badger plugin registers options named `block-cache` and `index-cache` (see `database/plugin/blob/badger/plugin.go`). Assuming plugin config processing uses `PluginOption.Name` as the YAML key, these example keys won't actually configure the cache sizes when copied by users.
Consider aligning the YAML with the option names, e.g.:
- # Block cache size in bytes (default: 805306368 ~768MB)
- block-cache-size: 805306368
- # Index cache size in bytes (default: 268435456 ~256MB)
- index-cache-size: 268435456
+ # Block cache size in bytes (default: 805306368 ~768MB)
+ block-cache: 805306368
+ # Index cache size in bytes (default: 268435456 ~256MB)
+ index-cache: 268435456so the documented config matches the actual plugin option names.
🤖 Prompt for AI Agents
In dingo.yaml.example around lines 12 to 27, the Badger cache YAML keys
(`block-cache-size` and `index-cache-size`) do not match the Badger plugin
option names (`block-cache` and `index-cache`), so users copying this example
won't configure cache sizes; update the keys under database.blob.badger to use
`block-cache` and `index-cache` (and keep values the same), and verify any
documentation/comments mention these exact option names so they align with
PluginOption.Name used by the plugin.
func benchmarkStorageBackend(
    b *testing.B,
    backendName string,
    config *database.Config,
) {
    var tempDir string
    var err error
    // Create temporary directory for on-disk local backends
    if config.BlobPlugin == "badger" && config.DataDir != "" {
        tempDir, err = os.MkdirTemp(
            "",
            fmt.Sprintf("dingo-bench-%s-", backendName),
        )
        if err != nil {
            b.Fatalf("failed to create temp dir: %v", err)
        }
        defer os.RemoveAll(tempDir)
        config.DataDir = filepath.Join(tempDir, "data")
    }
Potential config mutation issue.

The function modifies `config.DataDir` at line 191, but this mutates the original config struct passed from `getTestBackends`. Since `getTestBackends` is called once per benchmark function and configs are shared across sub-benchmarks, this mutation could affect other tests if the config is reused.

Consider creating a copy of the config before modifying:
Consider creating a copy of the config before modifying:
func benchmarkStorageBackend(
b *testing.B,
backendName string,
config *database.Config,
) {
+ // Create a copy to avoid mutating shared config
+ configCopy := *config
+ config = &configCopy
+
var tempDir string🤖 Prompt for AI Agents
In internal/integration/benchmark_test.go around lines 174 to 192, the function
mutates the passed-in config by setting config.DataDir which can leak into other
sub-benchmarks; create a local copy of the config before any modification (e.g.,
shallow copy the struct into a new variable and use that copy for DataDir
changes and tempDir setup) so the original config passed from getTestBackends
remains unchanged, and ensure all subsequent uses and defer cleanup refer to the
copied config/paths.
- Dynamic plugin registration for blob (BadgerDB, S3, GCS) and metadata (SQLite) backends
- Thread-safe global registry with mutex protection and credential validation
- CLI integration with plugin listing and selection flags
- Comprehensive test suite including unit, integration, and security tests
- Configuration system with nested plugin settings and environment variable support
- Production-ready with proper error handling, logging, and security hardening

Signed-off-by: Chris Gianelloni <wolf31o2@blinklabs.io>
Force-pushed from c4aab25 to efc461b
agaffney left a comment:
There are a few bits of useless code that could be removed, but it looks good otherwise. The AI coding agent doesn't seem to understand that map[string]interface{} and map[string]any are exactly the same thing 😂
```go
		result[k] = v
	}
	return result
}
```
This function doesn't seem to do anything useful at all
```go
			stringInterfaceMap[keyStr] = vv
		}
	}
	blobConfig[k] = convertToStringAnyMap(stringInterfaceMap)
```
This step is unnecessary
```go
			stringInterfaceMap[keyStr] = vv
		}
	}
	metadataConfig[k] = convertToStringAnyMap(stringInterfaceMap)
```
This step is unnecessary
I'll definitely look into cleaning it up. This was getting a bit out of hand.
Closes #887
BREAKING CHANGE: Database configuration now uses plugin system instead of hardcoded backends
Summary by cubic
Implements a plugin system for storage backends and migrates database wiring to use it. Supports Badger/GCS/S3 (blob) and SQLite (metadata) with selection via YAML, env vars, or CLI; breaking: database config is now plugin-based.
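As a rough sketch of what plugin-based selection could look like in `dingo.yaml` — the `plugin` key and overall section layout are assumptions for illustration, though `block-cache`/`index-cache` match the Badger option names discussed earlier in this review:

```yaml
database:
  blob:
    plugin: badger            # or: s3, gcs
    badger:
      block-cache: 268435456
      index-cache: 268435456
  metadata:
    plugin: sqlite
```

Per the PR description, the same selection should also be possible through environment variables or CLI flags.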
New Features
Migration
Written for commit efc461b.
Summary by CodeRabbit
New Features
Changes
Documentation
Tests