fix(metrics): prevent thread leak by ensuring singleton initialization #1492
Conversation
Summary of Changes (gemini-code-assist): Hello @sinhasubham! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.
Code Review
This pull request addresses a critical thread and memory leak by ensuring the metrics subsystem is initialized only once. The approach uses a global flag to track initialization, which is a good start. However, the current implementation is not thread-safe and could still lead to multiple initializations under concurrent Client instantiations. I've provided a suggestion to add a threading.Lock to make the initialization truly a singleton. Additionally, I found a minor code duplication in one of the tests.
Force-pushed 36e9f70 to 1ef3da1, then 1ef3da1 to 67c682e.
google/cloud/spanner_v1/client.py
Outdated
```python
if not _metrics_monitor_initialized:
    with _metrics_monitor_lock:
        if not _metrics_monitor_initialized:
            meter_provider = metrics.NoOpMeterProvider()
            try:
                if not _get_spanner_emulator_host():
                    meter_provider = MeterProvider(
                        metric_readers=[
                            PeriodicExportingMetricReader(
                                CloudMonitoringMetricsExporter(
                                    project_id=project,
                                    credentials=credentials,
                                ),
                                export_interval_millis=METRIC_EXPORT_INTERVAL_MS,
                            ),
                        ]
                    )
                metrics.set_meter_provider(meter_provider)
                SpannerMetricsTracerFactory()
                _metrics_monitor_initialized = True
            except Exception as e:
                log.warning(
                    "Failed to initialize Spanner built-in metrics. Error: %s",
                    e,
                )
```
nit: could this potentially be moved to a separate function to keep the init function a bit shorter/cleaner?
tests/unit/test_metrics.py
Outdated
```python
client = Client(
    project="test",
    credentials=TestCredentials(),
    # client_options={"api_endpoint": "none"}
```
nit: remove
Force-pushed 519755b to 1341f21.
/gemini review
Code Review
This pull request provides a crucial fix for a thread and memory leak issue caused by repeated initialization of OpenTelemetry providers. The implementation correctly uses a singleton pattern with double-checked locking for the metrics provider initialization, ensuring it only runs once. The switch from a global mutable tracer to a thread-safe contextvars.ContextVar is well-executed and effectively isolates metric tracing across concurrent operations, as demonstrated by the new concurrency tests. The accompanying fix for batch commit idempotency is also a valuable improvement. The test suite has been significantly enhanced with new concurrency tests and refactoring of existing ones, providing strong confidence in the correctness of these changes. I have one suggestion for improving the API clarity in the SpannerMetricsTracerFactory. Overall, this is an excellent and well-tested contribution that addresses a critical issue.
```python
@property
def current_metrics_tracer(self) -> MetricsTracer:
    return SpannerMetricsTracerFactory._current_metrics_tracer_ctx.get()
```
You've introduced both a static method get_current_tracer() and a property current_metrics_tracer that do the same thing: retrieve the tracer from the context variable.
The property current_metrics_tracer is problematic because it replaces a class attribute with an instance property. Any code that previously accessed SpannerMetricsTracerFactory.current_metrics_tracer will now get a property object instead of the tracer, which is a breaking change and could lead to subtle bugs.
Since all new code in this PR uses the clear and unambiguous static method get_current_tracer(), I recommend removing the redundant and potentially confusing current_metrics_tracer property. This will make the API cleaner and prevent accidental misuse.
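The pitfall the reviewer describes can be shown in a few lines. This is a hedged, self-contained sketch (`TracerFactory` and its members are illustrative stand-ins for the real class): accessing an instance property through the class yields the `property` object itself, while a static accessor over the `ContextVar` is unambiguous.

```python
from contextvars import ContextVar

class TracerFactory:
    # Context variable holding the per-context tracer (illustrative name).
    _current_tracer_ctx: ContextVar = ContextVar("current_tracer", default=None)

    @staticmethod
    def get_current_tracer():
        """Unambiguous accessor: works on the class or an instance."""
        return TracerFactory._current_tracer_ctx.get()

    @property
    def current_tracer(self):
        """Instance property: class-level access returns the property object."""
        return TracerFactory._current_tracer_ctx.get()

TracerFactory._current_tracer_ctx.set("tracer-A")
print(TracerFactory.get_current_tracer())   # tracer-A
print(type(TracerFactory.current_tracer))   # <class 'property'> -- the trap
```

Code migrated from the old class-attribute style would silently receive a `property` object instead of the tracer, which is why keeping only the static method is the safer API.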
Summary:
This PR fixes a critical memory and thread leak in the google-cloud-spanner client when built-in metrics are enabled (default behavior).
Previously, the Client constructor unconditionally initialized a new OpenTelemetry MeterProvider and PeriodicExportingMetricReader on every instantiation. Each reader spawned a new background thread for metric exporting that was never cleaned up or reused. In environments where Client objects are frequently created (e.g., Cloud Functions, web servers, or data pipelines), this caused a linear accumulation of threads, leading to RuntimeError: can't start new thread and OOM crashes.
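The failure mode above can be reproduced with a stdlib-only toy model (this sketch does not use the real client or OpenTelemetry; `LeakyClient` merely imitates the old behavior of starting one never-stopped exporter thread per constructor call):

```python
import threading

_stop = threading.Event()

class LeakyClient:
    """Toy stand-in for the old behavior: every instance starts its own
    background 'exporter' thread that is never stopped or reused."""
    def __init__(self):
        t = threading.Thread(target=_stop.wait, daemon=True)
        t.start()

before = threading.active_count()
for _ in range(50):        # e.g. one Client per request in a web server
    LeakyClient()
after = threading.active_count()
print(after - before)      # 50: thread count grows linearly with instances
_stop.set()                # let the demo threads exit
```

Under a per-request instantiation pattern this growth is unbounded, which matches the reported `RuntimeError: can't start new thread` and OOM symptoms.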
Fix Implementation:
Refactored Metrics Initialization (Thread Safety & Memory Leak Fix):
Implemented a Singleton pattern for the OpenTelemetry MeterProvider using threading.Lock to prevent infinite background thread creation (memory leak).
Moved metrics initialization logic to a cleaner helper function _initialize_metrics in client.py.
Replaced global mutable state in SpannerMetricsTracerFactory with contextvars.ContextVar to ensure thread-safe, isolated metric tracing across concurrent requests.
Updated MetricsInterceptor and MetricsCapture to correctly use the thread-local tracer.
Fixed Batch.commit Idempotency (AlreadyExists Regression):
Modified Batch.commit to initialize nth_request and the attempt counter outside the retry loop.
This ensures that retries (e.g., on ABORTED) reuse the same Request ID, allowing Cloud Spanner to correctly deduplicate requests and preventing spurious AlreadyExists (409) errors.
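The retry/identity ordering described above can be sketched as follows (a minimal model, not the library's actual code; `commit_with_retry`, `_nth_request_counter`, and the simulated ABORTED handling are illustrative):

```python
import itertools

_nth_request_counter = itertools.count(1)  # illustrative stand-in for nth_request

def commit_with_retry(max_aborts=2):
    """Fixed pattern: request identity is assigned ONCE, before the retry
    loop, so ABORTED retries reuse it and the server can deduplicate."""
    nth_request = next(_nth_request_counter)  # outside the loop
    attempt = 0
    ids_sent = []
    while True:
        attempt += 1
        ids_sent.append((nth_request, attempt))  # same nth_request each retry
        if attempt <= max_aborts:
            continue  # simulate ABORTED; retry WITHOUT minting a new request ID
        return ids_sent

sent = commit_with_retry()
print(sent)  # [(1, 1), (1, 2), (1, 3)] -- one request ID across three attempts
```

Had `nth_request` been drawn inside the loop, each retry would look like a fresh commit to the server, producing the spurious `AlreadyExists` errors the PR fixes.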
Verification:
Added tests/unit/test_metrics_concurrency.py to verify tracer isolation and thread safety.
Cleaned up tests/unit/test_metrics.py and consolidated mocks in conftest.py.