Conversation
Two components for turning user downvotes into knowledge base improvements:
1. Event-driven (Postgres trigger + Prefect flow):
- New trigger on user_like_snippets: on first downvote, immediately
hides the snippet and queues it in downvote_review_queue
- New Prefect flow (downvote_review) polls the queue and runs the
full Stage 4 review pipeline (KB researcher + web researcher +
KB updater agents) to create corrective KB entries
- Registered as "downvote_review" process group in main.py
2. On-demand skill (/downvote-review):
- Claude Code command for manual batch reviews
- Queries unprocessed downvotes, groups by theme, researches
correct facts, creates KB entries, generates reports
Database changes (already applied to production):
- New table: downvote_review_queue (with RLS, indexes, unique constraint)
- New trigger: on_downvote_queue_review_trigger
- New SupabaseClient methods for queue CRUD and system-level hiding
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
No actionable comments were generated in the recent review. 🎉
Walkthrough

Adds a new Stage 4 "Downvote Review" flow: DB queue and trigger to capture downvoted snippets, Supabase client helpers for queue lifecycle, an async processor that claims and processes queued items (hides the snippet, runs KB research/update), and integration into the main pipeline as a repeatable deployment.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Pipeline as Processing Pipeline
    participant Flow as downvote_review Flow
    participant Supabase as SupabaseClient
    participant DB as Database
    participant Snip as Snippet Store
    participant Processor as process_snippet

    Pipeline->>Flow: start downvote_review deployment
    loop repeat loop
        Flow->>Supabase: get_pending_downvote_reviews(limit=1)
        Supabase->>DB: SELECT ... FROM downvote_review_queue WHERE status='pending' LIMIT 1
        DB-->>Supabase: pending entry / none
        Supabase-->>Flow: return queue entry
        alt entry found
            Flow->>Supabase: claim_downvote_review(queue_id)
            Supabase->>DB: UPDATE status='processing' WHERE id=queue_id
            DB-->>Supabase: update OK
            Flow->>Snip: load snippet(snippet_id)
            Snip-->>Flow: snippet data
            Flow->>Supabase: hide_snippet_by_system(snippet_id)
            Supabase->>DB: INSERT INTO user_hide_snippets ...
            DB-->>Supabase: hide OK
            Flow->>Processor: process_snippet(snippet + prompts)
            Processor-->>Flow: kb entries / result
            alt success
                Flow->>Supabase: complete_downvote_review(queue_id, kb_entries_created=1)
                Supabase->>DB: UPDATE status='completed', processed_at=NOW()
            else failure
                Flow->>Supabase: fail_downvote_review(queue_id, error_message)
                Supabase->>DB: UPDATE status='error', error_message=...
            end
        else no entry
            Flow->>Flow: sleep or exit (based on repeat)
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Summary of Changes

This pull request introduces a comprehensive automated system to manage user downvotes on snippets. It streamlines the process of identifying potentially problematic content by hiding snippets immediately upon a single downvote and initiating an automated review workflow. The system integrates a new database queue, an event-driven Prefect flow for continuous processing, and an on-demand Claude Code command for manual batch reviews, ensuring that user feedback is acted on swiftly and efficiently to maintain content quality.
Code Review
This pull request introduces a comprehensive automated downvote review system, including a new database table, a Postgres trigger for immediate snippet hiding and queueing, a Prefect flow for processing, and a Claude skill for on-demand reviews. The changes are well-structured and cover both the event-driven and manual paths for handling downvoted snippets. My review focuses on improving the robustness and maintainability of the new system. I've identified a potential logic issue in the Claude skill's query, a hardcoded value that should be dynamic, a risky environment variable modification, a hardcoded secret, and a minor code style violation. Addressing these points will enhance the reliability and security of this new feature.
```python
supabase_client.complete_downvote_review(
    queue_entry["id"], kb_entries_created=1
)
```
The number of created knowledge base entries is hardcoded to 1. However, the process_snippet agentic pipeline might create zero, one, or multiple KB entries. This hardcoded value will result in inaccurate data being stored in the downvote_review_queue table.
To fix this, the process_snippet function (and its underlying Stage4Executor) should return the actual count of KB entries created during its execution. This count can then be passed to complete_downvote_review.
```diff
-supabase_client.complete_downvote_review(
-    queue_entry["id"], kb_entries_created=1
-)
+kb_entries_created_count = await process_snippet(supabase_client, snippet, prompt_versions)
+supabase_client.complete_downvote_review(
+    queue_entry["id"], kb_entries_created=kb_entries_created_count
+)
```
```sql
AND NOT EXISTS (
    SELECT 1 FROM user_hide_snippets uhs WHERE uhs.snippet = s.id
)
```
The condition NOT EXISTS (SELECT 1 FROM user_hide_snippets ...) seems to contradict the new trigger's behavior. The on_downvote_queue_review trigger immediately hides a snippet upon a downvote by inserting it into user_hide_snippets. Consequently, this query will not find any snippets that are downvoted after the trigger is active, potentially rendering this part of the on-demand skill ineffective for new downvotes.
To ensure this command can process all relevant downvoted snippets, including those already hidden by the trigger, I recommend removing this NOT EXISTS clause. The subsequent "Hide snippets" step already uses ON CONFLICT DO NOTHING, which makes it safe to run even for snippets that are already hidden.
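For reference, the idempotent hide step that makes the re-run safe can be sketched as follows (a minimal sketch using the table and conflict target named in the PR; the placeholder snippet ID is illustrative):

```sql
-- Sketch: hiding is idempotent, so re-hiding an already-hidden snippet is a no-op.
INSERT INTO user_hide_snippets (snippet)
VALUES ('00000000-0000-0000-0000-000000000000')  -- placeholder snippet id
ON CONFLICT (snippet) DO NOTHING;
```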
```markdown
## Notes

- The VERDAD Supabase project ID is `dzujjhzgzguciwryzwlx`
```
Hardcoding the Supabase project ID here is a security and maintainability concern. While project IDs are not typically as sensitive as API keys, it's better practice to avoid hardcoding such values. If this command is executed in an environment where environment variables are available, it would be more secure to fetch this ID from an environment variable. This would also make it easier to point the skill to different environments (e.g., staging, production) without changing the command definition.
```diff
-- The VERDAD Supabase project ID is `dzujjhzgzguciwryzwlx`
+- The VERDAD Supabase project ID is `${VERDAD_SUPABASE_PROJECT_ID}`
```
```python
       web researcher + KB updater agents)
    4. Marks the queue entry as completed or errored
    """
    os.environ["GOOGLE_API_KEY"] = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
```
Modifying os.environ at runtime is generally considered an unsafe practice. It creates a global side effect that can lead to unpredictable behavior in other parts of the application that might rely on the GOOGLE_API_KEY environment variable. It also makes the code harder to reason about and debug.
A safer approach would be to pass the API key explicitly to the client or function that requires it, rather than altering the environment. This would make the dependency clear and avoid unintended consequences.
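A minimal sketch of that explicit-passing approach, assuming the downstream executor can accept the key as a parameter (the helper name and the `Stage4Executor(api_key=...)` call are hypothetical, not the PR's actual API):

```python
import os


def resolve_google_api_key() -> str:
    """Fetch the paid Gemini key once, failing fast with a clear error.

    Hypothetical helper: the real code instead mutates os.environ["GOOGLE_API_KEY"]
    as a process-wide side effect.
    """
    key = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
    if not key:
        raise RuntimeError("GOOGLE_GEMINI_PAID_KEY is not set")
    return key


# The key can then be passed explicitly to whatever needs it, e.g. a
# (hypothetical) Stage4Executor(api_key=resolve_google_api_key()),
# keeping the dependency visible and the environment untouched.
```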
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 038b01a8d6
```sql
CREATE TRIGGER on_downvote_queue_review_trigger
    AFTER INSERT ON user_like_snippets
    FOR EACH ROW
    EXECUTE FUNCTION on_downvote_queue_review();
```
Fire downvote trigger on vote updates
This trigger only runs on AFTER INSERT, but the like_snippet function writes with INSERT ... ON CONFLICT DO UPDATE; when a user changes an existing vote from 1 or 0 to -1, only an UPDATE happens. In that common path, snippets will not be auto-hidden and no row will be queued for review, so the new “hide on first downvote” behavior is skipped for previously voted snippets.
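One way to cover the update path is a second trigger on UPDATE, since a WHEN clause on a combined INSERT OR UPDATE trigger cannot reference OLD for inserted rows. A sketch (the trigger name is illustrative; the vote column is assumed to be `value`, as in the skill's query):

```sql
-- Sketch: also fire when an existing vote is changed to a downvote.
CREATE TRIGGER on_downvote_queue_review_update_trigger
    AFTER UPDATE OF value ON user_like_snippets
    FOR EACH ROW
    WHEN (OLD.value IS DISTINCT FROM NEW.value AND NEW.value = -1)
    EXECUTE FUNCTION on_downvote_queue_review();
```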
```python
await process_snippet(supabase_client, snippet, prompt_versions)
supabase_client.complete_downvote_review(
    queue_entry["id"], kb_entries_created=1
)
```
Fail queue items when Stage 4 processing fails
downvote_review marks the queue entry completed immediately after process_snippet, but process_snippet catches its own exceptions and does not re-raise (it only sets snippet status to Error). That means failed reviews still get status='completed' here and are never retried, causing silent data loss in the downvote review queue.
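One way to surface failures, sketched under the assumption that the processing coroutine is refactored to raise on error and return the KB entries it created (all names here are illustrative, not the PR's actual API):

```python
async def run_review_with_outcome(process_fn, *args):
    """Run a Stage 4 processing coroutine and report a structured outcome.

    Illustrative only: assumes process_fn raises on failure and returns the
    KB entries it created. The current process_snippet swallows exceptions
    and returns nothing, so it would need a matching refactor.
    """
    try:
        kb_entries = await process_fn(*args)
        return {
            "success": True,
            "kb_entries_created": len(kb_entries or []),
            "error": None,
        }
    except Exception as exc:  # surface the failure so the queue row can be failed
        return {"success": False, "kb_entries_created": 0, "error": str(exc)}
```

The caller can then route the outcome to `complete_downvote_review` or `fail_downvote_review` instead of unconditionally marking the row completed.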
Pull request overview

This PR introduces an automated "downvote → hide + KB review" pipeline by adding a Supabase queue table + trigger, a new Stage 4 Prefect flow to process queued snippets, and an on-demand Claude Code command for manual/batch processing.

Changes:
- Added `downvote_review_queue` table (status tracking + RLS) to store snippets awaiting downvote-driven KB review.
- Added `on_downvote_queue_review` trigger to hide a snippet and enqueue it on downvote.
- Added a new Stage 4 Prefect flow (`downvote_review`) plus process-group registration and a Claude Code `/downvote-review` command.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| `supabase/database/sql/on_downvote_queue_review.sql` | New trigger function to hide + queue on downvote. |
| `supabase/database/sql/create_downvote_review_queue.sql` | New queue table with status fields, index, grants, and RLS policies. |
| `src/processing_pipeline/supabase_utils.py` | Adds queue CRUD helpers and a system hide helper. |
| `src/processing_pipeline/stage_4/downvote_flows.py` | New Prefect flow that polls/claims queue entries and runs Stage 4 review. |
| `src/processing_pipeline/stage_4/__init__.py` | Exports the new downvote flow. |
| `src/processing_pipeline/main.py` | Registers a new downvote_review process group for Prefect serve. |
| `.claude/commands/downvote-review.md` | Adds an on-demand command workflow for batch downvote review + reporting. |
```sql
GRANT ALL ON TABLE public.downvote_review_queue TO service_role;
GRANT SELECT ON TABLE public.downvote_review_queue TO authenticated;

CREATE POLICY "Enable full access for service role"
    ON public.downvote_review_queue FOR ALL TO service_role USING (true);

CREATE POLICY "Enable read access for authenticated users"
    ON public.downvote_review_queue FOR SELECT TO authenticated USING (true);
```
```python
def get_pending_downvote_reviews(self, limit=10):
    """Fetch pending downvote review queue entries."""
    response = (
        self.client.table("downvote_review_queue")
        .select("*")
        .eq("status", "pending")
        .order("created_at")
        .limit(limit)
        .execute()
    )
    return response.data if response.data else []

def claim_downvote_review(self, queue_id):
    """Atomically claim a downvote review entry for processing."""
    response = (
        self.client.table("downvote_review_queue")
        .update({"status": "processing"})
        .eq("id", queue_id)
        .eq("status", "pending")
        .execute()
    )
    return response.data[0] if response.data else None

def complete_downvote_review(self, queue_id, kb_entries_created=0):
    """Mark a downvote review as completed."""
    from datetime import datetime, timezone

    response = (
        self.client.table("downvote_review_queue")
        .update(
            {
                "status": "completed",
                "processed_at": datetime.now(timezone.utc).isoformat(),
                "kb_entries_created": kb_entries_created,
            }
        )
        .eq("id", queue_id)
        .execute()
    )
    return response.data[0] if response.data else None

def fail_downvote_review(self, queue_id, error_message):
    """Mark a downvote review as failed."""
    response = (
        self.client.table("downvote_review_queue")
        .update(
            {
                "status": "error",
                "error_message": error_message,
            }
        )
        .eq("id", queue_id)
        .execute()
    )
    return response.data[0] if response.data else None

def hide_snippet_by_system(self, snippet_id):
    """Hide a snippet programmatically (no auth/admin check)."""
    response = (
        self.client.table("user_hide_snippets")
        .upsert({"snippet": snippet_id}, on_conflict="snippet")
        .execute()
    )
    return response.data[0] if response.data else None
```
```python
while True:
    pending = supabase_client.get_pending_downvote_reviews(limit=1)
    if not pending:
        if not repeat:
            print("No pending downvote reviews. Exiting.")
            break
        print("No pending downvote reviews. Sleeping 30s...")
        await asyncio.sleep(30)
        continue

    queue_entry = pending[0]
    claimed = supabase_client.claim_downvote_review(queue_entry["id"])
    if not claimed:
        print(f"Queue entry {queue_entry['id']} already claimed. Skipping.")
        continue
```
```diff
 from processing_pipeline.stage_3 import in_depth_analysis
 from processing_pipeline.stage_5 import embedding
-from processing_pipeline.stage_4 import analysis_review
+from processing_pipeline.stage_4 import analysis_review, downvote_review  # noqa: F401
```
Registers the downvote_review Prefect flow as a process group with 2GB memory and 4 CPUs (same tier as analysis_review). Concurrency limit is 1 since it processes one snippet at a time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `.claude/commands/downvote-review.md` (around lines 24-30):
- The WHERE clause currently excludes rows present in `user_hide_snippets`, which filters out items newly queued by the trigger. Update the query so that snippets with a corresponding `downvote_review_queue` record (`drq`) are not excluded — e.g., remove the `NOT EXISTS (...)` filter, or change it to allow rows when `drq.status` is in `('pending', 'error')` or a `drq.id` exists — so the query returns items in `downvote_review_queue` while still excluding other hidden snippets.

In `src/processing_pipeline/stage_4/downvote_flows.py`:
- Around lines 94-97: `process_snippet` currently swallows errors and returns nothing, but the flow calls `supabase_client.complete_downvote_review` unconditionally with `kb_entries_created=1`. Change `process_snippet` (in `src/processing_pipeline/stage_4/tasks.py`) to return a structured outcome (e.g. `{success: bool, kb_entries_created: int, error?}`) instead of swallowing failures, then update the caller to call `complete_downvote_review` with the real count only on success, and a failure path (such as `fail_downvote_review`) otherwise.
- Line 31: Guard the assignment to `os.environ["GOOGLE_API_KEY"]` against a missing `GOOGLE_GEMINI_PAID_KEY` to avoid assigning `None`: only set the variable when the lookup is truthy; otherwise emit a clear error via the module's logger (or raise a descriptive exception) so the worker fails with an actionable message.

In `src/processing_pipeline/supabase_utils.py` (around lines 792-813):
- `get_pending_downvote_reviews` only selects rows with `status="pending"`, and `claim_downvote_review` flips status to `"processing"` with no lease, so claimed rows can be stranded. Add claim metadata (e.g. `claimed_at` and `claim_expires_at`) to the `downvote_review_queue` schema; have `claim_downvote_review` set `status="processing"`, `claimed_at=now()`, `claim_expires_at=now()+TTL`, and only update when the row is pending or its claim has expired; update `get_pending_downvote_reviews` to also return expired `processing` rows so pollers can reclaim them; and ensure finalizers set a terminal status and clear the claim fields.

In `supabase/database/sql/create_downvote_review_queue.sql` (around lines 26-33):
- The `GRANT SELECT` and the "Enable read access for authenticated users" policy expose sensitive columns (`downvoted_by`, `error_message`, moderation state) to all authenticated users. Revoke the broad grant and either create a sanitized view (e.g. `downvote_review_queue_sanitized`) that exposes only non-sensitive columns, or tighten the policy to limit rows and mask fields (for example `USING (auth.uid() = downvoted_by)`), while keeping the service_role policies and grants intact.

In `supabase/database/sql/on_downvote_queue_review.sql`:
- Around lines 25-28: The trigger `on_downvote_queue_review_trigger` only fires on INSERTs, so it misses users changing an existing vote to -1. Fire it on INSERT OR UPDATE of `user_like_snippets` with a condition so it only runs when the new vote is -1 and the old vote (if any) was different.
- Around lines 5-23: Make `on_downvote_queue_review` `SECURITY DEFINER` and handle UPDATE events where a vote changes to a downvote: check `TG_OP` so the inserts run when `(TG_OP = 'INSERT' AND NEW.value = -1) OR (TG_OP = 'UPDATE' AND OLD.value IS DISTINCT FROM NEW.value AND NEW.value = -1)`; keep the same `INSERT ... ON CONFLICT` logic for `user_hide_snippets` and `downvote_review_queue` so duplicates are avoided, and update the trigger to fire `ON INSERT OR UPDATE OF value`.
ℹ️ Review info: Configuration: Organization UI | Review profile: CHILL | Plan: Pro | Run ID: f86701e7-ceaf-4c10-bfc0-e8bf9872fa56

📒 Files selected for processing (7):
- .claude/commands/downvote-review.md
- src/processing_pipeline/main.py
- src/processing_pipeline/stage_4/__init__.py
- src/processing_pipeline/stage_4/downvote_flows.py
- src/processing_pipeline/supabase_utils.py
- supabase/database/sql/create_downvote_review_queue.sql
- supabase/database/sql/on_downvote_queue_review.sql
```sql
WHERE uls.value = -1
AND (drq.status IS NULL OR drq.status = 'pending' OR drq.status = 'error')
AND NOT EXISTS (
    SELECT 1 FROM user_hide_snippets uhs WHERE uhs.snippet = s.id
)
GROUP BY s.id, drq.status
ORDER BY s.created_at DESC;
```
This query filters out the very rows the new trigger creates.
The new trigger hides snippets on the first downvote, so AND NOT EXISTS (SELECT 1 FROM user_hide_snippets ...) removes every freshly queued item. This command will report “no unreviewed snippets” even while downvote_review_queue still has pending work.
Suggested fix

```diff
-SELECT
-    s.id AS snippet_id,
-    s.title,
-    s.explanation,
-    s.disinformation_categories,
-    s.confidence_scores,
-    s.created_at,
-    drq.status AS queue_status,
-    COUNT(uls.id) FILTER (WHERE uls.value = -1) AS downvote_count
-FROM snippets s
-JOIN user_like_snippets uls ON uls.snippet = s.id
-LEFT JOIN downvote_review_queue drq ON drq.snippet_id = s.id
-WHERE uls.value = -1
-AND (drq.status IS NULL OR drq.status = 'pending' OR drq.status = 'error')
-AND NOT EXISTS (
-    SELECT 1 FROM user_hide_snippets uhs WHERE uhs.snippet = s.id
-)
-GROUP BY s.id, drq.status
-ORDER BY s.created_at DESC;
+SELECT
+    s.id AS snippet_id,
+    s.title,
+    s.explanation,
+    s.disinformation_categories,
+    s.confidence_scores,
+    s.created_at,
+    drq.status AS queue_status,
+    drq.downvoted_at
+FROM downvote_review_queue drq
+JOIN snippets s ON s.id = drq.snippet_id
+WHERE drq.status IN ('pending', 'error')
+ORDER BY drq.downvoted_at DESC;
```
```python
       web researcher + KB updater agents)
    4. Marks the queue entry as completed or errored
    """
    os.environ["GOOGLE_API_KEY"] = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
```
Guard the Google key before writing to os.environ.
os.environ["GOOGLE_API_KEY"] = None raises TypeError, so a missing GOOGLE_GEMINI_PAID_KEY crashes the worker before it can emit a useful error.
Suggested fix

```diff
-os.environ["GOOGLE_API_KEY"] = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
+google_api_key = os.getenv("GOOGLE_GEMINI_PAID_KEY")
+if not google_api_key:
+    raise RuntimeError("GOOGLE_GEMINI_PAID_KEY is not set")
+os.environ["GOOGLE_API_KEY"] = google_api_key
```
```python
await process_snippet(supabase_client, snippet, prompt_versions)
supabase_client.complete_downvote_review(
    queue_entry["id"], kb_entries_created=1
)
```
Don't mark the queue item completed unconditionally.
process_snippet() catches its own failures in src/processing_pipeline/stage_4/tasks.py:104-150 and returns no result, so this branch also runs when Stage 4 actually failed. That both hides failed work under completed and hardcodes kb_entries_created=1 regardless of whether zero, one, or many KB entries were written.
Suggested direction

```diff
-await process_snippet(supabase_client, snippet, prompt_versions)
-supabase_client.complete_downvote_review(
-    queue_entry["id"], kb_entries_created=1
-)
+result = await process_snippet(supabase_client, snippet, prompt_versions)
+if not result["success"]:
+    supabase_client.fail_downvote_review(queue_entry["id"], result["error"])
+    continue
+supabase_client.complete_downvote_review(
+    queue_entry["id"],
+    kb_entries_created=result["kb_entries_created"],
+)
```

This needs a matching change in src/processing_pipeline/stage_4/tasks.py so process_snippet() returns a structured outcome instead of swallowing errors.
```python
def get_pending_downvote_reviews(self, limit=10):
    """Fetch pending downvote review queue entries."""
    response = (
        self.client.table("downvote_review_queue")
        .select("*")
        .eq("status", "pending")
        .order("created_at")
        .limit(limit)
        .execute()
    )
    return response.data if response.data else []

def claim_downvote_review(self, queue_id):
    """Atomically claim a downvote review entry for processing."""
    response = (
        self.client.table("downvote_review_queue")
        .update({"status": "processing"})
        .eq("id", queue_id)
        .eq("status", "pending")
        .execute()
    )
    return response.data[0] if response.data else None
```
Claimed rows can get stranded in processing forever.
get_pending_downvote_reviews() only reads pending, and claim_downvote_review() permanently flips the row to processing with no lease/heartbeat. If the worker dies after claiming, that item is never visible again.
Add a claim timestamp + retry metadata, or reclaim stale processing rows during polling instead of treating processing as terminal.
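A sketch of the lease check such a scheme needs, assuming a `claim_expires_at` column is added as suggested (the column name and TTL are hypothetical):

```python
from datetime import datetime, timedelta, timezone

CLAIM_TTL = timedelta(minutes=30)  # illustrative lease length


def is_claimable(row, now=None):
    """Decide whether a downvote_review_queue row may be (re)claimed.

    Assumes a claim_expires_at column is added per the review suggestion:
    'processing' rows become claimable again once their lease expires.
    """
    now = now or datetime.now(timezone.utc)
    if row["status"] == "pending":
        return True
    if row["status"] == "processing":
        expires = row.get("claim_expires_at")
        return expires is not None and expires < now
    return False
```

The same predicate, expressed as a SQL filter in the polling query, lets crashed workers' claims be picked up again instead of stranding rows in `processing`.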
```sql
GRANT ALL ON TABLE public.downvote_review_queue TO service_role;
GRANT SELECT ON TABLE public.downvote_review_queue TO authenticated;

CREATE POLICY "Enable full access for service role"
    ON public.downvote_review_queue FOR ALL TO service_role USING (true);

CREATE POLICY "Enable read access for authenticated users"
    ON public.downvote_review_queue FOR SELECT TO authenticated USING (true);
```
Don't expose raw queue rows to every authenticated user.

This table includes `downvoted_by`, `error_message`, and internal moderation state, but the current grant/policy lets any authenticated client read every row. That leaks cross-user moderation telemetry and user identifiers.
Suggested fix

```diff
 GRANT ALL ON TABLE public.downvote_review_queue TO service_role;
-GRANT SELECT ON TABLE public.downvote_review_queue TO authenticated;

 CREATE POLICY "Enable full access for service role"
   ON public.downvote_review_queue FOR ALL TO service_role USING (true);
-
-CREATE POLICY "Enable read access for authenticated users"
-  ON public.downvote_review_queue FOR SELECT TO authenticated USING (true);
```

If the app needs visibility here, expose a sanitized view or a narrowly scoped policy instead of the raw table.
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```sql
GRANT ALL ON TABLE public.downvote_review_queue TO service_role;

CREATE POLICY "Enable full access for service role"
  ON public.downvote_review_queue FOR ALL TO service_role USING (true);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@supabase/database/sql/create_downvote_review_queue.sql` around lines 26 - 33,
The current GRANT SELECT and policy "Enable read access for authenticated users"
on public.downvote_review_queue exposes sensitive columns (downvoted_by,
error_message, moderation state) to all authenticated users; remove or revoke
the broad GRANT/SELECT and replace it with a safe alternative: either create a
sanitized view (e.g., downvote_review_queue_sanitized) that exposes only
non-sensitive columns and grant SELECT on that view, or tighten the policy
"Enable read access for authenticated users" to LIMIT rows and mask fields (for
example USING (auth.uid() = downvoted_by) or return NULL for sensitive columns),
while keeping the service_role policies/GRANTs intact for internal service
usage.
```sql
CREATE OR REPLACE FUNCTION on_downvote_queue_review()
RETURNS TRIGGER AS $$
BEGIN
    -- Only process downvotes (value = -1)
    IF NEW.value = -1 THEN
        -- Immediately hide the snippet
        INSERT INTO user_hide_snippets (snippet)
        VALUES (NEW.snippet)
        ON CONFLICT (snippet) DO NOTHING;

        -- Queue for KB review (UNIQUE constraint prevents duplicates)
        INSERT INTO downvote_review_queue (snippet_id, downvoted_by, downvoted_at)
        VALUES (NEW.snippet, NEW."user", now())
        ON CONFLICT (snippet_id) DO NOTHING;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
🧩 Analysis chain
🌐 Web query:
In PostgreSQL / Supabase, does a trigger function execute with invoker privileges by default, and would inserting into another RLS-enabled table require SECURITY DEFINER when the caller only has SELECT on that target table?
💡 Result:
Yes. By default, PostgreSQL (and Supabase) functions are SECURITY INVOKER, so a trigger function executes with the privileges of the caller (the role whose statement fired the trigger). If that caller (e.g., a limited role like 'authenticated' with only SELECT on the target RLS-enabled table) triggers a function that attempts to INSERT into another RLS-enabled table, and the caller lacks INSERT privileges or the RLS policy does not allow the insert for that role, the statement fails with a permission-denied error. To allow the insert, the trigger function must be defined as SECURITY DEFINER (typically owned by a privileged role like postgres), so it runs with the owner's elevated privileges, bypassing the caller's limitations and RLS checks on the target table.
Citations:
- 1: https://www.postgresql.org/docs/current/sql-createfunction.html
- 2: https://www.postgresql.org/docs/9.6/sql-createfunction.html
- 3: https://stackoverflow.com/questions/77600776/trigger-function-permissions-issue-in-supabase-access-denied-for-function-calle
- 4: https://supabase.com/docs/guides/database/postgres/row-level-security
- 5: https://www.reddit.com/r/PostgreSQL/comments/1ir94tn/what_role_is_used_when_a_function_that_are_run/
🏁 Script executed:

```shell
# Find the trigger definition that uses on_downvote_queue_review
rg "on_downvote_queue_review|CREATE TRIGGER" --type sql -B 2 -A 2
```

Repository: PublicDataWorks/verdad

🏁 Script executed:

```shell
# Check for user_like_snippets table and related vote logic
rg "user_like_snippets|downvote" --type sql -l
```

🏁 Script executed:

```shell
# Look at the actual structure of user_like_snippets
rg "CREATE TABLE.*user_like_snippets" --type sql -A 20
```

🏁 Script executed:

```shell
# Find user_like_snippets table definition
rg "CREATE TABLE.*user_like_snippets" --type sql -A 30
```

🏁 Script executed:

```shell
# Check the like_snippet_function to see who can call it
cat supabase/database/sql/like_snippet_function.sql
```

🏁 Script executed:

```shell
# Check permissions/grants on user_like_snippets
rg "user_like_snippets|GRANT" --type sql -B 1 -A 1 | head -50
```

🏁 Script executed:

```shell
# Check the create_downvote_review_queue.sql for permissions
cat supabase/database/sql/create_downvote_review_queue.sql
```

🏁 Script executed:

```shell
# Check the update_snippet_hidden_status.sql to see how it handles UPDATE
cat supabase/database/sql/update_snippet_hidden_status.sql
```

🏁 Script executed:

```shell
# Check if there's any other handling for upvote-to-downvote transitions
rg "downvote_review_queue|on_downvote_queue_review" --type sql -B 5 -A 5
```

🏁 Script executed:

```shell
# Verify the complete trigger file to see if there's UPDATE handling
cat supabase/database/sql/on_downvote_queue_review.sql
```
Add SECURITY DEFINER and handle UPDATE events on vote changes.

The function writes to `downvote_review_queue`, which grants SELECT only to `authenticated` in `supabase/database/sql/create_downvote_review_queue.sql`. Without `SECURITY DEFINER`, an authenticated user's downvote will fail with a permission error when the trigger attempts the INSERT on line 16.

Additionally, the trigger fires only on INSERT, but `like_snippet_function` uses `ON CONFLICT ... DO UPDATE` when changing vote values. If a user changes from upvote to downvote, that UPDATE event won't trigger the queue insertion.
Suggested fix

```diff
-CREATE OR REPLACE FUNCTION on_downvote_queue_review()
-RETURNS TRIGGER AS $$
+CREATE OR REPLACE FUNCTION public.on_downvote_queue_review()
+RETURNS TRIGGER
+LANGUAGE plpgsql
+SECURITY DEFINER
+SET search_path = public
+AS $$
 BEGIN
     -- Only process downvotes (value = -1)
     IF NEW.value = -1 THEN
         -- Immediately hide the snippet
-        INSERT INTO user_hide_snippets (snippet)
+        INSERT INTO public.user_hide_snippets (snippet)
         VALUES (NEW.snippet)
         ON CONFLICT (snippet) DO NOTHING;

         -- Queue for KB review (UNIQUE constraint prevents duplicates)
-        INSERT INTO downvote_review_queue (snippet_id, downvoted_by, downvoted_at)
+        INSERT INTO public.downvote_review_queue (snippet_id, downvoted_by, downvoted_at)
         VALUES (NEW.snippet, NEW."user", now())
         ON CONFLICT (snippet_id) DO NOTHING;
     END IF;

     RETURN NEW;
 END;
-$$ LANGUAGE plpgsql;
+$$;

 CREATE TRIGGER on_downvote_queue_review_trigger
-AFTER INSERT ON user_like_snippets
+AFTER INSERT OR UPDATE ON user_like_snippets
 FOR EACH ROW
 EXECUTE FUNCTION on_downvote_queue_review();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@supabase/database/sql/on_downvote_queue_review.sql` around lines 5 - 23,
Modify the on_downvote_queue_review function to be SECURITY DEFINER and to
handle UPDATE events where a vote changes to a downvote: declare the function
with SECURITY DEFINER, then in the body check TG_OP and values so the
queue/hidden-snippet inserts run when (TG_OP = 'INSERT' AND NEW.value = -1) OR
(TG_OP = 'UPDATE' AND OLD.value IS DISTINCT FROM NEW.value AND NEW.value = -1);
keep the same INSERT ... ON CONFLICT logic for user_hide_snippets and
downvote_review_queue so duplicates are avoided. Ensure you update the trigger
to fire ON INSERT OR UPDATE OF value so updates that flip a vote also invoke
on_downvote_queue_review.
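The combined INSERT/UPDATE condition spelled out above is easy to get subtly wrong, so it can help to model the predicate in plain Python first — a sketch, not the plpgsql itself (`should_queue` is a hypothetical helper; `None` stands in for SQL NULL):

```python
def should_queue(tg_op, old_value, new_value):
    """Python model of the proposed trigger condition:

    (TG_OP = 'INSERT' AND NEW.value = -1)
    OR (TG_OP = 'UPDATE' AND OLD.value IS DISTINCT FROM NEW.value
        AND NEW.value = -1)

    Python's `!=` matches IS DISTINCT FROM semantics here because
    `None != -1` evaluates to True, just as NULL IS DISTINCT FROM -1.
    """
    if tg_op == "INSERT":
        return new_value == -1
    if tg_op == "UPDATE":
        return old_value != new_value and new_value == -1
    return False
```

The key cases: a fresh downvote queues, an upvote flipped to a downvote queues, and a no-op UPDATE that leaves the value at -1 does not re-queue (the ON CONFLICT clauses would also absorb that, but filtering early avoids pointless trigger work).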
```sql
CREATE TRIGGER on_downvote_queue_review_trigger
AFTER INSERT ON user_like_snippets
FOR EACH ROW
EXECUTE FUNCTION on_downvote_queue_review();
```
Handle vote changes to -1, not just fresh inserts.

`update_snippet_hidden_status` already runs on INSERT OR UPDATE, but this trigger only runs on INSERT. If a user changes an existing vote from 0 or 1 to -1, the snippet is never hidden and never queued for review.
Suggested fix

```diff
 CREATE TRIGGER on_downvote_queue_review_trigger
-AFTER INSERT ON user_like_snippets
+AFTER INSERT OR UPDATE ON user_like_snippets
 FOR EACH ROW
 EXECUTE FUNCTION on_downvote_queue_review();
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```sql
CREATE TRIGGER on_downvote_queue_review_trigger
AFTER INSERT OR UPDATE ON user_like_snippets
FOR EACH ROW
EXECUTE FUNCTION on_downvote_queue_review();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@supabase/database/sql/on_downvote_queue_review.sql` around lines 25 - 28,
Trigger on_downvote_queue_review_trigger only fires on INSERTs so it misses
users changing a vote to -1; update the trigger to fire on INSERT OR UPDATE for
table user_like_snippets and add a WHEN condition so it only runs the
on_downvote_queue_review() function when NEW.value = -1 and (OLD is null OR
OLD.value != -1) to catch both fresh downvotes and changes to -1 (referencing
the trigger name on_downvote_queue_review_trigger, target table
user_like_snippets, and function on_downvote_queue_review()).
Summary

- Event-driven review (Postgres trigger + Prefect flow): on the first downvote, a new trigger hides the snippet and queues it; a new `downvote_review` Prefect flow turns queued downvotes into corrective KB entries
- On-demand skill (`/downvote-review`): a Claude Code command for batch review of accumulated downvotes with themed grouping, research, verification, and Slack reporting
- New `downvote_review_queue` table with RLS, deduplication via UNIQUE constraint, and status tracking (pending → processing → completed/error)

How it works

Event-driven path (automatic):

1. A user downvotes a snippet and the `on_downvote_queue_review` trigger fires
2. The trigger immediately hides the snippet (inserts into `user_hide_snippets`) and queues it for review (inserts into `downvote_review_queue`)
3. The `downvote_review` Prefect flow polls the queue, claims pending entries, and runs the full Stage 4 pipeline (KB researcher + web researcher + reviewer + KB updater agents) with added context that this is a false-positive review

On-demand path (manual):

1. Run `/downvote-review` in Claude Code

Files changed
- `supabase/database/sql/create_downvote_review_queue.sql`
- `supabase/database/sql/on_downvote_queue_review.sql`
- `src/processing_pipeline/supabase_utils.py`
- `src/processing_pipeline/stage_4/downvote_flows.py`
- `src/processing_pipeline/stage_4/__init__.py`
- `src/processing_pipeline/main.py` (registers the `downvote_review` process group)
- `.claude/commands/downvote-review.md`

Behavior change
Previously, snippets were auto-hidden only after 2 downvotes. Now they are hidden on the first downvote (via the new trigger). The old trigger (`update_snippet_hidden_status`) remains in place but becomes a no-op since the snippet is already hidden.

Database migration
The migration has already been applied to the production Supabase instance:

- Table `downvote_review_queue` created
- Trigger `on_downvote_queue_review_trigger` created

```sql
SELECT * FROM downvote_review_queue;
SELECT tgname FROM pg_trigger WHERE tgname = 'on_downvote_queue_review_trigger';
```

Deployment
To activate the Prefect flow, add a new process group to the Fly.io config:
Test plan
- Insert a row into `user_like_snippets` with `value = -1` and confirm the snippet appears in both `user_hide_snippets` and `downvote_review_queue`
- Run the `/downvote-review` skill to confirm the on-demand path works
- Confirm the existing `update_snippet_hidden_status` trigger doesn't conflict (both triggers use `ON CONFLICT DO NOTHING`)

🤖 Generated with Claude Code
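The event-driven path described under "How it works" boils down to a claim → process → finalize loop. The sketch below uses an in-memory stand-in for the Supabase client; `mark_downvote_review_completed` and `mark_downvote_review_error` are hypothetical finalizer names for the status transitions (pending → processing → completed/error), not necessarily the PR's actual methods.

```python
class FakeQueueClient:
    """In-memory stand-in for the SupabaseClient queue helpers (illustrative only)."""
    def __init__(self, rows):
        self.rows = rows

    def get_pending_downvote_reviews(self, limit=10):
        return [r for r in self.rows if r["status"] == "pending"][:limit]

    def claim_downvote_review(self, queue_id):
        # Mimics the atomic UPDATE ... WHERE status = 'pending'
        for r in self.rows:
            if r["id"] == queue_id and r["status"] == "pending":
                r["status"] = "processing"
                return r
        return None

    def mark_downvote_review_completed(self, queue_id):
        self._set(queue_id, "completed", None)

    def mark_downvote_review_error(self, queue_id, message):
        self._set(queue_id, "error", message)

    def _set(self, queue_id, status, message):
        for r in self.rows:
            if r["id"] == queue_id:
                r["status"] = status
                r["error_message"] = message

def run_downvote_review_cycle(client, process_snippet):
    """One poll cycle of the downvote_review flow: fetch, claim, process, finalize."""
    for entry in client.get_pending_downvote_reviews(limit=1):
        claimed = client.claim_downvote_review(entry["id"])
        if claimed is None:
            continue  # another worker claimed it first
        try:
            process_snippet(claimed["snippet_id"])
            client.mark_downvote_review_completed(claimed["id"])
        except Exception as exc:
            client.mark_downvote_review_error(claimed["id"], str(exc))
```

Routing both success and failure through explicit finalizers is what keeps the queue's status column an accurate record; swallowing the exception without writing `error` would strand the row in `processing`.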
Summary by CodeRabbit
New Features
Documentation