Add automated downvote review system #68

Open · rajivsinclair wants to merge 2 commits into main from features/downvote-review-automation
Conversation

rajivsinclair (Contributor) commented Mar 18, 2026

Summary

  • Event-driven downvote handler: A Postgres trigger + Prefect flow that automatically hides snippets on the first downvote and queues them for KB review via the existing Stage 4 agents
  • On-demand skill (/downvote-review): A Claude Code command for batch review of accumulated downvotes with themed grouping, research, verification, and Slack reporting
  • Database infrastructure: New downvote_review_queue table with RLS, deduplication via UNIQUE constraint, and status tracking (pending → processing → completed/error)

How it works

Event-driven path (automatic):

  1. User downvotes a snippet → on_downvote_queue_review trigger fires
  2. Trigger immediately hides the snippet (inserts into user_hide_snippets) and queues it for review (inserts into downvote_review_queue)
  3. The downvote_review Prefect flow polls the queue, claims pending entries, and runs the full Stage 4 pipeline (KB researcher + web researcher + reviewer + KB updater agents) with added context that this is a false-positive review
  4. Queue entry is marked completed with the count of KB entries created
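The claim/complete lifecycle above can be sketched with a minimal in-memory stand-in (the real queue lives in a Supabase table and the flow runs under Prefect; the `DownvoteQueue` class and its method names here are illustrative only, loosely mirroring the PR's helpers):

```python
# In-memory sketch of the queue lifecycle: enqueue on downvote, claim
# (pending -> processing), then complete with a KB-entry count.
class DownvoteQueue:
    def __init__(self):
        self.entries = {}  # id -> {"snippet_id", "status", "kb_entries_created"}
        self._next_id = 1

    def enqueue(self, snippet_id):
        entry_id = self._next_id
        self._next_id += 1
        self.entries[entry_id] = {
            "snippet_id": snippet_id,
            "status": "pending",
            "kb_entries_created": 0,
        }
        return entry_id

    def claim(self, entry_id):
        # Only a pending entry can be claimed; this mirrors the flow's
        # compare-and-set UPDATE ... WHERE status = 'pending'.
        entry = self.entries.get(entry_id)
        if entry and entry["status"] == "pending":
            entry["status"] = "processing"
            return entry
        return None

    def complete(self, entry_id, kb_entries_created):
        entry = self.entries[entry_id]
        entry["status"] = "completed"
        entry["kb_entries_created"] = kb_entries_created


queue = DownvoteQueue()
eid = queue.enqueue(snippet_id="snip-123")
claimed = queue.claim(eid)   # succeeds: entry was pending
second = queue.claim(eid)    # returns None: entry is already processing
queue.complete(eid, kb_entries_created=2)
```

The pending-only guard in `claim` is what lets multiple pollers share one queue without double-processing an entry.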

On-demand path (manual):

  1. Run /downvote-review in Claude Code
  2. Queries for unprocessed downvotes, groups by theme
  3. Researches correct facts with web search, creates verified KB entries
  4. Generates a report and posts to Slack
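The grouping step in the on-demand path might look like the following sketch (the `theme` field and function name are assumptions; the skill's actual queries are defined in .claude/commands/downvote-review.md):

```python
from collections import defaultdict

# Hypothetical theme-grouping step: bucket downvoted snippet ids by a
# "theme" label before researching each group together.
def group_by_theme(snippets):
    groups = defaultdict(list)
    for snippet in snippets:
        groups[snippet.get("theme", "uncategorized")].append(snippet["id"])
    return dict(groups)

downvotes = [
    {"id": 1, "theme": "election dates"},
    {"id": 2, "theme": "health claims"},
    {"id": 3, "theme": "election dates"},
]
groups = group_by_theme(downvotes)
# groups == {"election dates": [1, 3], "health claims": [2]}
```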

Files changed

File Change
supabase/database/sql/create_downvote_review_queue.sql New table with RLS policies
supabase/database/sql/on_downvote_queue_review.sql New trigger: hide + queue on first downvote
src/processing_pipeline/supabase_utils.py 5 new methods for queue CRUD + system hiding
src/processing_pipeline/stage_4/downvote_flows.py New Prefect flow for automated review
src/processing_pipeline/stage_4/__init__.py Export the new flow
src/processing_pipeline/main.py Register downvote_review process group
.claude/commands/downvote-review.md Claude Code skill for on-demand reviews

Behavior change

Previously, snippets were auto-hidden only after 2 downvotes. Now they are hidden on the first downvote (via the new trigger). The old trigger (update_snippet_hidden_status) remains in place but becomes a no-op since the snippet is already hidden.

Database migration

The migration has already been applied to the production Supabase instance:

  • Table downvote_review_queue created
  • Trigger on_downvote_queue_review_trigger created
  • Both can be verified: SELECT * FROM downvote_review_queue; SELECT tgname FROM pg_trigger WHERE tgname = 'on_downvote_queue_review_trigger';

Deployment

To activate the Prefect flow, add a new process group to the Fly.io config:

FLY_PROCESS_GROUP=downvote_review

Test plan

  • Verify trigger fires on downvote: insert a test row into user_like_snippets with value = -1 and confirm the snippet appears in both user_hide_snippets and downvote_review_queue
  • Verify deduplication: downvoting the same snippet twice should not create duplicate queue entries
  • Verify the Prefect flow processes queue entries (requires Google ADK + OpenAI keys in the deployment environment)
  • Run /downvote-review skill to confirm the on-demand path works
  • Verify the old update_snippet_hidden_status trigger doesn't conflict (both triggers use ON CONFLICT DO NOTHING)
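The deduplication check in the test plan can be sketched as follows, with a Python set standing in for the table's UNIQUE constraint and the trigger's ON CONFLICT DO NOTHING:

```python
# Sketch of the dedup behavior under test: a second downvote on the same
# snippet must not create a second queue row. The set models the UNIQUE
# constraint; adding an existing member is a no-op.
def enqueue_on_downvote(queue_rows, snippet_id):
    before = len(queue_rows)
    queue_rows.add(snippet_id)        # models INSERT ... ON CONFLICT DO NOTHING
    return len(queue_rows) > before   # True only if a new row was inserted

rows = set()
assert enqueue_on_downvote(rows, "snip-1") is True   # first downvote queues
assert enqueue_on_downvote(rows, "snip-1") is False  # duplicate is a no-op
```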

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added an automated downvote review pipeline that queues downvoted snippets, hides them from view, runs research/verification, creates KB entries, and updates review status.
  • Documentation

    • Added a detailed guide describing the end-to-end downvote review workflow, data validation rules, reporting, and operational procedures.

Two components for turning user downvotes into knowledge base improvements:

1. Event-driven (Postgres trigger + Prefect flow):
   - New trigger on user_like_snippets: on first downvote, immediately
     hides the snippet and queues it in downvote_review_queue
   - New Prefect flow (downvote_review) polls the queue and runs the
     full Stage 4 review pipeline (KB researcher + web researcher +
     KB updater agents) to create corrective KB entries
   - Registered as "downvote_review" process group in main.py

2. On-demand skill (/downvote-review):
   - Claude Code command for manual batch reviews
   - Queries unprocessed downvotes, groups by theme, researches
     correct facts, creates KB entries, generates reports

Database changes (already applied to production):
   - New table: downvote_review_queue (with RLS, indexes, unique constraint)
   - New trigger: on_downvote_queue_review_trigger
   - New SupabaseClient methods for queue CRUD and system-level hiding

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 18, 2026 23:48
coderabbitai bot commented Mar 18, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6a4be16a-de43-4145-8caf-0fb97bfb178d

📥 Commits

Reviewing files that changed from the base of the PR and between 038b01a and fac2d08.

📒 Files selected for processing (1)
  • fly.processing_worker.toml

Walkthrough

Adds a new Stage 4 "Downvote Review" flow: DB queue and trigger to capture downvoted snippets, Supabase client helpers for queue lifecycle, an async processor that claims and processes queued items (hides snippet, runs KB research/update), and integration into the main pipeline as a repeatable deployment.

Changes

Cohort / File(s) Summary
Documentation & Workflow Guide
\.claude/commands/downvote-review.md
New end-to-end guide describing downvote review workflow, SQL procedures, research/KB steps, validation rules, embeddings backfill, and Slack reporting.
Pipeline Entry & Exports
src/processing_pipeline/main.py, src/processing_pipeline/stage_4/__init__.py
Registers downvote_review as a deployment route in main; exports downvote_review from stage_4 package.
Downvote Review Flow
src/processing_pipeline/stage_4/downvote_flows.py
New async downvote_review(repeat=True) flow that polls the queue, claims entries, hides snippets, prepends downvote context, runs snippet processing, and records completion/failure.
Supabase Queue Helpers
src/processing_pipeline/supabase_utils.py
Adds SupabaseClient methods: get_pending_downvote_reviews, claim_downvote_review, complete_downvote_review, fail_downvote_review, and hide_snippet_by_system. Also minor formatting refactors.
Database Queue & Trigger
supabase/database/sql/create_downvote_review_queue.sql, supabase/database/sql/on_downvote_queue_review.sql
Adds downvote_review_queue table (RLS, unique constraint, status lifecycle), index, grants/policies, and a trigger that hides snippets and enqueues reviews on downvote.
Worker Process Config
fly.processing_worker.toml
Adds downvote_review process entry and a dedicated VM configuration for the new worker.

Sequence Diagram

sequenceDiagram
    participant Pipeline as Processing Pipeline
    participant Flow as downvote_review Flow
    participant Supabase as SupabaseClient
    participant DB as Database
    participant Snip as Snippet Store
    participant Processor as process_snippet

    Pipeline->>Flow: start downvote_review deployment
    loop repeat loop
        Flow->>Supabase: get_pending_downvote_reviews(limit=1)
        Supabase->>DB: SELECT ... FROM downvote_review_queue WHERE status='pending' LIMIT 1
        DB-->>Supabase: pending entry / none
        Supabase-->>Flow: return queue entry
        alt entry found
            Flow->>Supabase: claim_downvote_review(queue_id)
            Supabase->>DB: UPDATE status='processing' WHERE id=queue_id
            DB-->>Supabase: update OK
            Flow->>Snip: load snippet(snippet_id)
            Snip-->>Flow: snippet data
            Flow->>Supabase: hide_snippet_by_system(snippet_id)
            Supabase->>DB: INSERT INTO user_hide_snippets ...
            DB-->>Supabase: hide OK
            Flow->>Processor: process_snippet(snippet + prompts)
            Processor-->>Flow: kb entries / result
            alt success
                Flow->>Supabase: complete_downvote_review(queue_id, kb_entries_created=1)
                Supabase->>DB: UPDATE status='completed', processed_at=NOW()
            else failure
                Flow->>Supabase: fail_downvote_review(queue_id, error_message)
                Supabase->>DB: UPDATE status='error', error_message=...
            end
        else no entry
            Flow->>Flow: sleep or exit (based on repeat)
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • nhphong

Poem

🐰 A downvote thumped, so I hop to the queue,
I hide and I search with a nibble or two,
I stitch facts to KB, check sources with care,
Then mark the review done and leave tidy hair—🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 17.07%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main change: adding an automated system to review downvoted snippets for KB creation via event-driven triggers and Prefect workflows.


gemini-code-assist bot (Contributor) commented
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive automated system to manage user downvotes on snippets. It streamlines the process of identifying potentially problematic content by hiding snippets immediately upon a single downvote and initiating an automated review workflow. This system integrates a new database queue, an event-driven Prefect flow for continuous processing, and an on-demand Claude Code command for manual batch reviews, ensuring that user feedback is acted upon swiftly and efficiently to maintain content quality.

Highlights

  • Event-driven downvote handler: Implemented a Postgres trigger and Prefect flow that automatically hides snippets on the first downvote and queues them for Knowledge Base (KB) review via existing Stage 4 agents.
  • On-demand review skill: Introduced a new Claude Code command (/downvote-review) for batch review of accumulated downvotes, featuring themed grouping, web research, verification, and Slack reporting.
  • Database infrastructure: Established a new downvote_review_queue table with Row Level Security (RLS), a unique constraint for deduplication, and status tracking (pending, processing, completed, error).
  • Behavior change: Snippets are now hidden immediately upon the first downvote, streamlining the response to user feedback compared to the previous two-downvote threshold.

gemini-code-assist bot left a comment

Code Review

This pull request introduces a comprehensive automated downvote review system, including a new database table, a Postgres trigger for immediate snippet hiding and queueing, a Prefect flow for processing, and a Claude skill for on-demand reviews. The changes are well-structured and cover both the event-driven and manual paths for handling downvoted snippets. My review focuses on improving the robustness and maintainability of the new system. I've identified a potential logic issue in the Claude skill's query, a hardcoded value that should be dynamic, a risky environment variable modification, a hardcoded secret, and a minor code style violation. Addressing these points will enhance the reliability and security of this new feature.

Comment on lines +95 to +97
supabase_client.complete_downvote_review(
queue_entry["id"], kb_entries_created=1
)
high

The number of created knowledge base entries is hardcoded to 1. However, the process_snippet agentic pipeline might create zero, one, or multiple KB entries. This hardcoded value will result in inaccurate data being stored in the downvote_review_queue table.

To fix this, the process_snippet function (and its underlying Stage4Executor) should return the actual count of KB entries created during its execution. This count can then be passed to complete_downvote_review.

Suggested change
supabase_client.complete_downvote_review(
queue_entry["id"], kb_entries_created=1
)
kb_entries_created_count = await process_snippet(supabase_client, snippet, prompt_versions)
supabase_client.complete_downvote_review(
queue_entry["id"], kb_entries_created=kb_entries_created_count
)
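The data flow the reviewer suggests can be sketched with stand-in function bodies (the real `process_snippet` runs the Stage 4 agent pipeline and does not currently return a count; the bodies below are illustrative only):

```python
# Sketch of the suggested fix: the processing step reports how many KB
# entries it actually created, and that count is what gets recorded,
# instead of a hardcoded 1.
def process_snippet(snippet):
    # Stand-in for the Stage 4 pipeline; pretend it created one KB entry.
    kb_entries = [f"kb-for-{snippet['id']}"]
    return len(kb_entries)

def complete_downvote_review(queue_entry, kb_entries_created):
    queue_entry["status"] = "completed"
    queue_entry["kb_entries_created"] = kb_entries_created
    return queue_entry

entry = {"id": 7, "status": "processing"}
count = process_snippet({"id": "snip-7"})
complete_downvote_review(entry, kb_entries_created=count)
# entry["kb_entries_created"] now reflects the actual count
```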

Comment on lines +26 to +28
AND NOT EXISTS (
SELECT 1 FROM user_hide_snippets uhs WHERE uhs.snippet = s.id
)
medium

The condition NOT EXISTS (SELECT 1 FROM user_hide_snippets ...) seems to contradict the new trigger's behavior. The on_downvote_queue_review trigger immediately hides a snippet upon a downvote by inserting it into user_hide_snippets. Consequently, this query will not find any snippets that are downvoted after the trigger is active, potentially rendering this part of the on-demand skill ineffective for new downvotes.

To ensure this command can process all relevant downvoted snippets, including those already hidden by the trigger, I recommend removing this NOT EXISTS clause. The subsequent "Hide snippets" step already uses ON CONFLICT DO NOTHING, which makes it safe to run even for snippets that are already hidden.


## Notes

- The VERDAD Supabase project ID is `dzujjhzgzguciwryzwlx`
medium

Hardcoding the Supabase project ID here is a security and maintainability concern. While project IDs are not typically as sensitive as API keys, it's better practice to avoid hardcoding such values. If this command is executed in an environment where environment variables are available, it would be more secure to fetch this ID from an environment variable. This would also make it easier to point the skill to different environments (e.g., staging, production) without changing the command definition.

Suggested change
- The VERDAD Supabase project ID is `dzujjhzgzguciwryzwlx`
- The VERDAD Supabase project ID is `${VERDAD_SUPABASE_PROJECT_ID}`

web researcher + KB updater agents)
4. Marks the queue entry as completed or errored
"""
os.environ["GOOGLE_API_KEY"] = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
medium

Modifying os.environ at runtime is generally considered an unsafe practice. It creates a global side effect that can lead to unpredictable behavior in other parts of the application that might rely on the GOOGLE_API_KEY environment variable. It also makes the code harder to reason about and debug.

A safer approach would be to pass the API key explicitly to the client or function that requires it, rather than altering the environment. This would make the dependency clear and avoid unintended consequences.
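A sketch of the guard this suggests, assuming the key should be read once and passed explicitly rather than copied into GOOGLE_API_KEY (the helper name is hypothetical):

```python
import os

# Read the key once and fail loudly if it is missing, instead of
# mutating os.environ as a global side effect.
def get_gemini_api_key():
    key = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
    if not key:
        raise RuntimeError(
            "GOOGLE_GEMINI_PAID_KEY is not set; the downvote_review worker "
            "cannot call the Gemini API."
        )
    return key
```

The caller would then pass the returned key directly to whatever client needs it, keeping the dependency explicit.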


def complete_downvote_review(self, queue_id, kb_entries_created=0):
"""Mark a downvote review as completed."""
from datetime import datetime, timezone
medium

This local import of datetime and timezone is redundant, as they are already imported at the top of the file (line 4). According to PEP 8, imports should be placed at the top of the file. Removing this line will improve code cleanliness and adhere to Python's standard style guide.

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 038b01a8d6


Comment on lines +25 to +27
CREATE TRIGGER on_downvote_queue_review_trigger
AFTER INSERT ON user_like_snippets
FOR EACH ROW

P1: Fire downvote trigger on vote updates

This trigger only runs on AFTER INSERT, but the like_snippet function writes with INSERT ... ON CONFLICT DO UPDATE; when a user changes an existing vote from 1 or 0 to -1, only an UPDATE happens. In that common path, snippets will not be auto-hidden and no row will be queued for review, so the new “hide on first downvote” behavior is skipped for previously voted snippets.

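The missing condition can be expressed as a predicate over the old and new vote values (a Python stand-in for a WHEN clause on an INSERT OR UPDATE trigger; `old_value` is None on a fresh INSERT, and -1 denotes a downvote):

```python
# Stand-in for the trigger condition: fire on a fresh downvote (INSERT)
# or when an existing vote flips to -1 (UPDATE via ON CONFLICT DO UPDATE).
def should_queue_review(old_value, new_value):
    return new_value == -1 and old_value != -1

assert should_queue_review(None, -1)       # fresh downvote
assert should_queue_review(1, -1)          # like changed to downvote
assert not should_queue_review(-1, -1)     # already a downvote; no re-queue
assert not should_queue_review(None, 1)    # a like never queues a review
```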

Comment on lines +94 to +96
await process_snippet(supabase_client, snippet, prompt_versions)
supabase_client.complete_downvote_review(
queue_entry["id"], kb_entries_created=1

P1: Fail queue items when Stage 4 processing fails

downvote_review marks the queue entry completed immediately after process_snippet, but process_snippet catches its own exceptions and does not re-raise (it only sets snippet status to Error). That means failed reviews still get status='completed' here and are never retried, causing silent data loss in the downvote review queue.


Copilot AI left a comment

Pull request overview

This PR introduces an automated “downvote → hide + KB review” pipeline by adding a Supabase queue table + trigger, a new Stage 4 Prefect flow to process queued snippets, and an on-demand Claude Code command for manual/batch processing.

Changes:

  • Added downvote_review_queue table (status tracking + RLS) to store snippets awaiting downvote-driven KB review.
  • Added on_downvote_queue_review trigger to hide a snippet and enqueue it on downvote.
  • Added a new Stage 4 Prefect flow (downvote_review) plus process-group registration and a Claude Code /downvote-review command.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 10 comments.

Show a summary per file
File Description
supabase/database/sql/on_downvote_queue_review.sql New trigger function to hide + queue on downvote.
supabase/database/sql/create_downvote_review_queue.sql New queue table with status fields, index, grants, and RLS policies.
src/processing_pipeline/supabase_utils.py Adds queue CRUD helpers and a system hide helper.
src/processing_pipeline/stage_4/downvote_flows.py New Prefect flow that polls/claims queue entries and runs Stage 4 review.
src/processing_pipeline/stage_4/__init__.py Exports the new downvote flow.
src/processing_pipeline/main.py Registers a new downvote_review process group for Prefect serve.
.claude/commands/downvote-review.md Adds an on-demand command workflow for batch downvote review + reporting.


Copilot's ten inline comments (comment bodies collapsed in this export) targeted the following spans:

  • supabase/database/sql/create_downvote_review_queue.sql lines 26–33: grants and RLS policies
  • supabase/database/sql/on_downvote_queue_review.sql lines 25–28: the AFTER INSERT trigger definition
  • src/processing_pipeline/supabase_utils.py lines 792–855: the new queue helper methods
  • src/processing_pipeline/stage_4/downvote_flows.py lines 53–67: the polling/claim loop, plus the complete_downvote_review call with kb_entries_created=1
  • src/processing_pipeline/main.py and stage_4/__init__.py: importing/exporting downvote_review and registering it as a process group (2GB memory, 4 CPUs, same tier as analysis_review; concurrency limit 1 since it processes one snippet at a time)

Grants and RLS policies (create_downvote_review_queue.sql, lines 26–33):

    GRANT ALL ON TABLE public.downvote_review_queue TO service_role;
    GRANT SELECT ON TABLE public.downvote_review_queue TO authenticated;

    CREATE POLICY "Enable full access for service role"
    ON public.downvote_review_queue FOR ALL TO service_role USING (true);

    CREATE POLICY "Enable read access for authenticated users"
    ON public.downvote_review_queue FOR SELECT TO authenticated USING (true);

Trigger definition (on_downvote_queue_review.sql, lines 25–28):

    CREATE TRIGGER on_downvote_queue_review_trigger
    AFTER INSERT ON user_like_snippets
    FOR EACH ROW
    EXECUTE FUNCTION on_downvote_queue_review();

Queue helper methods (supabase_utils.py, lines 792–855):

    def get_pending_downvote_reviews(self, limit=10):
        """Fetch pending downvote review queue entries."""
        response = (
            self.client.table("downvote_review_queue")
            .select("*")
            .eq("status", "pending")
            .order("created_at")
            .limit(limit)
            .execute()
        )
        return response.data if response.data else []

    def claim_downvote_review(self, queue_id):
        """Atomically claim a downvote review entry for processing."""
        response = (
            self.client.table("downvote_review_queue")
            .update({"status": "processing"})
            .eq("id", queue_id)
            .eq("status", "pending")
            .execute()
        )
        return response.data[0] if response.data else None

    def complete_downvote_review(self, queue_id, kb_entries_created=0):
        """Mark a downvote review as completed."""
        from datetime import datetime, timezone

        response = (
            self.client.table("downvote_review_queue")
            .update(
                {
                    "status": "completed",
                    "processed_at": datetime.now(timezone.utc).isoformat(),
                    "kb_entries_created": kb_entries_created,
                }
            )
            .eq("id", queue_id)
            .execute()
        )
        return response.data[0] if response.data else None

    def fail_downvote_review(self, queue_id, error_message):
        """Mark a downvote review as failed."""
        response = (
            self.client.table("downvote_review_queue")
            .update(
                {
                    "status": "error",
                    "error_message": error_message,
                }
            )
            .eq("id", queue_id)
            .execute()
        )
        return response.data[0] if response.data else None

    def hide_snippet_by_system(self, snippet_id):
        """Hide a snippet programmatically (no auth/admin check)."""
        response = (
            self.client.table("user_hide_snippets")
            .upsert({"snippet": snippet_id}, on_conflict="snippet")
            .execute()
        )
        return response.data[0] if response.data else None

Polling/claim loop (downvote_flows.py, lines 53–67):

    while True:
        pending = supabase_client.get_pending_downvote_reviews(limit=1)
        if not pending:
            if not repeat:
                print("No pending downvote reviews. Exiting.")
                break
            print("No pending downvote reviews. Sleeping 30s...")
            await asyncio.sleep(30)
            continue

        queue_entry = pending[0]
        claimed = supabase_client.claim_downvote_review(queue_entry["id"])
        if not claimed:
            print(f"Queue entry {queue_entry['id']} already claimed. Skipping.")
            continue
coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/commands/downvote-review.md:
- Around line 24-30: The WHERE clause currently excludes rows present in
user_hide_snippets, which filters out items newly queued by the trigger; update
the query that references tables/aliases uls, drq, s and the user_hide_snippets
subquery so that snippets with a corresponding downvote_review_queue record
(drq) are not excluded — e.g., remove the NOT EXISTS(...) filter or change it to
allow rows when drq.status IS NOT NULL/IN ('pending','error') or when a drq.id
exists, ensuring the query returns items in downvote_review_queue while still
excluding other hidden snippets.

In `@src/processing_pipeline/stage_4/downvote_flows.py`:
- Around line 94-97: process_snippet currently swallows errors and returns
nothing, but downvote_flows calls supabase_client.complete_downvote_review
unconditionally with kb_entries_created=1; change process_snippet (in
src/processing_pipeline/stage_4/tasks.py) to return a structured outcome object
(e.g. { success: bool, kb_entries_created: int, error?: Error }) instead of
swallowing failures, then update the caller in downvote_flows.py to check that
outcome: only call supabase_client.complete_downvote_review(queue_entry["id"],
kb_entries_created=outcome.kb_entries_created) when outcome.success is true (or
otherwise call a failure path such as
supabase_client.mark_downvote_review_failed with the error info); use the
symbols process_snippet and supabase_client.complete_downvote_review to locate
where to change behavior.
- Line 31: The assignment to os.environ["GOOGLE_API_KEY"] should be guarded
against a missing GOOGLE_GEMINI_PAID_KEY to avoid assigning None; check the
value returned by os.environ.get("GOOGLE_GEMINI_PAID_KEY") first and only set
os.environ["GOOGLE_API_KEY"] when it's truthy, otherwise emit a clear error via
the module's logger (or raise a descriptive exception) so the worker fails with
an actionable message; locate the assignment to os.environ["GOOGLE_API_KEY"] in
downvote_flows.py and replace it with a conditional guard around the
os.environ.get("GOOGLE_GEMINI_PAID_KEY") lookup.

In `@src/processing_pipeline/supabase_utils.py`:
- Around line 792-813: get_pending_downvote_reviews only selects rows with
status="pending" and claim_downvote_review flips status to "processing" with no
lease, so claimed rows can be stranded; add claim metadata (e.g., claimed_at and
claim_expires_at or claim_ttl) to the downvote_review_queue schema and change
logic so claim_downvote_review sets status="processing", claimed_at=now(),
claim_expires_at=now()+TTL and only updates when status="pending" OR
(status="processing" AND claim_expires_at < now()) to allow reclaiming stale
claims; update get_pending_downvote_reviews to return rows where
status="pending" OR (status="processing" AND claim_expires_at < now()) so
pollers can reclaim, and ensure any finalizers set status to "done"/"failed" and
clear claim fields when finished.

In `@supabase/database/sql/create_downvote_review_queue.sql`:
- Around line 26-33: The current GRANT SELECT and policy "Enable read access for
authenticated users" on public.downvote_review_queue exposes sensitive columns
(downvoted_by, error_message, moderation state) to all authenticated users;
remove or revoke the broad GRANT/SELECT and replace it with a safe alternative:
either create a sanitized view (e.g., downvote_review_queue_sanitized) that
exposes only non-sensitive columns and grant SELECT on that view, or tighten the
policy "Enable read access for authenticated users" to LIMIT rows and mask
fields (for example USING (auth.uid() = downvoted_by) or return NULL for
sensitive columns), while keeping the service_role policies/GRANTs intact for
internal service usage.

In `@supabase/database/sql/on_downvote_queue_review.sql`:
- Around line 25-28: Trigger on_downvote_queue_review_trigger only fires on
INSERTs so it misses users changing a vote to -1; update the trigger to fire on
INSERT OR UPDATE for table user_like_snippets and add a WHEN condition so it
only runs the on_downvote_queue_review() function when NEW.vote = -1 and (OLD is
null OR OLD.vote != -1) to catch both fresh downvotes and changes to -1
(referencing the trigger name on_downvote_queue_review_trigger, target table
user_like_snippets, and function on_downvote_queue_review()).
- Around line 5-23: Modify the on_downvote_queue_review function to be SECURITY
DEFINER and to handle UPDATE events where a vote changes to a downvote: declare
the function with SECURITY DEFINER, then in the body check TG_OP and values so
the queue/hidden-snippet inserts run when (TG_OP = 'INSERT' AND NEW.value = -1)
OR (TG_OP = 'UPDATE' AND OLD.value IS DISTINCT FROM NEW.value AND NEW.value =
-1); keep the same INSERT ... ON CONFLICT logic for user_hide_snippets and
downvote_review_queue so duplicates are avoided. Ensure you update the trigger
to fire ON INSERT OR UPDATE OF value so updates that flip a vote also invoke
on_downvote_queue_review.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f86701e7-ceaf-4c10-bfc0-e8bf9872fa56

📥 Commits

Reviewing files that changed from the base of the PR and between 3dbcd00 and 038b01a.

📒 Files selected for processing (7)
  • .claude/commands/downvote-review.md
  • src/processing_pipeline/main.py
  • src/processing_pipeline/stage_4/__init__.py
  • src/processing_pipeline/stage_4/downvote_flows.py
  • src/processing_pipeline/supabase_utils.py
  • supabase/database/sql/create_downvote_review_queue.sql
  • supabase/database/sql/on_downvote_queue_review.sql

Comment on lines +24 to +30
WHERE uls.value = -1
AND (drq.status IS NULL OR drq.status = 'pending' OR drq.status = 'error')
AND NOT EXISTS (
SELECT 1 FROM user_hide_snippets uhs WHERE uhs.snippet = s.id
)
GROUP BY s.id, drq.status
ORDER BY s.created_at DESC;

⚠️ Potential issue | 🟠 Major

This query filters out the very rows the new trigger creates.

The new trigger hides snippets on the first downvote, so AND NOT EXISTS (SELECT 1 FROM user_hide_snippets ...) removes every freshly queued item. This command will report “no unreviewed snippets” even while downvote_review_queue still has pending work.

Suggested fix
-SELECT
-    s.id AS snippet_id,
-    s.title,
-    s.explanation,
-    s.disinformation_categories,
-    s.confidence_scores,
-    s.created_at,
-    drq.status AS queue_status,
-    COUNT(uls.id) FILTER (WHERE uls.value = -1) AS downvote_count
-FROM snippets s
-JOIN user_like_snippets uls ON uls.snippet = s.id
-LEFT JOIN downvote_review_queue drq ON drq.snippet_id = s.id
-WHERE uls.value = -1
-AND (drq.status IS NULL OR drq.status = 'pending' OR drq.status = 'error')
-AND NOT EXISTS (
-    SELECT 1 FROM user_hide_snippets uhs WHERE uhs.snippet = s.id
-)
-GROUP BY s.id, drq.status
-ORDER BY s.created_at DESC;
+SELECT
+    s.id AS snippet_id,
+    s.title,
+    s.explanation,
+    s.disinformation_categories,
+    s.confidence_scores,
+    s.created_at,
+    drq.status AS queue_status,
+    drq.downvoted_at
+FROM downvote_review_queue drq
+JOIN snippets s ON s.id = drq.snippet_id
+WHERE drq.status IN ('pending', 'error')
+ORDER BY drq.downvoted_at DESC;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/commands/downvote-review.md around lines 24 - 30, The WHERE clause
currently excludes rows present in user_hide_snippets, which filters out items
newly queued by the trigger; update the query that references tables/aliases
uls, drq, s and the user_hide_snippets subquery so that snippets with a
corresponding downvote_review_queue record (drq) are not excluded — e.g., remove
the NOT EXISTS(...) filter or change it to allow rows when drq.status IS NOT
NULL/IN ('pending','error') or when a drq.id exists, ensuring the query returns
items in downvote_review_queue while still excluding other hidden snippets.

web researcher + KB updater agents)
4. Marks the queue entry as completed or errored
"""
os.environ["GOOGLE_API_KEY"] = os.environ.get("GOOGLE_GEMINI_PAID_KEY")

⚠️ Potential issue | 🟠 Major

Guard the Google key before writing to os.environ.

os.environ["GOOGLE_API_KEY"] = None raises TypeError, so a missing GOOGLE_GEMINI_PAID_KEY crashes the worker before it can emit a useful error.

Suggested fix
-    os.environ["GOOGLE_API_KEY"] = os.environ.get("GOOGLE_GEMINI_PAID_KEY")
+    google_api_key = os.getenv("GOOGLE_GEMINI_PAID_KEY")
+    if not google_api_key:
+        raise RuntimeError("GOOGLE_GEMINI_PAID_KEY is not set")
+    os.environ["GOOGLE_API_KEY"] = google_api_key
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/processing_pipeline/stage_4/downvote_flows.py` at line 31, The assignment
to os.environ["GOOGLE_API_KEY"] should be guarded against a missing
GOOGLE_GEMINI_PAID_KEY to avoid assigning None; check the value returned by
os.environ.get("GOOGLE_GEMINI_PAID_KEY") first and only set
os.environ["GOOGLE_API_KEY"] when it's truthy, otherwise emit a clear error via
the module's logger (or raise a descriptive exception) so the worker fails with
an actionable message; locate the assignment to os.environ["GOOGLE_API_KEY"] in
downvote_flows.py and replace it with a conditional guard around the
os.environ.get("GOOGLE_GEMINI_PAID_KEY") lookup.
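The guard above generalizes to a small fail-fast helper; a sketch under the assumption that a missing key should abort startup (the helper name `require_env` is illustrative, not part of the codebase):

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable, raising a clear error if unset or empty."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set")
    return value

# os.environ values must be strings (assigning None raises TypeError),
# so validate before the assignment:
# os.environ["GOOGLE_API_KEY"] = require_env("GOOGLE_GEMINI_PAID_KEY")
```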

Comment on lines +94 to +97
await process_snippet(supabase_client, snippet, prompt_versions)
supabase_client.complete_downvote_review(
queue_entry["id"], kb_entries_created=1
)

⚠️ Potential issue | 🟠 Major

Don't mark the queue item completed unconditionally.

process_snippet() catches its own failures in src/processing_pipeline/stage_4/tasks.py:104-150 and returns no result, so this branch also runs when Stage 4 actually failed. That both hides failed work under completed and hardcodes kb_entries_created=1 regardless of whether zero, one, or many KB entries were written.

Suggested direction
-            await process_snippet(supabase_client, snippet, prompt_versions)
-            supabase_client.complete_downvote_review(
-                queue_entry["id"], kb_entries_created=1
-            )
+            result = await process_snippet(supabase_client, snippet, prompt_versions)
+            if not result["success"]:
+                supabase_client.fail_downvote_review(queue_entry["id"], result["error"])
+                continue
+            supabase_client.complete_downvote_review(
+                queue_entry["id"],
+                kb_entries_created=result["kb_entries_created"],
+            )

This needs a matching change in src/processing_pipeline/stage_4/tasks.py so process_snippet() returns a structured outcome instead of swallowing errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/processing_pipeline/stage_4/downvote_flows.py` around lines 94 - 97,
process_snippet currently swallows errors and returns nothing, but
downvote_flows calls supabase_client.complete_downvote_review unconditionally
with kb_entries_created=1; change process_snippet (in
src/processing_pipeline/stage_4/tasks.py) to return a structured outcome object
(e.g. { success: bool, kb_entries_created: int, error?: Error }) instead of
swallowing failures, then update the caller in downvote_flows.py to check that
outcome: only call supabase_client.complete_downvote_review(queue_entry["id"],
kb_entries_created=outcome.kb_entries_created) when outcome.success is true (or
otherwise call a failure path such as
supabase_client.mark_downvote_review_failed with the error info); use the
symbols process_snippet and supabase_client.complete_downvote_review to locate
where to change behavior.
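A sketch of the structured-outcome shape described above, assuming `process_snippet` is changed to return such an object; every name besides `complete_downvote_review` is an assumption for illustration:

```python
# Illustrative outcome type and finalizer; not the codebase's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnippetOutcome:
    success: bool
    kb_entries_created: int = 0
    error: Optional[str] = None

def finalize_queue_entry(queue_id, outcome, client):
    """Mark the queue entry completed or errored based on the pipeline outcome."""
    if outcome.success:
        client.complete_downvote_review(
            queue_id, kb_entries_created=outcome.kb_entries_created
        )
        return "completed"
    # Hypothetical failure path; the real client would need this method added.
    client.fail_downvote_review(queue_id, outcome.error or "unknown error")
    return "error"
```

The point of the dataclass is that the caller branches on `success` and records the real `kb_entries_created` count rather than hardcoding `1`.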

Comment on lines +792 to +813
def get_pending_downvote_reviews(self, limit=10):
"""Fetch pending downvote review queue entries."""
response = (
self.client.table("downvote_review_queue")
.select("*")
.eq("status", "pending")
.order("created_at")
.limit(limit)
.execute()
)
return response.data if response.data else []

def claim_downvote_review(self, queue_id):
"""Atomically claim a downvote review entry for processing."""
response = (
self.client.table("downvote_review_queue")
.update({"status": "processing"})
.eq("id", queue_id)
.eq("status", "pending")
.execute()
)
return response.data[0] if response.data else None

⚠️ Potential issue | 🟠 Major

Claimed rows can get stranded in processing forever.

get_pending_downvote_reviews() only reads pending, and claim_downvote_review() permanently flips the row to processing with no lease/heartbeat. If the worker dies after claiming, that item is never visible again.

Add a claim timestamp + retry metadata, or reclaim stale processing rows during polling instead of treating processing as terminal.


Comment on lines +26 to +33
GRANT ALL ON TABLE public.downvote_review_queue TO service_role;
GRANT SELECT ON TABLE public.downvote_review_queue TO authenticated;

CREATE POLICY "Enable full access for service role"
ON public.downvote_review_queue FOR ALL TO service_role USING (true);

CREATE POLICY "Enable read access for authenticated users"
ON public.downvote_review_queue FOR SELECT TO authenticated USING (true);

⚠️ Potential issue | 🟠 Major

Don't expose raw queue rows to every authenticated user.

This table includes downvoted_by, error_message, and internal moderation state, but the current grant/policy lets any authenticated client read every row. That leaks cross-user moderation telemetry and user identifiers.

Suggested fix
 GRANT ALL ON TABLE public.downvote_review_queue TO service_role;
-GRANT SELECT ON TABLE public.downvote_review_queue TO authenticated;
 
 CREATE POLICY "Enable full access for service role"
     ON public.downvote_review_queue FOR ALL TO service_role USING (true);
-
-CREATE POLICY "Enable read access for authenticated users"
-    ON public.downvote_review_queue FOR SELECT TO authenticated USING (true);

If the app needs visibility here, expose a sanitized view or a narrowly scoped policy instead of the raw table.
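One shape the sanitized-view alternative could take (view name and the chosen columns are illustrative and assume the queue's schema):

```sql
-- Hypothetical read-only view exposing only non-sensitive columns.
-- A plain view runs with its owner's privileges by default, so the
-- base table's RLS does not apply through it; the view itself is the
-- restriction surface.
CREATE VIEW public.downvote_review_queue_public AS
SELECT snippet_id, status, downvoted_at
FROM public.downvote_review_queue;

GRANT SELECT ON public.downvote_review_queue_public TO authenticated;
```

`downvoted_by`, `error_message`, and any future moderation columns stay reachable only via `service_role`.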


Comment on lines +5 to +23
CREATE OR REPLACE FUNCTION on_downvote_queue_review()
RETURNS TRIGGER AS $$
BEGIN
-- Only process downvotes (value = -1)
IF NEW.value = -1 THEN
-- Immediately hide the snippet
INSERT INTO user_hide_snippets (snippet)
VALUES (NEW.snippet)
ON CONFLICT (snippet) DO NOTHING;

-- Queue for KB review (UNIQUE constraint prevents duplicates)
INSERT INTO downvote_review_queue (snippet_id, downvoted_by, downvoted_at)
VALUES (NEW.snippet, NEW."user", now())
ON CONFLICT (snippet_id) DO NOTHING;
END IF;

RETURN NEW;
END;
$$ LANGUAGE plpgsql;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In PostgreSQL / Supabase, does a trigger function execute with invoker privileges by default, and would inserting into another RLS-enabled table require SECURITY DEFINER when the caller only has SELECT on that target table?

💡 Result:

Yes, a trigger function in PostgreSQL (and Supabase) executes with invoker privileges by default: unless declared otherwise, functions are SECURITY INVOKER, meaning they run with the privileges of the caller (the role whose statement fired the trigger). So if the caller (e.g., a limited role like 'authenticated' with only SELECT on the RLS-enabled target table) fires a trigger whose function attempts an INSERT into that table, and the caller lacks INSERT privileges or no RLS policy allows the insert for that role, it will fail with a permission-denied error. To allow the insert, the trigger function must be defined as SECURITY DEFINER (typically owned by a privileged role like postgres, possibly with BYPASSRLS), so it runs with the owner's elevated privileges, bypassing the caller's limitations and RLS checks on the target table.


🏁 Script executed:

# Find the trigger definition that uses on_downvote_queue_review
rg "on_downvote_queue_review|CREATE TRIGGER" --type sql -B 2 -A 2

Repository: PublicDataWorks/verdad

Length of output: 5243


🏁 Script executed:

# Check for user_like_snippets table and related vote logic
rg "user_like_snippets|downvote" --type sql -l

Repository: PublicDataWorks/verdad

Length of output: 628


🏁 Script executed:

# Look at the actual structure of user_like_snippets
rg "CREATE TABLE.*user_like_snippets" --type sql -A 20

Repository: PublicDataWorks/verdad

Length of output: 48


🏁 Script executed:

# Find user_like_snippets table definition
rg "CREATE TABLE.*user_like_snippets" --type sql -A 30

Repository: PublicDataWorks/verdad

Length of output: 48


🏁 Script executed:

# Check the like_snippet_function to see who can call it
cat supabase/database/sql/like_snippet_function.sql

Repository: PublicDataWorks/verdad

Length of output: 1572


🏁 Script executed:

# Check permissions/grants on user_like_snippets
rg "user_like_snippets|GRANT" --type sql -B 1 -A 1 | head -50

Repository: PublicDataWorks/verdad

Length of output: 3872


🏁 Script executed:

# Check the create_downvote_review_queue.sql for permissions
cat supabase/database/sql/create_downvote_review_queue.sql

Repository: PublicDataWorks/verdad

Length of output: 1436


🏁 Script executed:

# Check the update_snippet_hidden_status.sql to see how it handles UPDATE
cat supabase/database/sql/update_snippet_hidden_status.sql

Repository: PublicDataWorks/verdad

Length of output: 1046


🏁 Script executed:

# Check if there's any other handling for upvote-to-downvote transitions
rg "downvote_review_queue|on_downvote_queue_review" --type sql -B 5 -A 5

Repository: PublicDataWorks/verdad

Length of output: 5369


🏁 Script executed:

# Verify the complete trigger file to see if there's UPDATE handling
cat supabase/database/sql/on_downvote_queue_review.sql

Repository: PublicDataWorks/verdad

Length of output: 1030


Add SECURITY DEFINER and handle UPDATE events on vote changes.

The function writes to downvote_review_queue, which grants SELECT only to authenticated in supabase/database/sql/create_downvote_review_queue.sql. Without SECURITY DEFINER, an authenticated user's downvote will fail with a permission error when the trigger attempts the INSERT on Line 16.

Additionally, the trigger fires only on INSERT, but like_snippet_function uses ON CONFLICT ... DO UPDATE when changing vote values. If a user changes from upvote to downvote, that UPDATE event won't trigger the queue insertion.

Suggested fix
-CREATE OR REPLACE FUNCTION on_downvote_queue_review()
-RETURNS TRIGGER AS $$
+CREATE OR REPLACE FUNCTION public.on_downvote_queue_review()
+RETURNS TRIGGER
+LANGUAGE plpgsql
+SECURITY DEFINER
+SET search_path = public
+AS $$
 BEGIN
     -- Only process downvotes (value = -1)
     IF NEW.value = -1 THEN
         -- Immediately hide the snippet
-        INSERT INTO user_hide_snippets (snippet)
+        INSERT INTO public.user_hide_snippets (snippet)
         VALUES (NEW.snippet)
         ON CONFLICT (snippet) DO NOTHING;
 
         -- Queue for KB review (UNIQUE constraint prevents duplicates)
-        INSERT INTO downvote_review_queue (snippet_id, downvoted_by, downvoted_at)
+        INSERT INTO public.downvote_review_queue (snippet_id, downvoted_by, downvoted_at)
         VALUES (NEW.snippet, NEW."user", now())
         ON CONFLICT (snippet_id) DO NOTHING;
     END IF;
 
     RETURN NEW;
 END;
-$$ LANGUAGE plpgsql;
+$$;

 CREATE TRIGGER on_downvote_queue_review_trigger
-AFTER INSERT ON user_like_snippets
+AFTER INSERT OR UPDATE ON user_like_snippets
 FOR EACH ROW
 EXECUTE FUNCTION on_downvote_queue_review();

Comment on lines +25 to +28
CREATE TRIGGER on_downvote_queue_review_trigger
AFTER INSERT ON user_like_snippets
FOR EACH ROW
EXECUTE FUNCTION on_downvote_queue_review();

⚠️ Potential issue | 🟠 Major

Handle vote changes to -1, not just fresh inserts.

update_snippet_hidden_status already runs on INSERT OR UPDATE, but this trigger only runs on insert. If a user changes an existing vote from 0 or 1 to -1, the snippet is never hidden and never queued for review.

Suggested fix
 CREATE TRIGGER on_downvote_queue_review_trigger
-AFTER INSERT ON user_like_snippets
+AFTER INSERT OR UPDATE ON user_like_snippets
 FOR EACH ROW
 EXECUTE FUNCTION on_downvote_queue_review();
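A variant that also guards against re-firing on unrelated updates: PostgreSQL forbids referencing `OLD` in the `WHEN` clause of an INSERT trigger, so a combined `INSERT OR UPDATE` trigger cannot use `OLD.value IS DISTINCT FROM NEW.value`. One sketch splits the events into two triggers instead (trigger names here are illustrative, and the vote column is assumed to be `value` as in the function body):

```sql
-- Fresh downvotes
CREATE TRIGGER on_downvote_insert_trigger
AFTER INSERT ON user_like_snippets
FOR EACH ROW
WHEN (NEW.value = -1)
EXECUTE FUNCTION on_downvote_queue_review();

-- Existing votes flipped to a downvote
CREATE TRIGGER on_downvote_update_trigger
AFTER UPDATE OF value ON user_like_snippets
FOR EACH ROW
WHEN (NEW.value = -1 AND OLD.value IS DISTINCT FROM NEW.value)
EXECUTE FUNCTION on_downvote_queue_review();
```

With the filtering in `WHEN`, the function body's `IF NEW.value = -1` check becomes redundant but harmless.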
