Conversation
- Add waitForGraphPresent() polling helper to apiCalls.ts to retry getGraphs() until expected graph appears instead of one-shot calls
- Add connectDatabaseWithRetry() helper to retry streaming connection on transient errors with diagnostic logging
- Enhance parseStreamingResponse() to log error message details
- Update all database.spec.ts tests to use scoped test.setTimeout(120000/180000)
- Increase waitForDatabaseConnection timeout to 90s in all DB connection tests
- Replace bare getGraphs() calls with waitForGraphPresent() polling
- Add console.log diagnostics throughout for easier CI debugging

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
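The polling approach described in this commit can be sketched roughly as follows. This is a minimal illustration, not the project's actual `apiCalls.ts` code; the generic `pollUntil` name, signature, and defaults are hypothetical:

```typescript
// Hypothetical polling sketch: re-fetch until a predicate holds or time runs out.
async function pollUntil<T>(
  fetchFn: () => Promise<T>,        // e.g. a getGraphs()-style call
  predicate: (value: T) => boolean, // e.g. (graphs) => graphs.includes(expectedId)
  timeoutMs = 30_000,
  pollIntervalMs = 1_000
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let last = await fetchFn();
  while (!predicate(last) && Date.now() < deadline) {
    // Sleep, then re-fetch and re-check.
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
    last = await fetchFn();
  }
  return last; // note: returns the last value even on timeout; callers must assert
}
```

A caller would then assert on the returned list, e.g. `expect(pollResult).toContain(expectedGraphId)`; the helper itself stays agnostic about what "present" means.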
Bumps [playwright](https://github.com/microsoft/playwright-python) from 1.57.0 to 1.58.0.

- [Release notes](https://github.com/microsoft/playwright-python/releases)
- [Commits](microsoft/playwright-python@v1.57.0...v1.58.0)

---
updated-dependencies:
- dependency-name: playwright
  dependency-version: 1.58.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
…cific DB predicates, polling for deletion

- connectDatabaseWithRetry: wrap per-attempt logic in try/catch so network/parse exceptions don't abort retries; log with attempt# via console.error; backoff delay behaviour unchanged
- Add expect(messages.length).toBeGreaterThan(0) guard before accessing finalMessage in all 4 caller blocks (PostgreSQL API, MySQL API, PostgreSQL delete, MySQL delete)
- Fix UI-to-API test predicates from generic 'graphs.length > 0' to 'testdb'/'_testdb' match, avoiding false positives on pre-existing graphs
- Replace wait(1000)+getGraphs() in both delete tests with waitForGraphPresent polling until the deleted graphId is absent

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
- Rename waitForGraphPresent -> waitForGraphs in apiCalls.ts (more neutral name since it's used for both presence and absence checks)
- Update all 10 call sites in database.spec.ts accordingly
- Change outer test.describe -> test.describe.serial to prevent cross-test interference on local multi-worker runs (CI is already single-worker via workers: CI ? 1 : undefined in playwright.config.ts)

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
Replace id.includes('testdb_delete') with
id === 'testdb_delete' || id.endsWith('_testdb_delete') in both
delete test predicates and find() calls so only the exact graph forms
('testdb_delete' or '{userId}_testdb_delete') match, preventing
accidental matches on unrelated graph names.
Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
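As a standalone sketch, the exact-match predicate described in this commit behaves like this. The helper name is illustrative; the real tests inline the expression in `find()` calls and predicates:

```typescript
// Hypothetical helper wrapping the commit's exact-match predicate.
// Matches only 'testdb_delete' or '{userId}_testdb_delete' — never unrelated
// ids that merely contain the substring 'testdb_delete'.
function isTestDeleteGraph(id: string): boolean {
  return id === "testdb_delete" || id.endsWith("_testdb_delete");
}
```

Compared with the old `id.includes('testdb_delete')`, this rejects ids such as `mytestdb_delete_old` or `testdb_deleted`, which is exactly the accidental-match case the commit fixes.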
…wright-logs Fix flaky Playwright e2e tests: DB connection timeouts and streaming response errors
…ht-1.58.0 Bump playwright from 1.57.0 to 1.58.0
✅ Workflow "Code Review" completed successfully.

🚅 Deployed to the QueryWeaver-pr-438 environment in queryweaver

Dependency Review: license issues were found in Pipfile.lock and package-lock.json.
📝 Walkthrough

Adds server-side CSRF middleware and client-side CSRF helpers, wires CSRF protection into frontend requests and E2E helpers, updates E2E API utilities with polling/retry for DB connections, and bumps several dependencies in Pipfile and app/package.json.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 inconclusive
Pull request overview
This PR syncs staging into main by updating dependency lockfiles and improving Playwright E2E database tests to be more reliable in CI (timeouts, retries, and polling for eventual consistency).
Changes:
- Bump the frontend app version in `package-lock.json` and update lockfile metadata.
- Improve E2E database connection/deletion stability via serial execution, longer timeouts, and API polling/retries.
- Update Python Playwright dev dependency from `1.57.0` to `1.58.0` (and refresh `Pipfile.lock`).
Reviewed changes
Copilot reviewed 3 out of 5 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| package-lock.json | Updates app version metadata and lockfile entries. |
| e2e/tests/database.spec.ts | Makes DB tests serial and more CI-robust via explicit timeouts, retries, and polling. |
| e2e/logic/api/apiCalls.ts | Adds waitForGraphs polling helper and connectDatabaseWithRetry to reduce flaky streaming connect behavior. |
| Pipfile | Bumps Python Playwright dev dependency to ~=1.58.0. |
| Pipfile.lock | Updates lock hashes and Playwright pinned version to ==1.58.0. |
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
e2e/tests/database.spec.ts (1)
354-361: ⚠️ Potential issue | 🟠 Major

Missing `waitForDatabaseConnection` before UI interaction — inconsistent with the MySQL delete test.

The PostgreSQL delete test creates a new page (line 355) and immediately interacts with the database selector dropdown (line 359) without waiting for the UI to reflect the connected database. Compare with the MySQL delete test (lines 407-411), which correctly calls `homePage.waitForDatabaseConnection(90000)` before UI interaction. This omission could cause flaky failures in CI when the UI hasn't finished loading the schema.
Proposed fix
```diff
 const homePage = await browser.createNewPage(HomePage, getBaseUrl());
 await browser.setPageToFullScreen();
+// Wait for UI to reflect the connection (increased timeout for schema loading)
+const connectionEstablished = await homePage.waitForDatabaseConnection(90000);
+if (!connectionEstablished) {
+  console.log('[PostgreSQL delete] waitForDatabaseConnection timed out');
+}
+expect(connectionEstablished).toBeTruthy();
+
 // Delete via UI - open dropdown, click delete, confirm
 await homePage.clickOnDatabaseSelector();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/tests/database.spec.ts` around lines 354 - 361, Add a wait for the UI to finish connecting before interacting with the database selector: call homePage.waitForDatabaseConnection(90000) after creating the page (after browser.createNewPage/HomePage) and before homePage.clickOnDatabaseSelector(); this mirrors the MySQL delete test and prevents flakiness when invoking homePage.clickOnDatabaseSelector(), homePage.clickOnDeleteGraph(graphId!), and homePage.clickOnDeleteModalConfirm().
🧹 Nitpick comments (1)
e2e/tests/database.spec.ts (1)
22-290: Consider parameterizing PostgreSQL/MySQL test pairs to reduce duplication.

Each connection mode (API, URL, Manual) has near-identical PostgreSQL and MySQL tests differing only in the connection URL, database type string, and log label. This could be consolidated using a data-driven pattern:

```typescript
const databases = [
  { name: 'PostgreSQL', type: 'postgresql', url: () => getTestDatabases().postgres },
  { name: 'MySQL', type: 'mysql', url: () => getTestDatabases().mysql },
];

for (const db of databases) {
  test(`connect ${db.name} via API -> verify in UI`, async () => {
    // shared logic using db.type, db.url(), db.name
  });
}
```

This would halve the test code and make it easier to add new database types in the future.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/tests/database.spec.ts` around lines 22 - 290, Tests for PostgreSQL and MySQL are duplicated across connection modes; replace the repeated test blocks with a data-driven loop using a databases array (e.g., const databases = [{ name: 'PostgreSQL', type: 'postgresql', url: () => getTestDatabases().postgres }, { name: 'MySQL', type: 'mysql', url: () => getTestDatabases().mysql }]) and iterate to create tests for each mode. For each generated test, reuse the shared logic that calls apiCall.connectDatabaseWithRetry, apiCall.waitForGraphs, and the HomePage methods (waitForDatabaseConnection, isDatabaseConnected, getSelectedDatabaseName, clickOnDatabaseSelector, isDatabaseInList, clickOnConnectDatabase, selectDatabaseType, selectConnectionModeUrl/selectConnectionModeManual, enterConnectionUrl, enterConnectionDetails, clickOnDatabaseModalConnect) while substituting db.type, db.url(), and db.name for the database-specific values and log labels. Ensure assertions and timeouts remain the same and preserve the graphId lookup (graphsList.find(...)) and final expectations. Finally, remove the original duplicated test blocks so each scenario is only defined once per database via the loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@e2e/tests/database.spec.ts`:
- Around line 354-361: Add a wait for the UI to finish connecting before
interacting with the database selector: call
homePage.waitForDatabaseConnection(90000) after creating the page (after
browser.createNewPage/HomePage) and before homePage.clickOnDatabaseSelector();
this mirrors the MySQL delete test and prevents flakiness when invoking
homePage.clickOnDatabaseSelector(), homePage.clickOnDeleteGraph(graphId!), and
homePage.clickOnDeleteModalConfirm().
---
Nitpick comments:
In `@e2e/tests/database.spec.ts`:
- Around line 22-290: Tests for PostgreSQL and MySQL are duplicated across
connection modes; replace the repeated test blocks with a data-driven loop using
a databases array (e.g., const databases = [{ name: 'PostgreSQL', type:
'postgresql', url: () => getTestDatabases().postgres }, { name: 'MySQL', type:
'mysql', url: () => getTestDatabases().mysql }]) and iterate to create tests for
each mode. For each generated test, reuse the shared logic that calls
apiCall.connectDatabaseWithRetry, apiCall.waitForGraphs, and the HomePage
methods (waitForDatabaseConnection, isDatabaseConnected,
getSelectedDatabaseName, clickOnDatabaseSelector, isDatabaseInList,
clickOnConnectDatabase, selectDatabaseType,
selectConnectionModeUrl/selectConnectionModeManual, enterConnectionUrl,
enterConnectionDetails, clickOnDatabaseModalConnect) while substituting db.type,
db.url(), and db.name for the database-specific values and log labels. Ensure
assertions and timeouts remain the same and preserve the graphId lookup
(graphsList.find(...)) and final expectations. Finally, remove the original
duplicated test blocks so each scenario is only defined once per database via
the loop.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `Pipfile.lock` is excluded by `!**/*.lock`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (3)
- `Pipfile`
- `e2e/logic/api/apiCalls.ts`
- `e2e/tests/database.spec.ts`
Summary: 6 MAJOR, 1 MINOR findings. Dependency and lockfile drift currently breaks both Pipenv and npm installs. The new API helper logic swallows underlying HTTP/timeout failures, causing retries to return misleading data and making test flakiness harder to diagnose. Polling now logs full graph payloads, which unnecessarily exposes identifiers in CI logs.
Next steps: (1) Re-align Pipfile/Pipfile.lock and regenerate package-lock.json using the package manager without manual peer flags. (2) Update the API helpers to throw once retries/polling time out and preserve HTTP status/body information. (3) Reduce graph logging verbosity or gate it behind a debug flag to avoid leaking sensitive details.
```diff
 pytest = "~=8.4.2"
 pylint = "~=4.0.3"
-playwright = "~=1.57.0"
+playwright = "~=1.58.0"
```
[major]: Pipfile still pins `pytest-playwright ~=0.7.1` but the regenerated lockfile resolves pytest-playwright 0.7.2. Pipenv will refuse to install because the lock hashes no longer match the Pipfile entry, so the Playwright 1.58 upgrade cannot be installed. Keep both files on the same version (ideally 0.7.2, which is the first release that supports Playwright 1.58) and regenerate the lockfile.
```diff
 "version": "22.19.7",
 "dev": true,
 "license": "MIT",
+"peer": true,
```
[major]: The added "peer": true flags are not produced by npm itself—running npm install immediately rewrites the lock and removes them, leaving the repo dirty and the lockfile non‑deterministic. Please regenerate the lock via the package manager and drop these manual edits so installs do not constantly produce diffs.
```diff
 "app/node_modules/react": {
   "version": "18.3.1",
   "license": "MIT",
+  "peer": true,
```
[major]: Marking core packages such as React, Tailwind, date-fns, @types/*, etc. with "peer": true means npm will omit them when installs are run with --omit=peer (common for production deploys), causing runtime/build failures because those packages are actually direct deps. They should remain normal dependencies in the lockfile.
```typescript
      await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
    }
  }
  console.log(
```
[major]: connectDatabaseWithRetry catches every exception and, after all attempts fail, just returns the last (possibly empty) message array. Callers now proceed as if they received a response, so tests fail later with vague length/final_result assertions and the original HTTP/parse error is lost. After the last attempt the helper should throw with the most recent error or status so failures surface immediately with actionable context.
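The fix this comment asks for can be sketched as a generic retry wrapper that records the last error and rethrows after exhaustion. This is a hypothetical helper, not the repository's `connectDatabaseWithRetry`; the name, signature, and message format are illustrative:

```typescript
// Hypothetical retry wrapper: surfaces the most recent error instead of
// silently returning a possibly-empty result after all attempts fail.
async function retryOrThrow<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3,
  retryDelayMs = 1_000
): Promise<T> {
  let lastError: unknown;
  for (let i = 1; i <= maxAttempts; i++) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err;
      console.error(`[retryOrThrow] attempt ${i}/${maxAttempts} failed:`, err);
      if (i < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
      }
    }
  }
  // All attempts exhausted: fail loudly with the last underlying error.
  throw new Error(`retryOrThrow: all ${maxAttempts} attempts failed: ${String(lastError)}`);
}
```

With this shape, callers either get a real result or an exception carrying actionable context, rather than an empty message array they index into blindly.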
```typescript
  await new Promise((resolve) =>
    setTimeout(resolve, Math.min(pollIntervalMs, remaining))
  );
}
```
[major]: waitForGraphs always returns the last list even when the predicate is never satisfied, so higher-level tests cannot distinguish a real timeout from a legitimate state (e.g. deleting a graph). This silently masks flakes and makes retries continue forever. The helper should reject once timeoutMs elapses (or resolve with a flag) so callers can fail fast with a clear timeout reason instead of treating an incomplete list as success.
[minor]: When the predicate fails, waitForGraphs dumps every graph id in the console. In CI this leaks real customer database names and can produce multi‑kilobyte logs on every flake. Consider logging only the count or matching ids and gate the verbose dump behind a debug flag.
```typescript
  .split("|||FALKORDB_MESSAGE_BOUNDARY|||")
  .filter((msg) => msg.trim())
  .map((msg) => JSON.parse(msg.trim()));
// Log error messages to help diagnose CI failures
```
[major]: parseStreamingResponse now swallows response-level errors: if the body is empty or not JSON, the catch path throws a new Failed to parse error without logging the HTTP status/text. Combined with the new retry helper, this makes debugging very difficult. Preserve the original status/body (or rethrow) so callers can see whether the backend failed vs. the parser itself.
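A minimal sketch of the behaviour this comment suggests — splitting on the boundary marker shown in the diff while preserving the HTTP status and a body excerpt on parse failure. The function name and signature are illustrative, not the project's actual `parseStreamingResponse`:

```typescript
// Boundary string as it appears in the diff above.
const BOUNDARY = "|||FALKORDB_MESSAGE_BOUNDARY|||";

// Hypothetical parser: on failure, the thrown error carries the status and a
// truncated body excerpt so callers can tell a backend failure from a parser bug.
function parseStreamingBody(status: number, body: string): unknown[] {
  try {
    return body
      .split(BOUNDARY)
      .filter((msg) => msg.trim())
      .map((msg) => JSON.parse(msg.trim()));
  } catch (err) {
    throw new Error(
      `Failed to parse streaming response (status ${status}): ${String(err)}; ` +
        `body starts: ${body.slice(0, 200)}`
    );
  }
}
```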
Update dependency versions:
- fastapi: ~=0.131.0 → ~=0.133.0
- uvicorn: ~=0.40.0 → ~=0.41.0
- litellm: ~=1.80.9 → ~=1.81.15
- playwright: ~=1.57.0 → ~=1.58.0
- globals (npm): ^15.15.0 → ^17.3.0

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
e2e/tests/database.spec.ts (1)
354-360: ⚠️ Potential issue | 🟠 Major

PostgreSQL delete test skips `waitForDatabaseConnection` before UI interaction — unlike the MySQL delete test.

The MySQL delete test (lines 407-411) correctly awaits `waitForDatabaseConnection(90000)` before opening the database selector, but the PostgreSQL delete test jumps straight to `clickOnDatabaseSelector()` on a freshly created page without any UI-readiness check. The schema may still be loading at that point, making this test prone to flakiness where the selector dropdown does not yet contain `graphId`.

🐛 Proposed fix

```diff
 const homePage = await browser.createNewPage(HomePage, getBaseUrl());
 await browser.setPageToFullScreen();
+// Wait for UI to reflect the connection (increased timeout for schema loading)
+const connectionEstablished = await homePage.waitForDatabaseConnection(90000);
+if (!connectionEstablished) {
+  console.log('[PostgreSQL delete] waitForDatabaseConnection timed out');
+}
+expect(connectionEstablished).toBeTruthy();
+
 // Delete via UI - open dropdown, click delete, confirm
 await homePage.clickOnDatabaseSelector();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/tests/database.spec.ts` around lines 354 - 360, The PostgreSQL delete test is missing the UI readiness wait used in the MySQL test; before calling HomePage.clickOnDatabaseSelector() in the block that creates homePage (created via browser.createNewPage(HomePage, getBaseUrl())), call and await waitForDatabaseConnection(90000) to ensure the schema and selector are loaded, then proceed to clickOnDatabaseSelector() and clickOnDeleteGraph(graphId!); this mirrors the MySQL delete test flow and prevents flakiness.
♻️ Duplicate comments (4)
Pipfile (1)
25-26: ⚠️ Potential issue | 🟠 Major

`playwright`/`pytest-playwright` version mismatch is still unresolved.

`playwright` was bumped to `~=1.58.0` but `pytest-playwright` remains at `~=0.7.1`. pytest-playwright `0.7.2` was released November 24, 2025, while `0.7.1` was released September 8, 2025. Bump `pytest-playwright` to `~=0.7.2` and regenerate `Pipfile.lock` to keep hashes consistent.

🐛 Proposed fix

```diff
-pytest-playwright = "~=0.7.1"
+pytest-playwright = "~=0.7.2"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Pipfile` around lines 25 - 26, Update the pytest-playwright dependency to match the Playwright bump by changing "pytest-playwright" from "~=0.7.1" to "~=0.7.2" in the Pipfile and then regenerate the Pipfile.lock (e.g., run pipenv lock) so the lockfile hashes and resolved versions are consistent with the updated "playwright" and "pytest-playwright" entries.

e2e/tests/database.spec.ts (1)
363-368: ⚠️ Potential issue | 🟠 Major

Deletion-completion polling inherits the silent-timeout risk from `waitForGraphs`.

If the graph is not removed within 30 seconds, `waitForGraphs` returns the still-populated list and the subsequent `expect(graphsList.length).toBe(initialCount - 1)` assertion fails with a misleading count diff rather than a clear "deletion timed out" message. Once `waitForGraphs` is fixed to reject on timeout (the root issue flagged in `e2e/logic/api/apiCalls.ts`), this will improve automatically.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/tests/database.spec.ts` around lines 363 - 368, The deletion polling here relies on waitForGraphs; update the test to detect a timeout and fail with a clear message instead of relying on the final length assertion: after calling apiCall.waitForGraphs((graphs) => !graphs.some((id) => id === graphId), 30000) check whether the returned graphsList still contains graphId or whether waitForGraphs signaled a timeout and, if so, throw an explicit test error like "deletion of graphId timed out after 30s" (reference graphsList, graphId, initialCount and the expect(...).toBe(...) assertion) so failures show a clear timed-out message rather than a misleading count difference.

e2e/logic/api/apiCalls.ts (2)
282-310: ⚠️ Potential issue | 🟠 Major

`waitForGraphs` silently returns stale data on timeout instead of rejecting.

When the predicate is never satisfied, the helper returns the last observed list unchanged, making it impossible for callers to distinguish a successful wait from a timed-out one. Higher-level assertions (e.g., `expect(graphsList.length).toBe(initialCount - 1)`) then produce vague diff failures with no indication that a timeout occurred. Additionally, the timeout log at line 307 dumps the full `JSON.stringify(lastGraphs)` array to stdout on every CI flake, which leaks internal graph/database IDs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/logic/api/apiCalls.ts` around lines 282 - 310, The waitForGraphs helper currently returns stale lastGraphs on timeout and logs the full JSON, making timeouts indistinguishable and leaking IDs; update waitForGraphs (the async function that calls getGraphs with predicate, timeoutMs and pollIntervalMs) to throw/reject a descriptive Error when the deadline elapses instead of returning lastGraphs, include in the error a concise, non-sensitive summary (e.g., timeoutMs and lastGraphs.length or a sanitized summary) so callers can detect timeouts, and remove the JSON.stringify(lastGraphs) dump from the timeout log (or replace it with the non-sensitive summary) to avoid leaking internal IDs.
316-349: ⚠️ Potential issue | 🟠 Major

`connectDatabaseWithRetry` swallows all errors and returns an empty array after exhaustion.

After all attempts fail, `lastMessages` may still be `[]`. Callers immediately index into it with `messages[messages.length - 1]` (e.g., `database.spec.ts` lines 33/85/329/380), and the original error that caused every attempt to fail is never surfaced, making CI failures very hard to diagnose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/logic/api/apiCalls.ts` around lines 316 - 349, connectDatabaseWithRetry currently swallows all errors and can return an empty lastMessages array, causing callers to index into an empty array and losing the original error; capture and rethrow the last thrown error (or throw a new Error that includes the last error message and any lastMessages content) after all retries fail. Specifically, inside connectDatabaseWithRetry add a variable (e.g., lastError) to store the caught error from each catch block (reference connectDatabase and parseStreamingResponse where the error originates), and after the retry loop, if lastMessages is empty or final_result was never returned, throw lastError (or throw new Error(`connectDatabaseWithRetry failed after ${maxAttempts} attempts: ${lastError?.message} ; lastMessages: ${JSON.stringify(lastMessages)}`)) instead of returning lastMessages so callers (e.g., database.spec.ts) receive the original failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/logic/api/apiCalls.ts`:
- Around line 261-268: The current parsing/logging in parseStreamingResponse
logs full StreamMessage objects via JSON.stringify(errorMessages), which can
leak sensitive fields (content, sql_query, data); change the logging to only
emit non-sensitive metadata: map errorMessages to objects containing type and a
short truncated excerpt (e.g., first N chars) of message/text, or completely
omit content unless a CI_VERBOSE or DEBUG env flag is set; update the log site
that references errorMessages in parseStreamingResponse to use this redacted
mapping (reference StreamMessage shape in e2e/logic/api/apiResponses.ts) and
gate verbose dumps behind the env flag.
---
Outside diff comments:
In `@e2e/tests/database.spec.ts`:
- Around line 354-360: The PostgreSQL delete test is missing the UI readiness
wait used in the MySQL test; before calling HomePage.clickOnDatabaseSelector()
in the block that creates homePage (created via browser.createNewPage(HomePage,
getBaseUrl())), call and await waitForDatabaseConnection(90000) to ensure the
schema and selector are loaded, then proceed to clickOnDatabaseSelector() and
clickOnDeleteGraph(graphId!); this mirrors the MySQL delete test flow and
prevents flakiness.
---
Duplicate comments:
In `@e2e/logic/api/apiCalls.ts`:
- Around line 282-310: The waitForGraphs helper currently returns stale
lastGraphs on timeout and logs the full JSON, making timeouts indistinguishable
and leaking IDs; update waitForGraphs (the async function that calls getGraphs
with predicate, timeoutMs and pollIntervalMs) to throw/reject a descriptive
Error when the deadline elapses instead of returning lastGraphs, include in the
error a concise, non-sensitive summary (e.g., timeoutMs and lastGraphs.length or
a sanitized summary) so callers can detect timeouts, and remove the
JSON.stringify(lastGraphs) dump from the timeout log (or replace it with the
non-sensitive summary) to avoid leaking internal IDs.
- Around line 316-349: connectDatabaseWithRetry currently swallows all errors
and can return an empty lastMessages array, causing callers to index into an
empty array and losing the original error; capture and rethrow the last thrown
error (or throw a new Error that includes the last error message and any
lastMessages content) after all retries fail. Specifically, inside
connectDatabaseWithRetry add a variable (e.g., lastError) to store the caught
error from each catch block (reference connectDatabase and
parseStreamingResponse where the error originates), and after the retry loop, if
lastMessages is empty or final_result was never returned, throw lastError (or
throw new Error(`connectDatabaseWithRetry failed after ${maxAttempts} attempts:
${lastError?.message} ; lastMessages: ${JSON.stringify(lastMessages)}`)) instead
of returning lastMessages so callers (e.g., database.spec.ts) receive the
original failure.
In `@e2e/tests/database.spec.ts`:
- Around line 363-368: The deletion polling here relies on waitForGraphs; update
the test to detect a timeout and fail with a clear message instead of relying on
the final length assertion: after calling apiCall.waitForGraphs((graphs) =>
!graphs.some((id) => id === graphId), 30000) check whether the returned
graphsList still contains graphId or whether waitForGraphs signaled a timeout
and, if so, throw an explicit test error like "deletion of graphId timed out
after 30s" (reference graphsList, graphId, initialCount and the
expect(...).toBe(...) assertion) so failures show a clear timed-out message
rather than a misleading count difference.
In `@Pipfile`:
- Around line 25-26: Update the pytest-playwright dependency to match the
Playwright bump by changing "pytest-playwright" from "~=0.7.1" to "~=0.7.2" in
the Pipfile and then regenerate the Pipfile.lock (e.g., run pipenv lock) so the
lockfile hashes and resolved versions are consistent with the updated
"playwright" and "pytest-playwright" entries.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- `Pipfile.lock` is excluded by `!**/*.lock`
- `app/package-lock.json` is excluded by `!**/package-lock.json`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (4)
- `Pipfile`
- `app/package.json`
- `e2e/logic/api/apiCalls.ts`
- `e2e/tests/database.spec.ts`
```typescript
// Log error messages to help diagnose CI failures
const errorMessages = messages.filter((m) => m.type === "error");
if (errorMessages.length > 0) {
  console.log(
    `[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`,
    JSON.stringify(errorMessages)
  );
}
```
Error-message logging may expose sensitive payload data in CI logs.
JSON.stringify(errorMessages) serializes the full StreamMessage objects, which include content, message, sql_query, data, and other fields that may carry PII or sensitive query results (see e2e/logic/api/apiResponses.ts lines 87-109). Consider logging only the type and—if necessary—a truncated excerpt of message, or gate verbose output behind an environment flag.
♻️ Proposed safer log
```diff
-    console.log(
-      `[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`,
-      JSON.stringify(errorMessages)
-    );
+    console.log(
+      `[parseStreamingResponse] HTTP status: ${response.status()}, ${errorMessages.length} error message(s):`,
+      errorMessages.map((m) => ({ type: m.type, message: m.message?.slice(0, 200) }))
+    );
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Log error messages to help diagnose CI failures | |
| const errorMessages = messages.filter((m) => m.type === "error"); | |
| if (errorMessages.length > 0) { | |
| console.log( | |
| `[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`, | |
| JSON.stringify(errorMessages) | |
| ); | |
| } | |
| // Log error messages to help diagnose CI failures | |
| const errorMessages = messages.filter((m) => m.type === "error"); | |
| if (errorMessages.length > 0) { | |
| console.log( | |
| `[parseStreamingResponse] HTTP status: ${response.status()}, ${errorMessages.length} error message(s):`, | |
| errorMessages.map((m) => ({ type: m.type, message: m.message?.slice(0, 200) })) | |
| ); | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/logic/api/apiCalls.ts` around lines 261 - 268, The current
parsing/logging in parseStreamingResponse logs full StreamMessage objects via
JSON.stringify(errorMessages), which can leak sensitive fields (content,
sql_query, data); change the logging to only emit non-sensitive metadata: map
errorMessages to objects containing type and a short truncated excerpt (e.g.,
first N chars) of message/text, or completely omit content unless a CI_VERBOSE
or DEBUG env flag is set; update the log site that references errorMessages in
parseStreamingResponse to use this redacted mapping (reference StreamMessage
shape in e2e/logic/api/apiResponses.ts) and gate verbose dumps behind the env
flag.
* Return generic 400 for RequestValidationError instead of Pydantic details
Add a global RequestValidationError exception handler that returns
{"detail": "Bad request"} with status 400, preventing internal
Pydantic validation details from leaking to clients. This primarily
affects the SPA catch-all proxy route when accessed without the
expected path parameter.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Scope validation handler to SPA catch-all, add logging, fix tests
Address PR review feedback:
- Scope the generic 400 handler to only the SPA catch-all route
(query._full_path errors) so API consumers still get useful 422
responses with field-level detail
- Add logging.warning of validation details for server-side debugging
- Make test assertions unconditional instead of guarding behind
status-code checks
- Add test verifying API routes preserve 422 with field-level info
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix SPA catch-all route parameter name mismatch
The function parameter `_full_path` didn't match the URL template
`{full_path:path}`, causing FastAPI to treat it as a required query
parameter and return 422 for every non-API route.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Remove validation error handler workaround
The handler was masking a parameter name mismatch in the catch-all
route. Now that the root cause is fixed, the handler, its import,
pylint suppression, and test file are no longer needed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Suppress pylint unused-argument for catch-all route parameter
The parameter name must match the URL template to avoid validation
errors, but the function body doesn't use it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Add CSRF protection via double-submit cookie pattern
Add CSRFMiddleware to protect all state-changing endpoints (POST, PUT,
DELETE, PATCH) against cross-site request forgery attacks.
Backend:
- New CSRFMiddleware in app_factory.py sets a csrf_token cookie
  (non-HttpOnly, readable by JS) on every response
- State-changing requests must echo the token via X-CSRF-Token header
- Uses hmac.compare_digest for timing-safe validation
- Exempts Bearer token auth (not CSRF-vulnerable), login/signup/OAuth
  flows, and MCP endpoints
Frontend:
- New app/src/lib/csrf.ts utility reads the cookie and builds headers
- All service files (auth, tokens, database, chat) now include the
  X-CSRF-Token header on every state-changing fetch call
Fixes:
- CSRF on POST /tokens/generate (API token hijack)
- CSRF on POST /logout (forced session termination)
- Missing CSRF protection on all other mutating endpoints
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Address PR review feedback on CSRF protection
- Set CSRF cookie on 403 rejection responses so clients can retry
- Add max_age (14 days) to CSRF cookie matching session cookie lifetime
- Guard document access in csrf.ts for SSR/Node compatibility
- Add console.warn when CSRF cookie is missing for easier debugging
- Add comment clarifying MCP exempt prefix pattern
- Add comprehensive unit tests for CSRF middleware (12 test cases)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix E2E tests: seed CSRF token in API request helpers
The E2E API helpers (postRequest, deleteRequest, patchRequest) now make
a lightweight GET to /auth-status first to obtain the csrf_token cookie,
then include it as X-CSRF-Token header on the actual request. This
ensures E2E tests pass with the new CSRF middleware.
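The frontend half of the double-submit pattern described above can be sketched roughly as follows. This is an illustrative standalone helper, not the actual app/src/lib/csrf.ts API: the cookie name `csrf_token` and header name `X-CSRF-Token` come from the commit message, while the function names and the explicit `cookieString` parameter (instead of reading `document.cookie` directly) are assumptions made so the sketch stays SSR-safe and testable:

```typescript
// Sketch of a double-submit CSRF helper: read the non-HttpOnly cookie
// and echo it back as a request header on state-changing calls.
const CSRF_COOKIE = "csrf_token";
const CSRF_HEADER = "X-CSRF-Token";

function readCsrfCookie(cookieString: string): string {
  const match = cookieString.match(
    new RegExp(`(?:^|;\\s*)${CSRF_COOKIE}=([^;]+)`)
  );
  if (!match) return "";
  try {
    return decodeURIComponent(match[1]);
  } catch {
    // Malformed percent-encoding: send no token rather than throw.
    return "";
  }
}

function csrfHeaders(cookieString: string): Record<string, string> {
  const token = readCsrfCookie(cookieString);
  return token ? { [CSRF_HEADER]: token } : {};
}
```

In the browser this would be called as `fetch(url, { method: "POST", headers: { ...csrfHeaders(document.cookie) } })`, with a `typeof document !== "undefined"` guard for SSR.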
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Address PR #432 review: fix missing CSRF headers and improvements
- Add csrfHeaders() to POST /graphs/{id}/refresh in Index.tsx
- Add csrfHeaders() to POST /database in DatabaseModal.tsx
- Refactor CSRFMiddleware.dispatch() to single return path
- Change console.warn to console.debug in csrf.ts
- Cache CSRF token per APIRequestContext in E2E helpers
- Add DELETE/PATCH and secure-flag tests
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: update @falkordb/canvas version to 0.0.34 in package.json and package-lock.json
* fix: update @falkordb/canvas version to 0.0.35 in package.json and package-lock.json
* fix: update @falkordb/canvas version to 0.0.36 in package.json and package-lock.json
* fix: update @falkordb/canvas version to 0.0.40 in package.json and package-lock.json
* fix: update @falkordb/canvas version to 1.51.1 in package-lock.json
---------
Co-authored-by: Guy Korland <gkorland@gmail.com>
… pages
- Add defaultRequestContext field to ApiCalls class, set via constructor
- All API methods now use the default context for auth (session cookies + CSRF)
- Tests use Playwright's request fixture which inherits storageState from config
- Pass storageState path to BrowserWrapper.createNewPage for authenticated browser sessions
- Revert outer test.describe.serial() to test.describe() to prevent cascade
  failures (inner Database Deletion Tests remain serial as needed)
Fixes unauthenticated API requests that caused 401 errors in Firefox E2E tests.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
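The shape of that change can be sketched as follows. This is a reduced illustration, not the real ApiCalls class: the context interface is trimmed to a single `get` method (Playwright's actual APIRequestContext has many more), and `getGraphs` here is a placeholder standing in for the real API helpers:

```typescript
// Minimal sketch: ApiCalls carries a default request context set at
// construction, so every call inherits session cookies + CSRF state
// instead of creating unauthenticated ad-hoc requests.
interface RequestContextLike {
  get(url: string): Promise<{ status(): number }>;
}

class ApiCalls {
  private readonly defaultRequestContext: RequestContextLike;

  constructor(defaultRequestContext: RequestContextLike) {
    this.defaultRequestContext = defaultRequestContext;
  }

  // Every API helper routes through the shared authenticated context.
  async getGraphs(baseUrl: string): Promise<number> {
    const resp = await this.defaultRequestContext.get(`${baseUrl}/api/graphs`);
    return resp.status();
  }
}
```

In the tests, Playwright's `request` fixture (which inherits `storageState` from the config) would be passed in as the context, so browser and API calls share one authenticated session.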
Actionable comments posted: 6
♻️ Duplicate comments (3)
e2e/logic/api/apiCalls.ts (3)
290-318: ⚠️ Potential issue | 🟠 Major
Timeout path should fail explicitly instead of returning ambiguous data.
When the timeout is hit, Line [317] returns lastGraphs as if it succeeded. Callers cannot reliably distinguish a timeout from a valid terminal state.
💡 Suggested fix
-    console.log(
-      `[waitForGraphs] timed out after ${timeoutMs}ms. Last graphs: ${JSON.stringify(lastGraphs)}`
-    );
-    return lastGraphs;
+    throw new Error(
+      `[waitForGraphs] timed out after ${timeoutMs}ms. Last observed graphs count: ${lastGraphs.length}`
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/logic/api/apiCalls.ts` around lines 290 - 318, The timeout path in waitForGraphs currently returns lastGraphs making callers unable to tell if the predicate succeeded or a timeout occurred; update waitForGraphs (the async function) to throw an explicit Error when the deadline is exceeded instead of returning lastGraphs—include contextual details (timeoutMs and a JSON.stringify(lastGraphs) snapshot) in the error message so callers can distinguish timeout failures from valid responses while preserving existing logging.
269-275: ⚠️ Potential issue | 🟡 Minor
Avoid logging full streaming error payloads in CI.
Line [274] serializes full StreamMessage error objects, which can expose sensitive content fields. Log only minimal metadata (type/count, truncated message).
💡 Suggested fix
   if (errorMessages.length > 0) {
     console.log(
-      `[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`,
-      JSON.stringify(errorMessages)
+      `[parseStreamingResponse] HTTP status: ${response.status()}, ${errorMessages.length} error message(s):`,
+      errorMessages.map((m) => ({ type: m.type, message: m.message?.slice(0, 200) }))
     );
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/logic/api/apiCalls.ts` around lines 269 - 275, The current parseStreamingResponse logging prints full serialized StreamMessage error objects which may leak sensitive payloads; update the logging in parseStreamingResponse to avoid serializing entire messages and instead log minimal metadata: include response.status(), errorMessages.length, and for each error message log only its type and a truncated preview of its content (e.g., first 100–200 chars) or a single truncated message sample; reference the local variables messages and errorMessages and replace JSON.stringify(errorMessages) with constructing a safe summary (type/count + truncated text) before calling console.log.
324-357: ⚠️ Potential issue | 🟠 Major
Do not swallow terminal connect failures after retries are exhausted.
After the final attempt, Line [356] returns lastMessages instead of throwing, which hides the root cause and lets callers continue with invalid assumptions.
💡 Suggested fix
   async connectDatabaseWithRetry(
     connectionUrl: string,
     maxAttempts: number = 3,
     retryDelayMs: number = 3000
   ): Promise<StreamMessage[]> {
     let lastMessages: StreamMessage[] = [];
+    let lastError: Error | undefined;
     for (let attempt = 1; attempt <= maxAttempts; attempt++) {
       try {
         const response = await this.connectDatabase(connectionUrl);
         const messages = await this.parseStreamingResponse(response);
         const finalMessage = messages[messages.length - 1];
         if (finalMessage && finalMessage.type === "final_result") {
           return messages;
         }
         console.log(
           `[connectDatabaseWithRetry] attempt ${attempt}/${maxAttempts} did not return final_result.`,
           `Last message: ${JSON.stringify(finalMessage)}`
         );
         lastMessages = messages;
       } catch (err) {
+        lastError = err as Error;
         console.error(
           `[connectDatabaseWithRetry] attempt ${attempt}/${maxAttempts} threw an error:`,
           (err as Error).message
         );
       }
       if (attempt < maxAttempts) {
         await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
       }
     }
-    console.log(
-      `[connectDatabaseWithRetry] all ${maxAttempts} attempts exhausted. Last messages: ${JSON.stringify(lastMessages)}`
-    );
-    return lastMessages;
+    throw new Error(
+      `[connectDatabaseWithRetry] all ${maxAttempts} attempts exhausted.` +
+        (lastError ? ` Last error: ${lastError.message}` : ` Last messages: ${JSON.stringify(lastMessages)}`)
+    );
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/logic/api/apiCalls.ts` around lines 324 - 357, connectDatabaseWithRetry currently returns lastMessages after all retries, which swallows terminal failures; update the function (connectDatabaseWithRetry) to throw a meaningful Error when no final_result is received after maxAttempts instead of returning lastMessages. Track the last caught error (e.g., lastError) and include either lastError.message or a summary of lastMessages in the thrown Error so callers can handle failures explicitly; ensure the thrown Error contains context like attempt count and the last response details.
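Combining the review items above (retry through transient errors, throw explicitly on timeout rather than returning stale data), a self-contained polling helper might look like this. It is a sketch, not the repository's waitForGraphs: the graph-listing function is injected as a parameter so the example stands alone, and graphs are modeled as plain strings:

```typescript
// Sketch: poll a graph-listing function until a predicate holds,
// throwing (rather than returning stale data) when the deadline passes.
// Transient fetch errors are logged and do not abort the loop.
async function waitForGraphs(
  getGraphs: () => Promise<string[]>,
  predicate: (graphs: string[]) => boolean,
  timeoutMs = 30_000,
  pollIntervalMs = 1_000
): Promise<string[]> {
  const deadline = Date.now() + timeoutMs;
  let lastGraphs: string[] = [];
  while (Date.now() < deadline) {
    try {
      lastGraphs = await getGraphs();
      if (predicate(lastGraphs)) return lastGraphs;
    } catch (err) {
      // Transient network/parse errors should not end the poll.
      console.error(`[waitForGraphs] poll threw: ${(err as Error).message}`);
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
  throw new Error(
    `[waitForGraphs] timed out after ${timeoutMs}ms. Last graphs: ${JSON.stringify(lastGraphs)}`
  );
}
```

The same helper serves both directions used in the tests: presence (`(g) => g.includes("testdb")`) after a connect, and absence (`(g) => !g.includes(graphId)`) after a delete.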
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/app_factory.py`:
- Around line 90-94: The current CSRF bypass checks skip protection solely when
an Authorization: Bearer header exists; change the condition so that
Bearer-based exemption only applies when there is no browser session cookie
present. Update the if in the CSRF check (the block using SAFE_METHODS,
EXEMPT_PREFIXES and request.headers.get("authorization")) to also require that
request has no cookies (e.g., request.headers.get("cookie") is empty) or
specifically lacks your session cookie name (e.g., "session" / "sessionid" /
"auth"), so that requests with both a Bearer header and a session cookie do not
bypass CSRF; keep the rest of the logic (SAFE_METHODS, EXEMPT_PREFIXES)
unchanged.
- Around line 61-64: The current check uses exact equality on the raw header
value from request.headers.get("x-forwarded-proto"), which fails for case
variants or comma-separated values; update the logic around the forwarded_proto
variable so you normalize and robustly parse the header (e.g., if
forwarded_proto exists, convert to lowercase, split on commas, strip whitespace
and use the first non-empty token) and then compare that token to "https"; keep
the existing fallback to request.url.scheme == "https" when the header is
missing.
In `@app/src/lib/csrf.ts`:
- Line 28: The call to decodeURIComponent in the get-cookie logic can throw on
malformed cookie values and cause csrfHeaders() to fail; wrap the
decodeURIComponent(match[1]) usage in a safe guard (e.g., try/catch) inside the
function that extracts cookies so that if decoding throws you fall back to the
raw match[1] or an empty string and return that instead, ensuring csrfHeaders()
continues to work; update the function that currently returns
decodeURIComponent(match[1]) to catch decode errors and return a safe fallback.
In `@e2e/infra/api/apiRequests.ts`:
- Around line 22-23: The CSRF cache currently keyed only by csrfCache
(WeakMap<APIRequestContext, string>) must be changed to store tokens per-origin
so contexts reused across origins don't mix tokens: replace csrfCache with a
WeakMap<APIRequestContext, Map<string, string>> (or WeakMap -> Map keyed by
origin string) and update all places that read/write using originOf(url)
(references: csrfCache, originOf(url), and the functions that set/get the token
at the spots noted around lines 29-30 and 37-38) to first get or create the
inner Map for the APIRequestContext and then get/set the token by origin. Ensure
reads use innerMap.get(origin) and writes use innerMap.set(origin, token).
- Around line 11-13: The extracted CSRF cookie value is returned raw from the
regex match (match[1]) which can be URL-encoded; update the extraction to decode
the cookie before returning by applying a URL-decoding step (e.g.,
decodeURIComponent) to the regex capture result so the returned CSRF token used
in headers is the decoded value; locate the cookie extraction logic where
header.match(/csrf_token=([^;]+)/) is used and replace the raw return of
match[1] with the decoded version.
In `@e2e/tests/database.spec.ts`:
- Around line 35-36: Reduce verbose CI logging by redacting full payloads:
replace console.log calls that print objects like finalMessage and graphsList
with logs that print counts, IDs, or truncated summaries (e.g., finalMessage.id,
finalMessage.type, graphsList.length, map of graph IDs) and gate detailed dumps
behind a debug flag; update the console.log statements referencing finalMessage
and graphsList in database.spec.ts (all occurrences noted around the existing
prints) to only output non-sensitive, minimal fields or short substrings.
---
Duplicate comments:
In `@e2e/logic/api/apiCalls.ts`:
- Around line 290-318: The timeout path in waitForGraphs currently returns
lastGraphs making callers unable to tell if the predicate succeeded or a timeout
occurred; update waitForGraphs (the async function) to throw an explicit Error
when the deadline is exceeded instead of returning lastGraphs—include contextual
details (timeoutMs and a JSON.stringify(lastGraphs) snapshot) in the error
message so callers can distinguish timeout failures from valid responses while
preserving existing logging.
- Around line 269-275: The current parseStreamingResponse logging prints full
serialized StreamMessage error objects which may leak sensitive payloads; update
the logging in parseStreamingResponse to avoid serializing entire messages and
instead log minimal metadata: include response.status(), errorMessages.length,
and for each error message log only its type and a truncated preview of its
content (e.g., first 100–200 chars) or a single truncated message sample;
reference the local variables messages and errorMessages and replace
JSON.stringify(errorMessages) with constructing a safe summary (type/count +
truncated text) before calling console.log.
- Around line 324-357: connectDatabaseWithRetry currently returns lastMessages
after all retries, which swallows terminal failures; update the function
(connectDatabaseWithRetry) to throw a meaningful Error when no final_result is
received after maxAttempts instead of returning lastMessages. Track the last
caught error (e.g., lastError) and include either lastError.message or a summary
of lastMessages in the thrown Error so callers can handle failures explicitly;
ensure the thrown Error contains context like attempt count and the last
response details.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
app/package-lock.json is excluded by !**/package-lock.json
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
- api/app_factory.py
- app/package.json
- app/src/components/modals/DatabaseModal.tsx
- app/src/lib/csrf.ts
- app/src/pages/Index.tsx
- app/src/services/auth.ts
- app/src/services/chat.ts
- app/src/services/database.ts
- app/src/services/tokens.ts
- e2e/infra/api/apiRequests.ts
- e2e/logic/api/apiCalls.ts
- e2e/tests/chat.spec.ts
- e2e/tests/database.spec.ts
- tests/test_csrf_middleware.py
✅ Files skipped from review due to trivial changes (1)
- tests/test_csrf_middleware.py
    forwarded_proto = request.headers.get("x-forwarded-proto")
    if forwarded_proto:
        return forwarded_proto == "https"
    return request.url.scheme == "https"
Harden forwarded-proto parsing for secure-cookie decisions
x-forwarded-proto is often case-variant or comma-separated. Exact equality can mis-detect HTTPS and set CSRF cookies with the wrong secure attribute.
💡 Proposed fix
 def _is_secure_request(request: Request) -> bool:
     """Determine if the request is over HTTPS."""
     forwarded_proto = request.headers.get("x-forwarded-proto")
     if forwarded_proto:
-        return forwarded_proto == "https"
+        proto = forwarded_proto.split(",")[0].strip().lower()
+        return proto == "https"
     return request.url.scheme == "https"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/app_factory.py` around lines 61 - 64, The current check uses exact
equality on the raw header value from request.headers.get("x-forwarded-proto"),
which fails for case variants or comma-separated values; update the logic around
the forwarded_proto variable so you normalize and robustly parse the header
(e.g., if forwarded_proto exists, convert to lowercase, split on commas, strip
whitespace and use the first non-empty token) and then compare that token to
"https"; keep the existing fallback to request.url.scheme == "https" when the
header is missing.
        if (
            request.method not in self.SAFE_METHODS
            and not request.url.path.startswith(self.EXEMPT_PREFIXES)
            and not request.headers.get("authorization", "").lower().startswith("bearer ")
        ):
Avoid CSRF bypass based only on Authorization: Bearer presence
Skipping CSRF solely because a Bearer header exists allows unsafe requests to bypass CSRF checks even when session cookies are present. This weakens browser-session protection.
💡 Proposed fix
-        if (
+        auth_header = request.headers.get("authorization", "")
+        is_bearer_request = auth_header.lower().startswith("bearer ")
+        has_session_cookie = "session" in request.cookies
+
+        if (
             request.method not in self.SAFE_METHODS
             and not request.url.path.startswith(self.EXEMPT_PREFIXES)
-            and not request.headers.get("authorization", "").lower().startswith("bearer ")
+            and not (is_bearer_request and not has_session_cookie)
         ):
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-        if (
-            request.method not in self.SAFE_METHODS
-            and not request.url.path.startswith(self.EXEMPT_PREFIXES)
-            and not request.headers.get("authorization", "").lower().startswith("bearer ")
-        ):
+        auth_header = request.headers.get("authorization", "")
+        is_bearer_request = auth_header.lower().startswith("bearer ")
+        has_session_cookie = "session" in request.cookies
+        if (
+            request.method not in self.SAFE_METHODS
+            and not request.url.path.startswith(self.EXEMPT_PREFIXES)
+            and not (is_bearer_request and not has_session_cookie)
+        ):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/app_factory.py` around lines 90 - 94, The current CSRF bypass checks skip
protection solely when an Authorization: Bearer header exists; change the
condition so that Bearer-based exemption only applies when there is no browser
session cookie present. Update the if in the CSRF check (the block using
SAFE_METHODS, EXEMPT_PREFIXES and request.headers.get("authorization")) to also
require that request has no cookies (e.g., request.headers.get("cookie") is
empty) or specifically lacks your session cookie name (e.g., "session" /
"sessionid" / "auth"), so that requests with both a Bearer header and a session
cookie do not bypass CSRF; keep the rest of the logic (SAFE_METHODS,
EXEMPT_PREFIXES) unchanged.
    );
    return '';
  }
  return decodeURIComponent(match[1]);
Guard cookie decode to prevent request-path crashes
decodeURIComponent can throw on malformed cookie values, which would make csrfHeaders() throw and break state-changing calls unexpectedly.
💡 Proposed fix
-  return decodeURIComponent(match[1]);
+  try {
+    return decodeURIComponent(match[1]);
+  } catch {
+    console.debug(
+      `CSRF token cookie "${CSRF_COOKIE_NAME}" is malformed. Ignoring token.`
+    );
+    return '';
+  }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-  return decodeURIComponent(match[1]);
+  try {
+    return decodeURIComponent(match[1]);
+  } catch {
+    console.debug(
+      `CSRF token cookie "${CSRF_COOKIE_NAME}" is malformed. Ignoring token.`
+    );
+    return '';
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/lib/csrf.ts` at line 28, The call to decodeURIComponent in the
get-cookie logic can throw on malformed cookie values and cause csrfHeaders() to
fail; wrap the decodeURIComponent(match[1]) usage in a safe guard (e.g.,
try/catch) inside the function that extracts cookies so that if decoding throws
you fall back to the raw match[1] or an empty string and return that instead,
ensuring csrfHeaders() continues to work; update the function that currently
returns decodeURIComponent(match[1]) to catch decode errors and return a safe
fallback.
    const match = header.match(/csrf_token=([^;]+)/);
    if (match) return match[1];
  }
Decode extracted CSRF cookie values before header use.
Line [12] returns the raw cookie value. If the cookie is encoded, this can produce header/token mismatches.
💡 Suggested fix
 function extractCsrfToken(setCookieHeaders: string[]): string | undefined {
   for (const header of setCookieHeaders) {
     const match = header.match(/csrf_token=([^;]+)/);
-    if (match) return match[1];
+    if (match) return decodeURIComponent(match[1]);
   }
   return undefined;
 }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
   const match = header.match(/csrf_token=([^;]+)/);
-  if (match) return match[1];
+  if (match) return decodeURIComponent(match[1]);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/infra/api/apiRequests.ts` around lines 11 - 13, The extracted CSRF cookie
value is returned raw from the regex match (match[1]) which can be URL-encoded;
update the extraction to decode the cookie before returning by applying a
URL-decoding step (e.g., decodeURIComponent) to the regex capture result so the
returned CSRF token used in headers is the decoded value; locate the cookie
extraction logic where header.match(/csrf_token=([^;]+)/) is used and replace
the raw return of match[1] with the decoded version.
const csrfCache = new WeakMap<APIRequestContext, string>();
Scope CSRF cache by origin, not only by request context.
Line [22] caches one token per APIRequestContext, but token retrieval is origin-based (originOf(url)). Reusing one context across multiple origins can send the wrong CSRF token.
💡 Suggested fix
-const csrfCache = new WeakMap<APIRequestContext, string>();
+const csrfCache = new WeakMap<APIRequestContext, Map<string, string>>();
 async function getCsrfToken(baseUrl: string, ctx: APIRequestContext): Promise<string | undefined> {
-  const cached = csrfCache.get(ctx);
-  if (cached) return cached;
+  let perOrigin = csrfCache.get(ctx);
+  if (!perOrigin) {
+    perOrigin = new Map<string, string>();
+    csrfCache.set(ctx, perOrigin);
+  }
+
+  const cached = perOrigin.get(baseUrl);
+  if (cached) return cached;
   const seedResp = await ctx.get(`${baseUrl}/auth-status`);
   const setCookies = seedResp.headersArray()
     .filter(h => h.name.toLowerCase() === 'set-cookie')
     .map(h => h.value);
   const token = extractCsrfToken(setCookies);
-  if (token) csrfCache.set(ctx, token);
+  if (token) perOrigin.set(baseUrl, token);
   return token;
 }
Also applies to: 29-30, 37-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/infra/api/apiRequests.ts` around lines 22 - 23, The CSRF cache currently
keyed only by csrfCache (WeakMap<APIRequestContext, string>) must be changed to
store tokens per-origin so contexts reused across origins don't mix tokens:
replace csrfCache with a WeakMap<APIRequestContext, Map<string, string>> (or
WeakMap -> Map keyed by origin string) and update all places that read/write
using originOf(url) (references: csrfCache, originOf(url), and the functions
that set/get the token at the spots noted around lines 29-30 and 37-38) to first
get or create the inner Map for the APIRequestContext and then get/set the token
by origin. Ensure reads use innerMap.get(origin) and writes use
innerMap.set(origin, token).
  console.log(`[PostgreSQL API connect] unexpected final message: ${JSON.stringify(finalMessage)}`);
}
Redact verbose CI logs to avoid leaking runtime data.
These lines log full finalMessage/graphsList payloads. Prefer counts, IDs only, or truncated fields behind a debug flag.
💡 Suggested pattern
- console.log(`[PostgreSQL API connect] unexpected final message: ${JSON.stringify(finalMessage)}`);
+ console.log(
+ `[PostgreSQL API connect] unexpected final message type=${finalMessage.type}, success=${Boolean(finalMessage.success)}`
+ );
- console.log(`[PostgreSQL API connect] graphs after connection: ${JSON.stringify(graphsList)}`);
+ console.log(`[PostgreSQL API connect] graph count after connection: ${graphsList.length}`);
Also applies to: 48-48, 87-88, 100-100, 154-154, 193-193, 237-237, 281-281, 331-332, 347-348, 382-383, 396-397
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/tests/database.spec.ts` around lines 35 - 36, Reduce verbose CI logging
by redacting full payloads: replace console.log calls that print objects like
finalMessage and graphsList with logs that print counts, IDs, or truncated
summaries (e.g., finalMessage.id, finalMessage.type, graphsList.length, map of
graph IDs) and gate detailed dumps behind a debug flag; update the console.log
statements referencing finalMessage and graphsList in database.spec.ts (all
occurrences noted around the existing prints) to only output non-sensitive,
minimal fields or short substrings.