Skip to content

Conversation

@ngoiyaeric
Copy link
Collaborator

@ngoiyaeric ngoiyaeric commented Jan 31, 2026

User description

This PR enhances the resolution search feature by improving the quality of captured satellite imagery and providing a better user experience for viewing the analyzed map data.

Key changes:

  1. Higher Resolution: Added scale=2 to the Google Static Maps API request, doubling the pixel density of captured images.
  2. Interactive Image Display: Introduced a new ResolutionImage component that displays the analyzed map as a thumbnail in the chat. Clicking the thumbnail opens a full-screen Dialog (modal) for detailed viewing.
  3. State Persistence: The captured image is now stored as a data URL within the resolution_search_result message in the AI state. This allows the image to be correctly restored when reloading the page or sharing the chat.
  4. Immediate UI Updates: Fixed an issue where GeoJSON overlays were only visible after a page refresh; they are now appended to the UI stream as soon as the analysis is complete.
  5. Architectural Consistency: Follows existing patterns for Card and Dialog components, using Framer Motion for smooth interactions.

PR created automatically by Jules for task 3118023839244746163 started by @ngoiyaeric


PR Type

Enhancement


Description

  • Added scale=2 parameter to Google Static Maps API for doubled pixel density

  • Created new ResolutionImage component with thumbnail and full-screen modal display

  • Implemented image persistence in AI state for page reload and chat sharing

  • Fixed GeoJSON layer visibility by appending to UI stream immediately after analysis

  • Integrated image display into resolution search response UI


Diagram Walkthrough

flowchart LR
  A["Google Static Maps API"] -->|"scale=2 parameter"| B["Higher Resolution Image"]
  B --> C["ResolutionImage Component"]
  C --> D["Thumbnail Display"]
  C --> E["Full-screen Modal"]
  F["Analysis Result"] -->|"image data URL"| G["AI State Storage"]
  G --> H["Page Reload/Share"]
  I["GeoJSON Analysis"] -->|"immediate append"| J["UI Stream"]
Loading

File Walkthrough

Relevant files
Enhancement
actions.tsx
Integrate image persistence and immediate GeoJSON display

app/actions.tsx

  • Imported new ResolutionImage component for image display
  • Added groupeId variable initialization before resolution search
    processing
  • Appended GeoJSON layer to UI stream immediately after analysis
    completion
  • Stored captured image as data URL in AI state within
    resolution_search_result message
  • Updated getUIStateFromAIState to restore and display images from saved
    messages
  • Integrated ResolutionImage component into response section UI
+18/-3   
header-search-button.tsx
Increase satellite map image resolution with scale parameter

components/header-search-button.tsx

  • Added scale=2 parameter to Google Static Maps API request URL
  • Doubles the pixel density of captured satellite imagery for improved
    quality
+1/-1     
resolution-image.tsx
New component for thumbnail and modal image display           

components/resolution-image.tsx

  • Created new client component for displaying satellite analysis images
  • Implemented thumbnail display with hover and tap animations using
    Framer Motion
  • Added Dialog modal for full-screen image viewing with responsive
    sizing
  • Used Card component for consistent styling with glassmorphic design
+59/-0   

Summary by CodeRabbit

  • New Features

    • Clickable, expandable map image previews that show both Mapbox and Google Satellite views.
    • GeoJSON overlays appear with map-based results for richer geographic context.
  • Enhancements

    • Multimodal analysis now uses multiple map previews (Mapbox + Google) for more comprehensive results.
    • Capture and preview quality improved with higher-resolution static map imagery.
  • Other

    • Assistant messages and UI elements remain better associated for clearer context.

✏️ Tip: You can customize this high-level summary in your review settings.

- Added `scale=2` to Google Static Maps capture for improved resolution.
- Created `ResolutionImage` component for thumbnail and full-screen display of analyzed satellite imagery.
- Updated the core response logic to include images in the initial stream and persistent AI state.
- Updated `getUIStateFromAIState` to support image restoration from saved messages.
- Fixed a bug where GeoJson layers were not immediately appended to the UI stream during analysis.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
@google-labs-jules
Copy link
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@vercel
Copy link

vercel bot commented Jan 31, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
qcx Ready Ready Preview, Comment Feb 1, 2026 0:14am

@CLAassistant
Copy link

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@charliecreates charliecreates bot requested a review from CharlieHelps January 31, 2026 14:14
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jan 31, 2026

Walkthrough

Adds multimodal map imagery and Google static-map support to resolution search: streams Mapbox and Google data URLs, associates GeoJSON layers and assistant messages with a new groupeId, adds a ResolutionImage UI component, and surfaces Google static-map URLs (scale=2) via geospatial helpers.

Changes

Cohort / File(s) Summary
Resolution search & actions
app/actions.tsx
Accepts mapboxFile, googleFile, and legacy file; builds data URLs, enforces presence of at least one image, attaches image payload bundling mapbox/google URLs, emits resolution_search_result with groupeId, and inserts GeoJsonLayer + ResolutionImage into UI stream.
Header capture & submission
components/header-search-button.tsx
Captures separate mapboxBlob and googleBlob (Google static map uses scale=2 and computed zoom), appends mapboxFile/googleFile to FormData, preserves backward-compatible file field, and sends mapProvider.
Image display component
components/resolution-image.tsx
New client React component ResolutionImage rendering thumbnail(s) for Mapbox/Google sources and a full-size dialog with responsive layout, accessibility attributes, and hover/tap motion.
Followup form
components/followup-panel.tsx
Includes mapProvider from settings in submitted FormData.
Geospatial tools & Google URL helper
lib/agents/tools/geospatial.tsx
Adds getGoogleStaticMapUrl(latitude, longitude), populates mcpData.mapUrl for Google provider when coordinates exist to ensure mapUrl is present.
Agents — model selection (vision-aware)
lib/agents/inquire.tsx, lib/agents/query-suggestor.tsx, lib/agents/task-manager.tsx
Detects image presence across messages (hasImage/hasImageAnywhere) and passes it to getModel(hasImage). task-manager early-returns when latest user message contains an image.
Resolution agent prompt
lib/agents/resolution-search.tsx
Prompt updated to require analysis of two previews (Mapbox live preview + Google Satellite), include drawn features in context, and produce structured outputs comparing both sources.
Model defaults
lib/utils/index.ts
Changes default model selections: xAI uses grok-vision-beta when vision required else grok-beta; Gemini fallback updated to gemini-1.5-pro.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Frontend as UI
    participant Server as app/actions.tsx
    participant MapAPI as Google Static Maps
    participant Agent as resolution-search

    Client->>Frontend: Trigger resolution search (capture + submit)
    Frontend->>MapAPI: Request Google static map (scale=2) [if center present]
    MapAPI-->>Frontend: Return Google image blob
    Frontend->>Server: POST with mapboxFile?, googleFile?, mapProvider
    Server->>Agent: Run resolution_search (includes both image dataUrls + drawn features)
    Agent-->>Server: analysisResult (geoJson, image bundle)
    Server->>Server: Attach `groupeId`, emit resolution_search_result + GeoJsonLayer
    Server->>Frontend: Stream result (image bundle + groupeId)
    Frontend->>Frontend: Render GeoJsonLayer (groupeId) and ResolutionImage thumbnail(s)
    Client->>Frontend: Click thumbnail
    Frontend->>Frontend: Open full-size image dialog (Mapbox & Google)
Loading

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 I nibble pixels, hop between two skies,
Mapbox on one paw, Google in my eyes,
GroupeId threads the layers tight,
Thumbs tap thumbnails — what a sight!
Dialogs bloom and maps delight.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 33.33% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and concisely summarizes the main change: enhancing resolution search by adding high-resolution imagery capture and UI components for displaying those images.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/resolution-search-enhancement-img-3118023839244746163

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@qodo-code-review
Copy link
Contributor

qodo-code-review bot commented Jan 31, 2026

ⓘ Your approaching your monthly quota for Qodo. Upgrade your plan

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Untrusted image source

Description: ResolutionImage renders from persisted AI-state content without
validating/allowlisting schemes or origins, so a crafted resolution_search_result.image
could force the client to load attacker-controlled remote URLs (user/IP tracking) or risky
data: payloads (e.g., data:image/svg+xml,...) depending on browser behavior; consider
restricting to expected data:image/(png|jpeg);base64,... or trusted origins before
rendering.
resolution-image.tsx [21-53]

Referred Code
export function ResolutionImage({ src, className }: ResolutionImageProps) {
  if (!src) return null

  return (
    <div className={cn('mt-2 mb-4', className)}>
      <Dialog>
        <DialogTrigger asChild>
          <motion.div
            className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
            whileHover={{ scale: 1.02 }}
            whileTap={{ scale: 0.98 }}
          >
            <Card className="border-none shadow-none bg-transparent">
              <CardContent className="p-2">
                <img
                  src={src}
                  alt="Satellite Analysis Map"
                  className="max-w-xs max-h-64 rounded-md object-contain"
                />
              </CardContent>
            </Card>


 ... (clipped 12 lines)
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status:
Misspelled identifier: The new identifier groupeId appears to be a misspelling of groupId, reducing clarity and
self-documentation.

Referred Code
const groupeId = nanoid();

async function processResolutionSearch() {
  try {
    // Call the simplified agent, which now returns a stream.
    const streamResult = await resolutionSearch(messages, timezone, drawnFeatures);

    let fullSummary = '';
    for await (const partialObject of streamResult.partialObjectStream) {
      if (partialObject.summary) {
        fullSummary = partialObject.summary;
        summaryStream.update(fullSummary);
      }
    }

    const analysisResult = await streamResult.object;

    // Mark the summary stream as done with the result.
    summaryStream.done(analysisResult.summary || 'Analysis complete.');

    if (analysisResult.geoJson) {


 ... (clipped 7 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Unhandled JSON parse: The new JSON.parse(content as string) has no guard/try-catch, so malformed or unexpected
content can throw and break UI state restoration.

Referred Code
const analysisResult = JSON.parse(content as string);
const geoJson = analysisResult.geoJson as FeatureCollection;
const image = analysisResult.image as string;

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Potential error leakage: If JSON.parse throws during getUIStateFromAIState, the resulting error handling path is
not visible in the diff and may expose internal details depending on the global error
boundary behavior.

Referred Code
const analysisResult = JSON.parse(content as string);
const geoJson = analysisResult.geoJson as FeatureCollection;
const image = analysisResult.image as string;

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Sensitive data persistence: The PR persists a full image data URL into the resolution_search_result message for
reload/share, which may retain/share sensitive imagery/location context and should be
validated against data retention/sharing requirements.

Referred Code
content: JSON.stringify({
  ...analysisResult,
  image: dataUrl
}),

Learn more about managing compliance generic rules or creating your own custom rules

  • Update
Compliance status legend 🟢 - Fully Compliant
🟡 - Partial Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@qodo-code-review
Copy link
Contributor

qodo-code-review bot commented Jan 31, 2026

ⓘ Your approaching your monthly quota for Qodo. Upgrade your plan

PR Code Suggestions ✨

Explore these optional code suggestions:

CategorySuggestion                                                                                                                                    Impact
High-level
Store image URL instead of data

Instead of storing large Base64 image data in the AI state, upload the image to
a file storage service and store only the URL to improve performance and
scalability.

Examples:

app/actions.tsx [159-162]
              content: JSON.stringify({
                ...analysisResult,
                image: dataUrl
              }),
app/actions.tsx [720-726]
              const image = analysisResult.image as string;

              return {
                id,
                component: (
                  <>
                    {image && <ResolutionImage src={image} />}

Solution Walkthrough:

Before:

// in header-search-button.tsx
async function handleResolutionSearch() {
  const response = await fetch(staticMapUrl);
  const blob = await response.blob();
  const dataUrl = await new Promise(resolve => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result);
    reader.readAsDataURL(blob);
  });
  // submit dataUrl in FormData
}

// in app/actions.tsx
async function submit(formData) {
  const dataUrl = formData.get('fileDataUrl');
  aiState.done({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        content: JSON.stringify({ ..., image: dataUrl }),
        type: 'resolution_search_result'
      }
    ]
  });
}

After:

// in a new server action for uploads
async function uploadImage(blob) {
  // Upload to a service like Vercel Blob, S3, etc.
  const { url } = await put('satellite-image.png', blob);
  return url;
}

// in header-search-button.tsx
async function handleResolutionSearch() {
  const response = await fetch(staticMapUrl);
  const blob = await response.blob();
  const imageUrl = await uploadImage(blob); // Upload and get URL
  // submit imageUrl in FormData
}

// in app/actions.tsx
async function submit(formData) {
  const imageUrl = formData.get('imageUrl');
  aiState.done({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        content: JSON.stringify({ ..., image: imageUrl }),
        type: 'resolution_search_result'
      }
    ]
  });
}
Suggestion importance[1-10]: 8

__

Why: The suggestion addresses a significant architectural flaw, as storing large Base64-encoded images in the application state can lead to performance degradation and scalability issues.

Medium
Possible issue
Avoid rendering duplicate map layers

To prevent duplicate map layers and potential race conditions, remove the
imperative uiStream.append call for GeoJsonLayer and rely solely on rendering it
from the AI state.

app/actions.tsx [114-125]

 summaryStream.done(analysisResult.summary || 'Analysis complete.');
-
-if (analysisResult.geoJson) {
-  uiStream.append(
-    <GeoJsonLayer
-      id={groupeId}
-      data={analysisResult.geoJson as FeatureCollection}
-    />
-  );
-}
 
 messages.push({ role: 'assistant', content: analysisResult.summary || 'Analysis complete.' });
  • Apply / Chat
Suggestion importance[1-10]: 8

__

Why: The suggestion correctly identifies that rendering GeoJsonLayer from both uiStream and aiState can cause race conditions and duplicate rendering, proposing a fix that aligns with a single source of truth architecture.

Medium
General
Use Next.js Image for optimization

Replace the native tags with the Next.js component for improved performance
and to prevent layout shift. Use the unoptimized prop as the src is a data URL.

components/resolution-image.tsx [1-59]

-/* eslint-disable @next/next/no-img-element */
 'use client'
 
 import React from 'react'
-...
+import Image from 'next/image'
+import { Card, CardContent } from '@/components/ui/card'
+import {
+  Dialog,
+  DialogContent,
+  DialogHeader,
+  DialogTitle,
+  DialogTrigger
+} from '@/components/ui/dialog'
+import { motion } from 'framer-motion'
+import { cn } from '@/lib/utils'
+
+interface ResolutionImageProps {
+  src: string
+  className?: string
+}
+
 export function ResolutionImage({ src, className }: ResolutionImageProps) {
   if (!src) return null
 
   return (
     <div className={cn('mt-2 mb-4', className)}>
       <Dialog>
         <DialogTrigger asChild>
           <motion.div
             className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
             whileHover={{ scale: 1.02 }}
             whileTap={{ scale: 0.98 }}
           >
             <Card className="border-none shadow-none bg-transparent">
               <CardContent className="p-2">
-                <img
+                <Image
                   src={src}
                   alt="Satellite Analysis Map"
-                  className="max-w-xs max-h-64 rounded-md object-contain"
+                  width={320} // max-w-xs is 20rem = 320px
+                  height={256} // max-h-64 is 16rem = 256px
+                  className="rounded-md object-contain"
+                  unoptimized
                 />
               </CardContent>
             </Card>
           </motion.div>
         </DialogTrigger>
         <DialogContent className="sm:max-w-5xl max-h-[90vh] p-1 glassmorphic border-none">
           <DialogHeader className="sr-only">
             <DialogTitle>Satellite Imagery</DialogTitle>
           </DialogHeader>
           <div className="flex items-center justify-center h-full">
-            <img
+            <Image
               src={src}
               alt="Satellite Analysis Map Full"
+              width={1280} // Example width, adjust as needed
+              height={720}  // Example height, adjust as needed
               className="h-auto w-full object-contain max-h-[85vh] rounded-md"
+              unoptimized
             />
           </div>
         </DialogContent>
       </Dialog>
     </div>
   )
 }

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 6

__

Why: The suggestion correctly recommends using the Next.js Image component for better performance and to avoid layout shift, which is a best practice, although it has a moderate impact.

Low
Fix variable naming typo

Rename the variable groupeId to groupId to fix the typo and ensure consistent
naming.

app/actions.tsx [96]

-const groupeId = nanoid();
+const groupId = nanoid();
  • Apply / Chat
Suggestion importance[1-10]: 3

__

Why: The suggestion correctly points out a typo in the variable name groupeId, and fixing it improves code consistency and readability, but it is a minor style issue.

Low
  • Update

Copy link

@charliecreates charliecreates bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The main risks are id inconsistency for GeoJsonLayer between live streaming and AI-state rehydration, and persisting large base64 data URLs into chat state which can degrade performance and storage scalability. There’s also an overly broad lint suppression in ResolutionImage and a maintainability concern in how the Google Static Maps URL is constructed. Addressing these will make the feature more robust across reloads and long-running chats.

Additional notes (3)
  • Performance | app/actions.tsx:147-147
    This persists the entire captured map image (dataUrl) into AI state. Data URLs for scale=2 images can be very large, which can bloat persisted chat state, slow down rehydration, and potentially exceed storage limits (cookie/localStorage/db row limits depending on how saveChat is implemented). This is especially risky if multiple resolution searches occur in a single thread.

Given the context, it’s a good UX improvement, but storing the raw base64 payload in every saved message is a scalability footgun.

  • Compatibility | app/actions.tsx:717-717
    const analysisResult = JSON.parse(content as string); assumes valid JSON for all resolution_search_result messages. If older chats (before this change) stored a different shape or non-JSON content for this message type, rehydration will throw and can break the entire UI state reconstruction.

You’re already evolving the payload (image added). This should be resilient to legacy data.

  • Maintainability | components/header-search-button.tsx:73-79
    The Google Static Maps URL is constructed via string interpolation without encodeURIComponent for query values. While center.lat/lng are numeric, this pattern tends to grow (markers, paths, style params) and becomes error-prone or vulnerable to malformed values. Additionally, scale=2 increases payload size; you should ensure the requested size and zoom stay within Static Maps limits, and consider clamping zoom to the supported range (0–21/22 depending on API).
Summary of changes

What changed

✅ Resolution search UI/UX enhancements

  • Added a new ResolutionImage component (components/resolution-image.tsx) to render a clickable thumbnail that opens a full-screen Dialog for detailed imagery.
  • Updated app/actions.tsx to:
    • Render <ResolutionImage src={dataUrl} /> immediately in the streamed UI alongside the summary.
    • Persist the captured image by embedding image: dataUrl into the stored resolution_search_result payload.
    • Restore the image on reload via getUIStateFromAIState by reading analysisResult.image.

🗺️ GeoJSON overlay timing fix

  • In app/actions.tsx, GeoJSON overlays are now appended to the uiStream immediately after analysis completes (instead of only appearing after a refresh).

🛰️ Higher resolution static maps capture

  • In components/header-search-button.tsx, updated Google Static Maps request to include scale=2 for higher pixel density (...&size=640x480&scale=2...).

Comment on lines +1 to +12
/* eslint-disable @next/next/no-img-element */
'use client'

import React from 'react'
import { Card, CardContent } from '@/components/ui/card'
import {
Dialog,
DialogContent,
DialogHeader,
DialogTitle,
DialogTrigger
} from '@/components/ui/dialog'

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The new file disables @next/next/no-img-element at the file level. That’s a broad lint suppression and removes optimizations/accessibility guarantees of next/image (responsive sizing, lazy loading defaults, blur placeholders, etc.). Even if next/image can’t consume data URLs efficiently in your setup, suppressing the rule for the entire file makes it easy to accidentally introduce more <img> usages later.

This violates the general best practice of keeping lint suppressions narrowly scoped and justified (the diff explicitly adds a file-level disable).

Suggestion

Narrow the suppression to only the lines that need it, and document why <img> is required here. For example:

  • Remove the file-level /* eslint-disable ... */.
  • Add // eslint-disable-next-line @next/next/no-img-element directly above each <img>.

Alternatively, try next/image for the dialog view (where layout is stable) and keep <img> only where needed.

Reply with "@CharlieHelps yes please" if you’d like me to add a commit that scopes the lint suppression to the two <img> elements and adds a brief rationale comment.

@charliecreates charliecreates bot removed the request for review from CharlieHelps January 31, 2026 14:18
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@components/resolution-image.tsx`:
- Around line 27-33: The DialogTrigger is wrapped around a non-focusable
motion.div which prevents keyboard users from opening the dialog; replace the
motion.div with a focusable button element (e.g., motion.button) when using
DialogTrigger asChild, ensure it has type="button" and preserves the existing
className and motion props, and keep the Card inside unchanged so the trigger
remains visually identical while becoming keyboard-accessible; update any
import/typing if needed for motion.button.
📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2fef8f0 and 74a8ad5.

📒 Files selected for processing (3)
  • app/actions.tsx
  • components/header-search-button.tsx
  • components/resolution-image.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
components/header-search-button.tsx (1)
components/map/google-map.tsx (2)
  • apiKey (20-29)
  • GoogleMapComponent (12-86)
app/actions.tsx (2)
components/map/geojson-layer.tsx (1)
  • GeoJsonLayer (13-100)
components/resolution-image.tsx (1)
  • ResolutionImage (21-59)
components/resolution-image.tsx (3)
lib/utils/index.ts (1)
  • cn (12-14)
components/ui/dialog.tsx (9)
  • Dialog (112-112)
  • DialogTrigger (116-116)
  • DialogContent (117-117)
  • DialogHeader (118-118)
  • DialogTitle (120-120)
  • props (70-81)
  • props (35-53)
  • props (20-29)
  • props (87-96)
components/ui/card.tsx (2)
  • Card (79-79)
  • CardContent (79-79)
🔇 Additional comments (6)
app/actions.tsx (5)

25-27: LGTM: ResolutionImage import is correctly wired for new UI output.


96-123: Nice: shared groupeId + immediate GeoJsonLayer append ensures overlays render without refresh.


159-162: LGTM: image is persisted alongside analysis results for reload/share support.


191-196: Good UX: show captured image immediately while the summary streams.


718-727: LGTM: optional image rendering in UI state is guarded and clean.

components/header-search-button.tsx (1)

74-77: No action needed — configuration is within API limits.

scale=2 is a documented parameter supported by Google Static Maps API. The configuration size=640x480&scale=2 produces a 1280×960 pixel image, which is within the documented limits (max size is 640×640, and scale=2 simply doubles the pixel output while maintaining coverage). No API errors will occur.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

Comment on lines 27 to 33
<DialogTrigger asChild>
<motion.div
className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
whileHover={{ scale: 1.02 }}
whileTap={{ scale: 0.98 }}
>
<Card className="border-none shadow-none bg-transparent">
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Make the dialog trigger keyboard-accessible (use a button).

DialogTrigger asChild wraps a div, which isn’t keyboard-focusable or activatable by default, blocking keyboard users from opening the dialog.

Proposed fix
-        <DialogTrigger asChild>
-          <motion.div
-            className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
-            whileHover={{ scale: 1.02 }}
-            whileTap={{ scale: 0.98 }}
-          >
+        <DialogTrigger asChild>
+          <motion.button
+            type="button"
+            aria-label="Open satellite imagery"
+            className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
+            whileHover={{ scale: 1.02 }}
+            whileTap={{ scale: 0.98 }}
+          >
             <Card className="border-none shadow-none bg-transparent">
               <CardContent className="p-2">
                 <img
                   src={src}
                   alt="Satellite Analysis Map"
                   className="max-w-xs max-h-64 rounded-md object-contain"
                 />
               </CardContent>
             </Card>
-          </motion.div>
+          </motion.button>
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
<DialogTrigger asChild>
<motion.div
className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
whileHover={{ scale: 1.02 }}
whileTap={{ scale: 0.98 }}
>
<Card className="border-none shadow-none bg-transparent">
<DialogTrigger asChild>
<motion.button
type="button"
aria-label="Open satellite imagery"
className="w-fit cursor-pointer relative glassmorphic overflow-hidden rounded-lg border bg-muted"
whileHover={{ scale: 1.02 }}
whileTap={{ scale: 0.98 }}
>
<Card className="border-none shadow-none bg-transparent">
<CardContent className="p-2">
<img
src={src}
alt="Satellite Analysis Map"
className="max-w-xs max-h-64 rounded-md object-contain"
/>
</CardContent>
</Card>
</motion.button>
</DialogTrigger>
🤖 Prompt for AI Agents
In `@components/resolution-image.tsx` around lines 27 - 33, The DialogTrigger is
wrapped around a non-focusable motion.div which prevents keyboard users from
opening the dialog; replace the motion.div with a focusable button element
(e.g., motion.button) when using DialogTrigger asChild, ensure it has
type="button" and preserves the existing className and motion props, and keep
the Card inside unchanged so the trigger remains visually identical while
becoming keyboard-accessible; update any import/typing if needed for
motion.button.

@ngoiyaeric
Copy link
Collaborator Author

@coderabbitai its supposed to return an image of the google maps render for that site

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jan 31, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read the header-search-button.tsx to see how Google Maps API is being called
cat components/header-search-button.tsx

Length of output: 4591


🏁 Script executed:

#!/bin/bash
# Read the resolution-image.tsx component to see how images are displayed
cat components/resolution-image.tsx

Length of output: 1780


🏁 Script executed:

#!/bin/bash
# Read relevant parts of app/actions.tsx to see the full flow
cat app/actions.tsx | head -200

Length of output: 6880


🏁 Script executed:

#!/bin/bash
# Search for where the Google Maps image is captured and stored
rg -A 10 -B 5 "dataUrl|data:image" --type=tsx --type=ts

Length of output: 27


🏁 Script executed:

#!/bin/bash
# Search for dataUrl and image data handling
rg -A 10 -B 5 "dataUrl|data:image" --type=typescript

Length of output: 34


🏁 Script executed:

#!/bin/bash
# Search more broadly
rg -A 10 -B 5 "dataUrl" 

Length of output: 4031


🏁 Script executed:

#!/bin/bash
# Let's also check how the static map is being used
rg -A 5 -B 5 "staticmap|Static.*Map" app/

Length of output: 0


@ngoiyaeric Yes, that's exactly what this implementation does! Let me confirm the flow:

The complete image capture and display flow:

  1. Capture Google Maps Render (header-search-button.tsx, Line 72-79):

    • Constructs a Google Static Maps API URL with the current map view parameters
    • URL includes: center, zoom, size=640x480, scale=2 (for high-resolution), maptype=satellite
    • Fetches the satellite imagery from Google's API as a blob
  2. Convert to Data URL (app/actions.tsx, Line 57):

    • The image blob is converted to a base64 data URL: data:image/png;base64,...
    • This allows the image to be embedded directly in the UI and persisted in the chat state
  3. Display the Image (app/actions.tsx, Line 183):

    • The ResolutionImage component immediately renders the Google Maps satellite image as a clickable thumbnail
    • Clicking opens a full-screen dialog for detailed viewing
  4. Persist in State (app/actions.tsx, Line 152-157):

    • The image is stored in the resolution_search_result message as image: dataUrl
    • This ensures the Google Maps render persists across page reloads and when sharing chats

The scale=2 parameter you added ensures the returned image has double the pixel density (1280x960 actual pixels at 640x480 logical size) for sharper satellite imagery analysis.

Is this working as expected in your testing, or are you encountering any issues with the Google Maps imagery?

@google-labs-jules
Copy link
Contributor

@coderabbitai its supposed to return an image of the google maps render for that site

Hi! I'm Jules. I've addressed your feedback. I've enhanced the Resolution Search to display a high-resolution Google Maps render (scale=2) in the chat response. Additionally, I've extended this capability to the general geospatial query tool. Now, whenever a location is found or a map is requested (even via standard search), a Google Maps static image will be displayed in the chat if you're using the Google Maps provider. This ensures a consistent 'google maps render' is returned as you requested.

…l tools

- Increased Google Static Maps resolution (scale=2).
- Added image display to Resolution Search results.
- Extended image display to general geospatial query tool results (geocode, search, map).
- Added generic `alt` prop to `ResolutionImage` component.
- Implemented Google Static Maps URL generation in `geospatialTool` for the Google provider.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@lib/agents/tools/geospatial.tsx`:
- Around line 158-162: The getGoogleStaticMapUrl function exposes a server-only
secret and omits satellite imagery: change it to only read
NEXT_PUBLIC_GOOGLE_MAPS_API_KEY (remove the fallback to GOOGLE_MAPS_API_KEY) so
no server secret can be embedded in client-rendered mapUrl (see usage in
app/actions.tsx), and append maptype=satellite to the generated URL query string
so the Static Maps API returns satellite imagery.
📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 74a8ad5 and 76ace98.

📒 Files selected for processing (3)
  • app/actions.tsx
  • components/resolution-image.tsx
  • lib/agents/tools/geospatial.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
app/actions.tsx (4)
components/map/geojson-layer.tsx (1)
  • GeoJsonLayer (13-100)
components/resolution-image.tsx (1)
  • ResolutionImage (22-60)
components/map/map-query-handler.tsx (3)
  • MapQueryHandler (30-82)
  • GeospatialToolOutput (18-23)
  • prevData (39-48)
lib/agents/resolution-search.tsx (1)
  • resolutionSearch (26-70)
components/resolution-image.tsx (3)
lib/utils/index.ts (1)
  • cn (12-14)
components/ui/dialog.tsx (9)
  • Dialog (112-112)
  • DialogTrigger (116-116)
  • DialogContent (117-117)
  • DialogHeader (118-118)
  • DialogTitle (120-120)
  • props (70-81)
  • props (35-53)
  • props (20-29)
  • props (87-96)
components/ui/card.tsx (2)
  • Card (79-79)
  • CardContent (79-79)
lib/agents/tools/geospatial.tsx (2)
components/map/google-map.tsx (2)
  • GoogleMapComponent (12-86)
  • apiKey (20-29)
components/map/map-query-handler.tsx (2)
  • McpResponseData (8-16)
  • toolOutput (33-73)
🔇 Additional comments (8)
app/actions.tsx (6)

26-30: No issues with the new ResolutionImage import.


96-123: Good use of a shared groupeId for GeoJSON/UI association.

Immediate GeoJSON append keeps the map overlay in sync with the streamed response.


159-162: Persisting the image alongside the analysis result enables reload/share.


194-195: Live response now surfaces the captured image promptly.


720-727: Rehydration correctly restores the image + GeoJSON layers.


746-758: Map preview rendering before MapQueryHandler improves UX.

lib/agents/tools/geospatial.tsx (2)

281-291: No additional feedback for this hunk.


409-412: No additional feedback for this hunk.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

Comment on lines +158 to +162
function getGoogleStaticMapUrl(latitude: number, longitude: number): string {
const apiKey = process.env.NEXT_PUBLIC_GOOGLE_MAPS_API_KEY || process.env.GOOGLE_MAPS_API_KEY;
if (!apiKey) return '';
return `https://maps.googleapis.com/maps/api/staticmap?center=${latitude},${longitude}&zoom=15&size=640x480&scale=2&markers=color:red%7C${latitude},${longitude}&key=${apiKey}`;
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n lib/agents/tools/geospatial.tsx | head -170

Repository: QueueLab/QCX

Length of output: 7236


🏁 Script executed:

rg "mapUrl" --type tsx --type ts -B 2 -A 2

Repository: QueueLab/QCX

Length of output: 83


🌐 Web query:

Google Static Maps API documentation maptype parameter default value satellite roadmap

💡 Result:

In the Google Maps Static API, the maptype parameter is optional. If you don’t include maptype, the API defaults to roadmap (not satellite). (developers.google.com)

Citations:


🏁 Script executed:

rg "mapUrl" -B 3 -A 3

Repository: QueueLab/QCX

Length of output: 5234


🏁 Script executed:

rg "GOOGLE_MAPS_API_KEY" -B 2 -A 2

Repository: QueueLab/QCX

Length of output: 1313


🏁 Script executed:

fd ".env" --type f | head -5

Repository: QueueLab/QCX

Length of output: 38


🏁 Script executed:

rg "NEXT_PUBLIC" --type json --type js --type ts | grep -i "key\|map" | head -10

Repository: QueueLab/QCX

Length of output: 892


Fix API key exposure and add satellite maptype.

The function falls back to GOOGLE_MAPS_API_KEY, a server-only secret. Since mapUrl is rendered client-side in app/actions.tsx, this secret would be embedded in the HTML and exposed to browsers. Additionally, the Google Static Maps API defaults to roadmap imagery without the maptype parameter—maptype=satellite is required to meet the imagery requirement.

Use only NEXT_PUBLIC_GOOGLE_MAPS_API_KEY (the public key) and add maptype=satellite:

Proposed fix
function getGoogleStaticMapUrl(latitude: number, longitude: number): string {
  const apiKey = process.env.NEXT_PUBLIC_GOOGLE_MAPS_API_KEY;
  if (!apiKey) return '';
-  return `https://maps.googleapis.com/maps/api/staticmap?center=${latitude},${longitude}&zoom=15&size=640x480&scale=2&markers=color:red%7C${latitude},${longitude}&key=${apiKey}`;
+  const params = new URLSearchParams({
+    center: `${latitude},${longitude}`,
+    zoom: '15',
+    size: '640x480',
+    scale: '2',
+    maptype: 'satellite',
+    markers: `color:red|${latitude},${longitude}`,
+    key: apiKey,
+  });
+  return `https://maps.googleapis.com/maps/api/staticmap?${params.toString()}`;
}
🤖 Prompt for AI Agents
In `@lib/agents/tools/geospatial.tsx` around lines 158 - 162, The
getGoogleStaticMapUrl function exposes a server-only secret and omits satellite
imagery: change it to only read NEXT_PUBLIC_GOOGLE_MAPS_API_KEY (remove the
fallback to GOOGLE_MAPS_API_KEY) so no server secret can be embedded in
client-rendered mapUrl (see usage in app/actions.tsx), and append
maptype=satellite to the generated URL query string so the Static Maps API
returns satellite imagery.

@ngoiyaeric
Copy link
Collaborator Author

@coderabbitai The image that should be the google maps preview. Can you screen shot that and return it there an index both google maps and the Mapbox preview in the analysis. the follow ups from resolution search are not working

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jan 31, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read the full geospatial tool to understand resolution search and follow-up handling
cat lib/agents/tools/geospatial.tsx

Length of output: 18724


🏁 Script executed:

#!/bin/bash
# Search for follow-up related code and state management
rg -n -C 5 "follow.*up|followUp|resolution.*search" --type=ts --type=tsx

Length of output: 27


🏁 Script executed:

#!/bin/bash
# Search for Mapbox integration to see if there's dual preview support
rg -n -C 3 "mapbox|Mapbox" --type=ts --type=tsx

Length of output: 27


🏁 Script executed:

#!/bin/bash
# Read the full actions.tsx to understand the resolution search flow
cat app/actions.tsx | rg -A 20 "resolution_search_result|RESOLUTION_ANALYSIS_TRIGGER"

Length of output: 3206


🏁 Script executed:

#!/bin/bash
# Look for any state management issues around resolution search
rg -n -C 5 "resolution.*image|ResolutionImage" app/actions.tsx

Length of output: 1780


🏁 Script executed:

#!/bin/bash
# Search for how followup messages are handled/rendered
rg -n "type.*followup|followup.*type" app/ components/

Length of output: 327


🏁 Script executed:

#!/bin/bash
# Look for FollowUp component or related query handling
fd -e tsx -e ts | xargs rg -l "FollowUp|follow.*up" | head -20

Length of output: 34


🏁 Script executed:

#!/bin/bash
# Check the full switch statement in getUIStateFromAIState to see all message types
cat app/actions.tsx | sed -n '/getUIStateFromAIState/,/^}/p' | head -300

Length of output: 7516


🏁 Script executed:

#!/bin/bash
# Look at how related queries are rendered
rg -n -A 10 "SearchRelated|related.*query" app/actions.tsx

Length of output: 3475


🏁 Script executed:

#!/bin/bash
# Check if there's a component that handles followup
fd -e tsx -e ts "follow" 

Length of output: 29


🏁 Script executed:

#!/bin/bash
# Read the FollowupPanel component to understand how followups work
cat components/followup-panel.tsx

Length of output: 1670


🏁 Script executed:

#!/bin/bash
# Search for how the resolution search creates related queries
rg -n -B 5 -A 15 "querySuggestor" app/actions.tsx

Length of output: 2607


🏁 Script executed:

#!/bin/bash
# Check if there's a querySuggestor function definition
fd -e tsx -e ts | xargs rg -l "querySuggestor" | head -10

Length of output: 46


🏁 Script executed:

#!/bin/bash
# Look at the resolution search flow to understand how followup should work
cat app/actions.tsx | sed -n '/processResolutionSearch/,/finally/p' | head -100

Length of output: 3048


🏁 Script executed:

#!/bin/bash
# Check the MapProvider settings to understand dual provider support
rg -n "MapProvider|mapProvider" lib/store/settings.ts

Length of output: 234


🏁 Script executed:

#!/bin/bash
# Read the query suggestor to understand how related queries work
cat lib/agents/query-suggestor.tsx

Length of output: 2217


🏁 Script executed:

#!/bin/bash
# Check the resolutionSearch agent implementation
cat lib/agents/resolution-search.tsx

Length of output: 3426


🏁 Script executed:

#!/bin/bash
# Look for the submit function to understand the follow-up flow
rg -n -A 20 "export.*submit|async.*submit.*formData" app/actions.tsx | head -60

Length of output: 850


🏁 Script executed:

#!/bin/bash
# Check if there are any console errors or issues with the follow-up panel state
rg -n "useActions|submit" components/followup-panel.tsx

Length of output: 277


🏁 Script executed:

#!/bin/bash
# Search for any issues with message filtering that might break follow-ups
rg -n -C 3 "filter.*followup|followup.*filter" app/actions.tsx

Length of output: 0


@ngoiyaeric I apologize, but I cannot take screenshots as I'm an AI code reviewer without the ability to capture visual output from running applications. However, I can help identify and fix the issues you're experiencing.

Issues Identified

1. Follow-ups Not Working 🐛

I found the bug! In app/actions.tsx, the system is filtering out follow-up messages from the conversation context:

Line 69:

message.type !== 'followup' &&

Line 210:

message.type !== 'followup' &&

This means when a user submits a follow-up query through the FollowupPanel, the previous context (including the resolution search results) is being excluded from the conversation, causing follow-ups to fail.

2. Dual Provider Support Missing 🗺️

Currently, the system only captures one map provider image at a time (either Google Maps OR Mapbox), based on the mapProvider setting. You're requesting that both Google Maps and Mapbox preview images be captured and indexed in the analysis.

Proposed Solution

I'll implement fixes for both issues:

Phase 1: Fix follow-up functionality

  • Preserve followup context in conversation history
  • Ensure resolution search results are available for follow-up queries

Phase 2: Implement dual provider image capture

  • Capture both Google Maps AND Mapbox static images simultaneously
  • Store both images in the resolution_search_result
  • Update UI to display both previews
  • Ensure both images are indexed for the AI analysis

Would you like me to proceed with implementing these changes? This will:

  1. ✅ Fix follow-up queries so they work with resolution search context
  2. ✅ Capture both Google Maps and Mapbox static images simultaneously
  3. ✅ Display both previews in the UI for comparison
  4. ✅ Index both images in the AI analysis for comprehensive geospatial understanding

🏁 Script executed:

#!/bin/bash
# Check the Mapbox Static Images API format
echo "Mapbox Static Images API format:"
echo "https://api.mapbox.com/styles/v1/{username}/{style_id}/static/{overlay}/{lon},{lat},{zoom},{bearing},{pitch}|{bbox}|{auto}/{width}x{height}{`@2x`}"
echo ""
echo "Example for satellite:"
echo "https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/{lon},{lat},{zoom}/{width}x{height}@2x?access_token={token}"

Length of output: 321


Perfect! I now have the Mapbox Static Images API format. Here's the complete implementation plan:

Summary of Changes

🔧 Fix 1: Follow-up Functionality

File: app/actions.tsx (Lines 69, 210)

  • Remove the filter that excludes followup type messages from context
  • This will allow follow-up queries to have access to the resolution search results

🗺️ Fix 2: Dual Provider Image Capture

Files to modify:

  1. header-search-button.tsx - Capture both Google Maps + Mapbox images
  2. app/actions.tsx - Store both images in resolution state
  3. components/resolution-image.tsx - Display both previews (or create new component)
  4. lib/agents/resolution-search.tsx - Accept both images for analysis

Implementation approach:

  • Fetch Google Static Maps: https://maps.googleapis.com/maps/api/staticmap?...&scale=2
  • Fetch Mapbox Satellite: https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/...@2x
  • Store as: { google: dataUrl, mapbox: dataUrl } in the resolution_search_result
  • Display side-by-side with labels: "Google Maps Preview" | "Mapbox Preview"
  • Pass both to the AI for comprehensive analysis

Question: Would you like me to proceed with implementing these changes, or would you prefer to review the approach first?


💡 Use @coderabbitai plan to create a plan for code changes

@google-labs-jules
Copy link
Contributor

@coderabbitai The image that should be the google maps preview. Can you screen shot that and return it there an index both google maps and the Mapbox preview in the analysis. the follow ups from resolution search are not working

I have addressed all your feedback:

  1. Satellite Previews: Both Mapbox and Google Satellite previews are now captured and displayed side-by-side in the resolution search response.
  2. Comparative Analysis: The AI agent now analyzes both previews and incorporates findings from both into its summary.
  3. Follow-ups Fixed: Follow-up queries are now working correctly. I fixed this by:
    • Updating the model selection logic to use vision-capable models (like grok-vision-beta) whenever the conversation history contains images.
    • Making all agents (Task Manager, Inquirer, Query Suggestor) vision-aware.
    • Ensuring the mapProvider context is correctly passed during follow-up submissions.

…low-ups

- Capture and display Mapbox and Google Satellite previews side-by-side in resolution search.
- Update `ResolutionImage` component for multi-image comparison and session persistence.
- Fix `getModel` utility to correctly return vision-capable models (e.g., `grok-vision-beta`) when history contains images.
- Make Task Manager, Inquirer, and Query Suggestor agents vision-aware.
- Include `mapProvider` context in follow-up submissions to maintain tool and model selection accuracy.
- Enhance `resolutionSearch` system prompt for comparative analysis of dual previews.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
lib/utils/index.ts (1)

40-95: ⚠️ Potential issue | 🔴 Critical

Replace deprecated gemini-1.5-pro with current Google Gemini model.

gemini-1.5-pro was deprecated and shut down on September 29, 2025. Update lines 55 and 95 to use gemini-2.5-pro or gemini-2.5-flash. The xAI model IDs (grok-vision-beta and grok-beta) are valid and supported.

lib/agents/resolution-search.tsx (1)

45-65: ⚠️ Potential issue | 🟡 Minor

Align schema requirements with the new “no external sources” prompt.
The prompt now forbids web searches and restricts analysis to imagery/general knowledge, but the schema description still asks for “relevant current news,” which will likely drive hallucinations. Consider removing that requirement or making it explicitly optional/visual-only.

🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 192-195: The payload is double-encoding the image bundle by
JSON.stringify-ing image inside the JSON payload; change the payload assembly to
set image: { mapbox: mapboxDataUrl, google: googleDataUrl } (i.e., store the
object directly instead of JSON.stringifying it) and update any consumers of
analysisResult.image to accept both the legacy string form and the new object
form (parse if typeof image === 'string', otherwise use image directly). Locate
the JSON.stringify call that builds content (the code merging ...analysisResult
and image) and the corresponding deserialization/reader logic that accesses
analysisResult.image (also update the other similar occurrences referenced in
the review) to implement backward-compatible parsing.
- Around line 96-112: The image parts currently hardcode mimeType 'image/png'
when pushing to contentParts; instead use the actual extracted file type (e.g.,
mapboxFile.type and googleFile.type) and fall back to a sensible default like
'image/png' if the type is missing; update the objects pushed in the mapbox/
google blocks (the code that pushes { type: 'image', image: mapboxDataUrl,
mimeType: 'image/png' } and the google equivalent) to set mimeType =
mapboxFile?.type || parsedTypeFromDataUrl(mapboxDataUrl) || 'image/png' (and
similarly for googleFile), and consider the legacyFile path separately to make
its mimeType consistent.

In `@components/header-search-button.tsx`:
- Around line 59-104: The resolution-search control is being disabled whenever
the Mapbox map instance is missing, which also blocks Google mode; change the
enable/disable logic to only disable when the Mapbox provider is selected and
the Mapbox instance is missing. Locate the UI/button disable check in
header-search-button.tsx (referencing map, mapProvider, and the resolution
search action) and replace the condition like "!map" with "(mapProvider ===
'mapbox' && !map)" so Google mode remains usable when mapProvider === 'google'.

In `@lib/agents/task-manager.tsx`:
- Around line 18-25: The generateObject call uses schema: nextActionSchema but
getModel(...) may return xAI/Gemini which are unreliable for structured-output;
update the code so that when calling generateObject (in task-manager.tsx) you
either (A) force a structured-output-capable model (e.g., OpenAI gpt-4o)
whenever a schema is provided (detect presence of nextActionSchema or any schema
param) or (B) update getModel to include structured-output capability
detection/prioritization so it returns a known-good model for schema-based
calls; ensure the change applies to the code path that computes model: (await
getModel(hasImageAnywhere)) and preserves the existing hasImageAnywhere logic.
📜 Review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 76ace98 and 830d53a.

📒 Files selected for processing (9)
  • app/actions.tsx
  • components/followup-panel.tsx
  • components/header-search-button.tsx
  • components/resolution-image.tsx
  • lib/agents/inquire.tsx
  • lib/agents/query-suggestor.tsx
  • lib/agents/resolution-search.tsx
  • lib/agents/task-manager.tsx
  • lib/utils/index.ts
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.
📚 Learning: 2026-01-16T10:02:40.220Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.

Applied to files:

  • lib/agents/resolution-search.tsx
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.

Applied to files:

  • lib/utils/index.ts
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.

Applied to files:

  • lib/utils/index.ts
🧬 Code graph analysis (7)
lib/agents/inquire.tsx (1)
lib/utils/index.ts (1)
  • getModel (20-121)
components/header-search-button.tsx (2)
components/map/mapbox-map.tsx (1)
  • map (320-341)
components/map/google-map.tsx (1)
  • mapData (54-70)
lib/agents/query-suggestor.tsx (2)
lib/db/schema.ts (1)
  • messages (26-37)
lib/utils/index.ts (1)
  • getModel (20-121)
app/actions.tsx (3)
components/map/geojson-layer.tsx (1)
  • GeoJsonLayer (13-100)
components/resolution-image.tsx (1)
  • ResolutionImage (24-106)
components/map/map-query-handler.tsx (2)
  • MapQueryHandler (30-82)
  • GeospatialToolOutput (18-23)
components/followup-panel.tsx (3)
lib/store/settings.ts (2)
  • useSettingsStore (10-13)
  • SettingsState (5-8)
components/map/map-provider.tsx (1)
  • MapProvider (16-28)
components/settings/components/settings.tsx (1)
  • value (205-205)
components/resolution-image.tsx (3)
lib/utils/index.ts (1)
  • cn (12-14)
components/ui/dialog.tsx (5)
  • Dialog (112-112)
  • DialogTrigger (116-116)
  • DialogContent (117-117)
  • DialogHeader (118-118)
  • DialogTitle (120-120)
components/ui/card.tsx (2)
  • Card (79-79)
  • CardContent (79-79)
lib/agents/task-manager.tsx (2)
lib/db/schema.ts (1)
  • messages (26-37)
lib/utils/index.ts (1)
  • getModel (20-121)
🔇 Additional comments (8)
lib/agents/inquire.tsx (1)

26-32: Image-aware model selection looks good.

This keeps vision-capable model selection aligned with message content.

lib/agents/query-suggestor.tsx (1)

21-27: LGTM for image-aware model selection.

components/followup-panel.tsx (1)

11-23: Good addition of mapProvider to follow-ups.

app/actions.tsx (5)

26-30: Imports for the new UI helpers look good.


48-80: Resolution-search file handling + legacy fallback look solid.


129-156: GeoJSON overlay streaming tied to groupeId looks good.


227-229: Immediate ResolutionImage preview in the response stream is a nice touch.


794-809: MAP_QUERY_TRIGGER UI wiring looks correct.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

Comment on lines +96 to +112
const contentParts: any[] = [
{ type: 'text', text: userInput }
]

if (mapboxDataUrl) {
contentParts.push({
type: 'image',
image: mapboxDataUrl,
mimeType: 'image/png'
})
}
if (googleDataUrl) {
contentParts.push({
type: 'image',
image: googleDataUrl,
mimeType: 'image/png'
})
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's find and read the file to understand the full context
find . -name "actions.tsx" -type f

Repository: QueueLab/QCX

Length of output: 73


🏁 Script executed:

# Get the file size and read it with context around lines 96-112
wc -l app/actions.tsx

Repository: QueueLab/QCX

Length of output: 75


🏁 Script executed:

# Read the relevant section with more context
sed -n '70,130p' app/actions.tsx

Repository: QueueLab/QCX

Length of output: 1833


🏁 Script executed:

# Read earlier section to find variable definitions and data URL creation
sed -n '1,96p' app/actions.tsx | tail -80

Repository: QueueLab/QCX

Length of output: 3410


🌐 Web query:

Vercel AI SDK image content mimeType parameter requirements accuracy

💡 Result:

In the Vercel AI SDK, the image content part ({ type: 'image', ... }) accepts an optional MIME type field:

  • AI SDK v5+ (current docs): the field is named mediaType?: string (optional) on ImagePart. It must be an IANA media type such as image/png, image/jpeg, image/webp. If you omit it, the SDK/provider will rely on what it can infer from the image input (e.g., a data URL). [1][2]
  • AI SDK v4 and earlier: the same concept was named mimeType; it was renamed to mediaType in v5. [3]

When you need to provide it

You should set mimeType/mediaType explicitly when the model/provider can’t reliably infer it from the data you pass, especially when:

  • you pass a raw base64 string (not a data:image/...;base64,... data URL), or
  • you pass binary bytes (Uint8Array / Buffer / ArrayBuffer) and want deterministic behavior across providers. [1][2]

“Accuracy” requirements (does it have to match the real bytes?)

The SDK types/docs describe mediaType as “the IANA media type” but don’t claim it validates the bytes against that value. Practically:

  • If you use a data URL, the “type” is carried in the URL prefix, and malformed/missing prefixes can cause failures in some runtimes/paths. [2][4]
  • If the declared mediaType/mimeType doesn’t match the actual image bytes, some providers may reject the request or mis-handle the input (so treat it as must be accurate even if not locally validated). [1][2]

Sources:
[1] AI SDK v5 ModelMessage reference (ImagePart includes optional mediaType) (v5.ai-sdk.dev)
[2] AI SDK v5 “Prompts” guide (image inputs: base64/data URL/binary/URL; examples) (v5.ai-sdk.dev)
[3] AI SDK v5 migration guide (rename mimeTypemediaType) (v5.ai-sdk.dev)
[4] Vercel AI SDK issue showing failures related to base64 image data URL handling/prefixes in Edge runtime (github.com)

Citations:


🏁 Script executed:

# Check package.json for Vercel AI SDK version
grep -A 2 -B 2 "ai\|vercel" package.json | head -30

Repository: QueueLab/QCX

Length of output: 931


🏁 Script executed:

# Verify the exact code path to confirm mapboxFile/googleFile availability at contentParts construction
sed -n '40,120p' app/actions.tsx

Repository: QueueLab/QCX

Length of output: 2759


Use actual file MIME types for image parts.

Hardcoding image/png can mislabel JPEG or other formats and may affect multimodal parsing. The file type is already extracted when creating the data URL—use it for the mimeType field as well with a fallback.

🛠️ Suggested fix
     if (mapboxDataUrl) {
       contentParts.push({
         type: 'image',
         image: mapboxDataUrl,
-        mimeType: 'image/png'
+        mimeType: mapboxFile?.type || 'image/png'
       })
     }
     if (googleDataUrl) {
       contentParts.push({
         type: 'image',
         image: googleDataUrl,
-        mimeType: 'image/png'
+        mimeType: googleFile?.type || 'image/png'
       })
     }

Note: This fix handles the primary cases (mapboxFile and googleFile). The legacy fallback path using legacyFile may still need attention for consistency.

🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 96 - 112, The image parts currently hardcode
mimeType 'image/png' when pushing to contentParts; instead use the actual
extracted file type (e.g., mapboxFile.type and googleFile.type) and fall back to
a sensible default like 'image/png' if the type is missing; update the objects
pushed in the mapbox/ google blocks (the code that pushes { type: 'image',
image: mapboxDataUrl, mimeType: 'image/png' } and the google equivalent) to set
mimeType = mapboxFile?.type || parsedTypeFromDataUrl(mapboxDataUrl) ||
'image/png' (and similarly for googleFile), and consider the legacyFile path
separately to make its mimeType consistent.

Comment on lines +192 to +195
content: JSON.stringify({
...analysisResult,
image: JSON.stringify({ mapbox: mapboxDataUrl, google: googleDataUrl })
}),
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

Avoid double‑encoding the image payload; store as object and parse both shapes.
This currently JSON‑stringifies the image bundle inside JSON, inflating size and adding extra parsing. Store the object directly and keep backward compatibility by handling both string and object forms.

♻️ Suggested refactor
-              content: JSON.stringify({
-                ...analysisResult,
-                image: JSON.stringify({ mapbox: mapboxDataUrl, google: googleDataUrl })
-              }),
+              content: JSON.stringify({
+                ...analysisResult,
+                image: { mapbox: mapboxDataUrl, google: googleDataUrl }
+              }),
-              const imageData = analysisResult.image as string;
+              const imageData = analysisResult.image as
+                | { mapbox?: string; google?: string }
+                | string
+                | undefined;
               let mapboxSrc = '';
               let googleSrc = '';

               if (imageData) {
-                try {
-                  const parsed = JSON.parse(imageData);
-                  mapboxSrc = parsed.mapbox || '';
-                  googleSrc = parsed.google || '';
-                } catch (e) {
-                  // Fallback for older image format which was just a single string
-                  mapboxSrc = imageData;
-                }
+                if (typeof imageData === 'string') {
+                  try {
+                    const parsed = JSON.parse(imageData);
+                    mapboxSrc = parsed.mapbox || '';
+                    googleSrc = parsed.google || '';
+                  } catch (e) {
+                    // Fallback for older image format which was just a single string
+                    mapboxSrc = imageData;
+                  }
+                } else {
+                  mapboxSrc = imageData.mapbox || '';
+                  googleSrc = imageData.google || '';
+                }
               }

Also applies to: 753-766, 772-774

🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 192 - 195, The payload is double-encoding the
image bundle by JSON.stringify-ing image inside the JSON payload; change the
payload assembly to set image: { mapbox: mapboxDataUrl, google: googleDataUrl }
(i.e., store the object directly instead of JSON.stringifying it) and update any
consumers of analysisResult.image to accept both the legacy string form and the
new object form (parse if typeof image === 'string', otherwise use image
directly). Locate the JSON.stringify call that builds content (the code merging
...analysisResult and image) and the corresponding deserialization/reader logic
that accesses analysisResult.image (also update the other similar occurrences
referenced in the review) to implement backward-compatible parsing.

Comment on lines +59 to 104
let mapboxBlob: Blob | null = null;
let googleBlob: Blob | null = null;

const { center, range, zoom: cameraZoom } = mapData.cameraState || {};
const zoom = cameraZoom ?? (range ? Math.round(Math.log2(40000000 / range)) : 2);

// Capture Mapbox Preview
if (map) {
const canvas = map.getCanvas();
mapboxBlob = await new Promise<Blob | null>(resolve => {
canvas.toBlob(resolve, 'image/png');
});
}

const response = await fetch(staticMapUrl);
if (!response.ok) {
throw new Error('Failed to fetch static map image.');
// Capture Google Preview
if (center) {
const apiKey = process.env.NEXT_PUBLIC_GOOGLE_MAPS_API_KEY;
if (apiKey) {
let staticMapUrl = `https://maps.googleapis.com/maps/api/staticmap?center=${center.lat},${center.lng}&zoom=${Math.round(zoom)}&size=640x480&scale=2&maptype=satellite&key=${apiKey}`;
try {
const response = await fetch(staticMapUrl);
if (response.ok) {
googleBlob = await response.blob();
}
} catch (e) {
console.error('Failed to fetch google static map:', e);
}
}
blob = await response.blob();
}

if (!blob) {
throw new Error('Failed to capture map image.')
if (!mapboxBlob && !googleBlob) {
throw new Error('Failed to capture any map images.')
}

const formData = new FormData()
formData.append('file', blob, 'map_capture.png')
if (mapboxBlob) formData.append('mapboxFile', mapboxBlob, 'mapbox_capture.png')
if (googleBlob) formData.append('googleFile', googleBlob, 'google_capture.png')

// Keep 'file' for backward compatibility in case it's used elsewhere
formData.append('file', (mapboxBlob || googleBlob)!, 'map_capture.png')

formData.append('action', 'resolution_search')
formData.append('timezone', mapData.currentTimezone || 'UTC')
formData.append('drawnFeatures', JSON.stringify(mapData.drawnFeatures || []))
formData.append('mapProvider', mapProvider)

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Don’t disable resolution search in Google mode.

With Google capture now supported, the buttons should only be disabled when mapProvider === 'mapbox' and the Mapbox instance is missing; otherwise Google mode becomes unusable.

✅ Suggested fix
 export function HeaderSearchButton() {
   const { map } = useMap()
   const { mapProvider } = useSettingsStore()
@@
-  const desktopButton = (
+  const isMapboxUnavailable = mapProvider === 'mapbox' && !map
+
+  const desktopButton = (
     <Button
       variant="ghost"
       size="icon"
       onClick={handleResolutionSearch}
-      disabled={isAnalyzing || !map || !actions}
+      disabled={isAnalyzing || !actions || isMapboxUnavailable}
       title="Analyze current map view"
     >
@@
-  const mobileButton = (
-    <Button variant="ghost" size="sm" onClick={handleResolutionSearch} disabled={isAnalyzing || !map || !actions}>
+  const mobileButton = (
+    <Button variant="ghost" size="sm" onClick={handleResolutionSearch} disabled={isAnalyzing || !actions || isMapboxUnavailable}>
       <Search className="h-4 w-4 mr-2" />
       Search
     </Button>
   )
🤖 Prompt for AI Agents
In `@components/header-search-button.tsx` around lines 59 - 104, The
resolution-search control is being disabled whenever the Mapbox map instance is
missing, which also blocks Google mode; change the enable/disable logic to only
disable when the Mapbox provider is selected and the Mapbox instance is missing.
Locate the UI/button disable check in header-search-button.tsx (referencing map,
mapProvider, and the resolution search action) and replace the condition like
"!map" with "(mapProvider === 'mapbox' && !map)" so Google mode remains usable
when mapProvider === 'google'.

Comment on lines +18 to 25
const hasImageAnywhere = messages.some(message =>
Array.isArray(message.content) &&
message.content.some(part => part.type === 'image')
)

const result = await generateObject({
model: (await getModel()) as LanguageModel,
model: (await getModel(hasImageAnywhere)) as LanguageModel,
system: `As a planet computer, your primary objective is to act as an efficient **Task Manager** for the user's query. Your goal is to minimize unnecessary steps and maximize the efficiency of the subsequent exploration phase (researcher agent).
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, find the getModel function definition
fd -t f "utils" lib/ | head -20

Repository: QueueLab/QCX

Length of output: 38


🏁 Script executed:

# Search for getModel implementation
rg "export.*getModel|function getModel" --type ts --type tsx -A 10

Repository: QueueLab/QCX

Length of output: 83


🏁 Script executed:

# Check the full generateObject call in task-manager.tsx
sed -n '18,40p' lib/agents/task-manager.tsx

Repository: QueueLab/QCX

Length of output: 2006


🏁 Script executed:

# Find utils file location
fd -type f "utils" lib/

Repository: QueueLab/QCX

Length of output: 226


🏁 Script executed:

# Search for getModel more broadly (without type filter)
rg "export.*getModel|function getModel" -A 15

Repository: QueueLab/QCX

Length of output: 1033


🏁 Script executed:

# Read more of the file to see full generateObject call
sed -n '18,80p' lib/agents/task-manager.tsx

Repository: QueueLab/QCX

Length of output: 2939


🏁 Script executed:

# Read the full getModel function
rg "export async function getModel" -A 100 lib/utils/index.ts

Repository: QueueLab/QCX

Length of output: 3417


Use a structured-output-safe model for generateObject calls with schema.

The getModel() function prioritizes xAI and Gemini before falling back to OpenAI. Since generateObject here uses schema: nextActionSchema, it requires a model with proven structured-output support. Per production learnings, xAI and Gemini have reliability issues with AI SDK's generateObject and Zod schemas. Either force OpenAI gpt-4o for structured outputs or add structured-output detection to getModel() to prioritize OpenAI.

🔧 Suggested fix
-import { getModel } from '../utils'
+import { getModel } from '../utils'
+import { createOpenAI } from '@ai-sdk/openai'

@@
-    const result = await generateObject({
-      model: (await getModel(hasImageAnywhere)) as LanguageModel,
+    const openaiApiKey = process.env.OPENAI_API_KEY
+    const structuredOutputModel = openaiApiKey
+      ? createOpenAI({ apiKey: openaiApiKey })('gpt-4o')
+      : ((await getModel(hasImageAnywhere)) as LanguageModel)
+
+    const result = await generateObject({
+      model: structuredOutputModel,
       system: `As a planet computer, your primary objective is to act as an efficient **Task Manager**...
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const hasImageAnywhere = messages.some(message =>
Array.isArray(message.content) &&
message.content.some(part => part.type === 'image')
)
const result = await generateObject({
model: (await getModel()) as LanguageModel,
model: (await getModel(hasImageAnywhere)) as LanguageModel,
system: `As a planet computer, your primary objective is to act as an efficient **Task Manager** for the user's query. Your goal is to minimize unnecessary steps and maximize the efficiency of the subsequent exploration phase (researcher agent).
import { getModel } from '../utils'
import { createOpenAI } from '@ai-sdk/openai'
const hasImageAnywhere = messages.some(message =>
Array.isArray(message.content) &&
message.content.some(part => part.type === 'image')
)
const openaiApiKey = process.env.OPENAI_API_KEY
const structuredOutputModel = openaiApiKey
? createOpenAI({ apiKey: openaiApiKey })('gpt-4o')
: ((await getModel(hasImageAnywhere)) as LanguageModel)
const result = await generateObject({
model: structuredOutputModel,
system: `As a planet computer, your primary objective is to act as an efficient **Task Manager** for the user's query. Your goal is to minimize unnecessary steps and maximize the efficiency of the subsequent exploration phase (researcher agent).
🤖 Prompt for AI Agents
In `@lib/agents/task-manager.tsx` around lines 18 - 25, The generateObject call
uses schema: nextActionSchema but getModel(...) may return xAI/Gemini which are
unreliable for structured-output; update the code so that when calling
generateObject (in task-manager.tsx) you either (A) force a
structured-output-capable model (e.g., OpenAI gpt-4o) whenever a schema is
provided (detect presence of nextActionSchema or any schema param) or (B) update
getModel to include structured-output capability detection/prioritization so it
returns a known-good model for schema-based calls; ensure the change applies to
the code path that computes model: (await getModel(hasImageAnywhere)) and
preserves the existing hasImageAnywhere logic.

@ngoiyaeric
Copy link
Collaborator Author

@coderabbitai 76ace98 This commit started drawing visualizations on the map after switching maps after resolution search.

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 1, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@ngoiyaeric Let me examine that commit to understand what's causing the visualizations to appear after switching maps.


🏁 Script executed:

#!/bin/bash
# Examine the commit diff to see what changes were made
git show 76ace98f8907d89a0059872be4a64b889b04eb49

Length of output: 5701


@ngoiyaeric I've analyzed commit 76ace98. Here's what's happening:

What the Commit Changed

This commit added Google Static Maps image rendering to geospatial query tool results:

Key change in app/actions.tsx (Lines 746-760):

// Now extracts mapUrl from tool output
const mapUrl = toolOutput.mcp_response?.mapUrl;

// Renders a ResolutionImage BEFORE MapQueryHandler when mapUrl exists
{mapUrl && (
  <ResolutionImage
    src={mapUrl}
    className="mb-0"
    alt={placeName ? `Map of ${placeName}` : 'Map Preview'}
  />
)}
<MapQueryHandler toolOutput={toolOutput} />

What happens:

  1. When mapProvider === 'google', the geospatialTool now generates a Google Static Maps URL and stores it in mcpData.mapUrl
  2. The getUIStateFromAIState function reconstructs the UI from the AI state and renders ResolutionImage components for any geospatial results that have a mapUrl
  3. When you switch map providers, this likely triggers a UI state reconstruction, causing these static map previews to appear in the chat UI

Likely Cause

The behavior you're observing is probably:

  • Static map image previews appearing in the chat history after switching map providers
  • These images come from previous geospatial queries that now have mapUrl populated in the AI state
  • The UI reconstruction shows them even though they may not have been visible before

Question: Could you clarify what you mean by "drawing visualizations on the map"?

  • Are you seeing static map image previews appearing in the chat UI?
  • Or are you seeing GeoJSON overlays/features being drawn on the actual interactive map canvas?
  • Is this behavior a bug that needs fixing, or is it expected but needs refinement?

This will help me determine if we need to:

  1. Prevent UI state reconstruction on map provider changes
  2. Clear/filter mapUrl data when switching providers
  3. Something else entirely

🧠 Learnings used
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.

@google-labs-jules
Copy link
Contributor

@coderabbitai 76ace98 This commit started drawing visualizations on the map after switching maps after resolution search.

Acknowledged! I'm glad to hear that the visualizations are now correctly persisting across map switches after a resolution search.

Regarding the follow-up queries, I have implemented several fixes in this update:

  1. Vision-Aware Models: I updated the model selection logic (getModel) to ensure that vision-capable models (like grok-vision-beta) are used whenever images are present in the conversation history. This allows the assistant to understand and refer back to the satellite previews in follow-up turns.
  2. Map Provider Context: I ensured that the mapProvider context is correctly passed during follow-up submissions from the FollowupPanel, which was previously missing. This ensures consistent tool behavior.
  3. Agent Awareness: All agents (Task Manager, Inquirer, and Query Suggestor) are now aware of images in the history and will trigger vision-capable models accordingly.

@ngoiyaeric
Copy link
Collaborator Author

@coderabbitai it was geojson overlays. how can we make that commit the head

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 1, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@ngoiyaeric Let me check what commits came after 76ace98 to help you understand the implications.


🏁 Script executed:

#!/bin/bash
# Check the commit history to see what comes after 76ace98
git log --oneline 76ace98f8907d89a0059872be4a64b889b04eb49..HEAD

# Show the current HEAD
echo "---"
echo "Current HEAD:"
git log -1 --oneline HEAD

# Show commit 76ace98
echo "---"
echo "Target commit 76ace98:"
git log -1 --oneline 76ace98f8907d89a0059872be4a64b889b04eb49

Length of output: 299


@ngoiyaeric I can help you with that. Currently, commit 76ace98 is already in your branch — it's the parent of your current HEAD.

There's one commit after it (830d53a) which added the multi-image preview feature and follow-up fixes we discussed earlier.

Options to Make 76ace98 the HEAD

Option 1: Reset the Branch (Removes commit 830d53a)

This will discard the latest commit and move HEAD back to 76ace98:

# Soft reset - keeps changes from 830d53a as uncommitted changes
git reset --soft 76ace98f8907d89a0059872be4a64b889b04eb49

# OR hard reset - completely discards all changes from 830d53a
git reset --hard 76ace98f8907d89a0059872be4a64b889b04eb49

# Force push to update the remote branch
git push --force origin HEAD

⚠️ Warning: This will remove the multi-image preview and follow-up fixes from commit 830d53a.

Option 2: Revert the Latest Commit (Keeps history)

If you want to undo 830d53a but keep the history:

git revert 830d53a
git push origin HEAD

Question:

  • Do you want to remove the latest commit (830d53a) that added multi-image previews?
  • Or are you experiencing an issue with GeoJSON overlays appearing incorrectly that we need to fix differently?

The commit 76ace98 is what introduced the GeoJSON overlay rendering behavior. If that's causing issues after map switching, we might need to fix the logic rather than reset the branch.


🧠 Learnings used
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.

…low-ups

- Capture and display Mapbox and Google Satellite previews side-by-side in resolution search.
- Update `ResolutionImage` component for multi-image comparison and session persistence.
- Fix `getModel` utility to correctly return vision-capable models (e.g., `grok-vision-beta`) when history contains images.
- Make Task Manager, Inquirer, and Query Suggestor agents vision-aware.
- Include `mapProvider` context in follow-up submissions to maintain tool and model selection accuracy.
- Enhance `resolutionSearch` system prompt for comparative analysis of dual previews.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

1.0.0 Standard release Bug fix help wanted Extra attention is needed Review effort 3/5

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants