Feature/integrated system #35
Conversation
Updated project title and description in README.
Fix/dashboard setup
feat: integrate Stadia Maps into dashboard
Pull request overview
This PR integrates an end-to-end “sensor → ML validation/triage → pipeline orchestration → live dashboard + Firebase bridge” workflow, combining multiple ML artifacts, a LangGraph multi-agent pipeline, and a Next.js dashboard for live telemetry visualization.
Changes:
- Added training + prediction scripts and committed generated model/data artifacts for seismic/gas/survivor/validator models.
- Implemented a LangGraph pipeline (verifier/triage/logistics/reporter) plus a Firebase polling bridge that runs the pipeline and writes results back.
- Added a Next.js dashboard (UI + /api/telemetry) and Python tests covering agents/pipeline behavior and artifact presence.
Reviewed changes
Copilot reviewed 42 out of 61 changed files in this pull request and generated 11 comments.
Summary per file:
| File | Description |
|---|---|
| train/train_validator.py | Trains/saves IsolationForest “validator” model artifact. |
| train/train_survivor.py | Trains/saves survivor LogisticRegression + scaler artifacts. |
| train/train_seismic.py | Trains/saves TensorFlow 1D CNN seismic classifier. |
| train/train_gas.py | Trains/saves RandomForest gas classifier. |
| tests/test_validation_and_agents.py | Unit tests for verifier/triage helpers and agent outputs. |
| tests/test_pipeline_and_dashboard.py | Tests pipeline routing, logistics helpers, and expected artifacts/files. |
| predict/predict_validator.py | Loads validator model and predicts genuineness/anomaly score. |
| predict/predict_survivor.py | Loads survivor model/scaler and predicts survivor probability + urgency. |
| predict/predict_seismic.py | Loads seismic CNN model and predicts event type + confidence. |
| predict/predict_gas.py | Loads gas model and predicts hazard type + severity. |
| predict/`__pycache__`/predict_validator.cpython-313.pyc | Committed Python bytecode artifact (build output). |
| predict/`__pycache__`/predict_survivor.cpython-313.pyc | Committed Python bytecode artifact (build output). |
| predict/`__pycache__`/predict_seismic.cpython-313.pyc | Committed Python bytecode artifact (build output). |
| predict/`__pycache__`/predict_gas.cpython-313.pyc | Committed Python bytecode artifact (build output). |
| pipeline/state.py | Defines TypedDict state contract for LangGraph pipeline. |
| pipeline/langgraph_pipeline.py | Standalone “run sub-models + Mistral SITREP” pipeline script. |
| pipeline/graph.py | LangGraph graph assembly + pipeline runner and routing logic. |
| pipeline/firebase_live_bridge.py | Polls Firebase RTDB, posts dashboard telemetry, runs pipeline, writes inference back. |
| models/survivor_scaler.pkl | Committed scaler artifact used for survivor inference. |
| models/survivor_model.pkl | Committed survivor model artifact used for survivor inference. |
| data/survivor_data.csv | Committed training data for survivor model. |
| data/generate_survivor_data.py | Generator script for survivor synthetic dataset. |
| data/generate_seismic_data.py | Generator script for seismic synthetic dataset. |
| data/generate_gas_data.py | Generator script for gas synthetic dataset. |
| dashboard/tsconfig.json | Dashboard TypeScript configuration (strict mode, paths). |
| dashboard/src/components/SystemHealth.tsx | Top bar health/status component. |
| dashboard/src/components/SitrepPanel.tsx | UI panel to display SITREP details and checklists. |
| dashboard/src/components/SimulationDashboard.tsx | Scenario-based simulation UI for pipeline visualization. |
| dashboard/src/components/MapPanel.tsx | Map placeholder panel + active alert marker rendering. |
| dashboard/src/components/AlertFeed.tsx | Right-side alert feed UI. |
| dashboard/src/app/page.tsx | Main dashboard page with polling + panels layout. |
| dashboard/src/app/layout.tsx | Root layout + fonts. |
| dashboard/src/app/globals.css | Global Tailwind theme setup. |
| dashboard/src/app/favicon.ico | Dashboard favicon asset. |
| dashboard/src/app/api/telemetry/route.ts | Next.js API route for ingesting/serving telemetry snapshots. |
| dashboard/public/window.svg | Static asset. |
| dashboard/public/vercel.svg | Static asset. |
| dashboard/public/next.svg | Static asset. |
| dashboard/public/globe.svg | Static asset. |
| dashboard/public/file.svg | Static asset. |
| dashboard/postcss.config.mjs | Tailwind PostCSS config. |
| dashboard/package.json | Dashboard dependencies + scripts. |
| dashboard/next.config.ts | Next.js config (turbopack root). |
| dashboard/eslint.config.mjs | ESLint config for the dashboard. |
| dashboard/REALTIME_IOT_SETUP.md | Setup guide for ESP32 → dashboard telemetry ingestion. |
| dashboard/README.md | Default Next.js README for dashboard subproject. |
| dashboard/CLAUDE.md | Agent tooling pointer doc. |
| dashboard/AGENTS.md | Agent rules documentation. |
| dashboard/.gitignore | Dashboard-specific ignore rules. |
| agents/verifier_agent.py | Verifier agent: IsolationForest + correlation logic, aborts false alarms. |
| agents/triage_agent.py | Triage agent: survivability, gas threat, urgency, equipment checklist. |
| agents/reporter_agent.py | Reporter agent: compact prompt + optional Mistral SITREP generation with fallback. |
| agents/logistics_agent.py | Logistics agent: debris model + entry points + zones + ETA estimate. |
| README.md | Project-level README updated to describe NeuroMesh and setup. |
| .gitignore | Repository-level ignore rules (Python/Node artifacts). |
```ts
import { getLiveSnapshot, ingestTelemetry, isValidIncomingTelemetry } from '@/lib/telemetryStore';
```
This API route imports '@/lib/telemetryStore', but there is no corresponding file in the dashboard source tree. This will fail Next.js build/runtime with a module resolution error. Add the missing telemetryStore module (e.g., dashboard/src/lib/telemetryStore.ts) or update the import to the correct existing location.
Suggested change:

```diff
-import { getLiveSnapshot, ingestTelemetry, isValidIncomingTelemetry } from '@/lib/telemetryStore';
+type IncomingTelemetry = {
+  node_id: string;
+  lat: number;
+  lng: number;
+  acceleration_g: number;
+  gas_raw: number;
+  motion: boolean;
+};
+
+type TelemetrySnapshot = IncomingTelemetry & {
+  received_at: string;
+};
+
+let liveSnapshot: TelemetrySnapshot | null = null;
+
+function isRecord(value: unknown): value is Record<string, unknown> {
+  return typeof value === 'object' && value !== null;
+}
+
+function isValidIncomingTelemetry(payload: unknown): payload is IncomingTelemetry {
+  if (!isRecord(payload)) {
+    return false;
+  }
+  return (
+    typeof payload.node_id === 'string' &&
+    typeof payload.lat === 'number' &&
+    Number.isFinite(payload.lat) &&
+    typeof payload.lng === 'number' &&
+    Number.isFinite(payload.lng) &&
+    typeof payload.acceleration_g === 'number' &&
+    Number.isFinite(payload.acceleration_g) &&
+    typeof payload.gas_raw === 'number' &&
+    Number.isFinite(payload.gas_raw) &&
+    typeof payload.motion === 'boolean'
+  );
+}
+
+function ingestTelemetry(payload: IncomingTelemetry): TelemetrySnapshot {
+  liveSnapshot = {
+    ...payload,
+    received_at: new Date().toISOString(),
+  };
+  return liveSnapshot;
+}
+
+function getLiveSnapshot(): TelemetrySnapshot | null {
+  return liveSnapshot;
+}
```
```python
        time_window = 60
        team_size = 6
    elif survivability_score >= 30:
        urgency = "moderate"
```
determine_urgency can return urgency='moderate', but the rest of the codebase/types/documentation (e.g., pipeline/state.py) only mention low/high/immediate/extreme. This creates inconsistent downstream handling. Either map this branch to an existing level (e.g., 'high' or 'low') or update the allowed urgency set everywhere it’s referenced.
Suggested change:

```diff
-        urgency = "moderate"
+        urgency = "high"
```
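One way to lock this down, assuming the `>= 30` threshold from the diff (the other thresholds below are illustrative), is to constrain the return type with `Literal` so a type checker flags any stray level, and map the former "moderate" band onto "high":

```python
from typing import Literal

Urgency = Literal["low", "high", "immediate", "extreme"]

def determine_urgency(survivability_score: float) -> Urgency:
    # Thresholds other than >= 30 are illustrative assumptions.
    if survivability_score >= 70:
        return "immediate"
    elif survivability_score >= 30:
        # Previously returned "moderate", which no downstream consumer
        # recognizes; map it to the documented "high" level instead.
        return "high"
    return "low"
```

With the `Literal` return annotation, reintroducing "moderate" would be caught by mypy/pyright rather than surfacing as inconsistent behavior downstream.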
```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
```
Unused import: pandas is not used anywhere in this script. Please remove it to avoid unnecessary dependency/runtime overhead and keep linting clean.
```python
class TriageOutput(TypedDict):
    survivability_score: float  # 0-100
    estimated_persons: str      # "0" | "1-2" | "3-5" | "5+"
    life_sign_pattern: str      # "active" | "weakening" | "critical" | "none"
    gas_threat: str             # "clear" | "warning" | "lethal"
    entry_protocol: str         # "standard" | "breathing_apparatus" | "hazmat"
    urgency: str                # "low" | "high" | "immediate" | "extreme"
```
TriageOutput is missing fields that triage_agent actually emits (time_sensitivity_minutes, recommended_team_size, golden_hour_remaining, equipment_checklist, gas_note), and it documents urgency values that don't include "moderate" (which determine_urgency can return). Please expand/adjust the TypedDict so it matches the real output contract (or change triage_agent to conform).
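A sketch of an expanded `TypedDict` covering the fields listed above, with `Literal` values replacing the string comments; the value types of the added fields are assumptions and should be checked against what `triage_agent` actually emits:

```python
from typing import List, Literal, TypedDict

class TriageOutput(TypedDict):
    survivability_score: float  # 0-100
    estimated_persons: Literal["0", "1-2", "3-5", "5+"]
    life_sign_pattern: Literal["active", "weakening", "critical", "none"]
    gas_threat: Literal["clear", "warning", "lethal"]
    entry_protocol: Literal["standard", "breathing_apparatus", "hazmat"]
    urgency: Literal["low", "high", "immediate", "extreme"]
    # Fields triage_agent emits but the original TypedDict omitted;
    # the types below are assumptions, not taken from the source.
    time_sensitivity_minutes: int
    recommended_team_size: int
    golden_hour_remaining: float
    equipment_checklist: List[str]
    gas_note: str
```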
```python
class LogisticsOutput(TypedDict):
    primary_route: dict    # GeoJSON LineString
    alternate_route: dict  # GeoJSON LineString
    blocked_roads: List[str]
    estimated_eta_minutes: float
    entry_point: dict      # GeoJSON Point
    assembly_point: dict   # GeoJSON Point
    debris_risk_zones: List[dict]  # GeoJSON Polygons
```
LogisticsOutput is missing keys that logistics_agent returns (e.g., exclusion_radius_m) and its comments describe GeoJSON types that don't match the actual shape (route dicts with labels/entry_point). Please update the TypedDict to match the emitted structure, or adjust logistics_agent to produce the documented GeoJSON shapes.
```python
class VerifierOutput(TypedDict):
    is_genuine: bool
    confidence: float
    triggered_nodes: List[str]
    correlation_type: str  # "single_node" | "cluster" | "mesh_wide"
    rejection_reason: Optional[str]
```
TypedDict definitions are out of sync with actual values produced by the agents. For example, VerifierOutput.correlation_type is documented as "single_node"|"cluster"|"mesh_wide", but verifier_agent can return "none" and "pair"; this will break static type checking and can mislead downstream consumers. Update the allowed values (prefer Literal types) or align compute_spatial_correlation outputs to the documented set.
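For example, `VerifierOutput` could widen `correlation_type` to the values the agent actually returns, expressed as a `Literal` alias so static checkers enforce the set (this assumes "none" and "pair" are legitimate outputs rather than bugs in `compute_spatial_correlation`):

```python
from typing import List, Literal, Optional, TypedDict, get_args

# Union of documented values plus the ones verifier_agent actually emits.
CorrelationType = Literal["none", "single_node", "pair", "cluster", "mesh_wide"]

class VerifierOutput(TypedDict):
    is_genuine: bool
    confidence: float
    triggered_nodes: List[str]
    correlation_type: CorrelationType
    rejection_reason: Optional[str]
```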
```python
    def test_model_artifacts_exist(self):
        expected = [
            "models/seismic_model.keras",
            "models/gas_model.pkl",
            "models/survivor_model.pkl",
            "models/survivor_scaler.pkl",
            "models/validator_model.pkl",
        ]
        missing = [p for p in expected if not os.path.exists(p)]
        self.assertEqual(missing, [])
```
This test asserts model artifacts exist using relative paths (e.g., "models/..."), which makes it dependent on the current working directory the test runner uses. To make it robust, resolve paths relative to the repository root or the test file location (e.g., `Path(__file__).resolve().parents[...] / 'models' / ...`).
```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
from tensorflow import keras
```
Unused imports: LabelEncoder and tensorflow (tf) are not used in this script (keras is used via from tensorflow import keras). Please remove unused imports to keep dependencies clear and avoid failing lint/type checks.
```python
ACC_THRESHOLD = 0.25
GAS_THRESHOLD = 2000
EARTHQUAKE_DURATION_MS = 10_000
```
GAS_THRESHOLD is defined but never used in this module. If it’s meant to drive quake/gas alert logic, wire it into the relevant checks; otherwise remove it to avoid dead configuration that can drift from reality.
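If the constant is meant to drive gas alerting, the wiring could look like the sketch below; `classify_readings` is a hypothetical helper (the field names mirror the telemetry schema elsewhere in this PR), shown only to illustrate GAS_THRESHOLD being used alongside ACC_THRESHOLD:

```python
ACC_THRESHOLD = 0.25
GAS_THRESHOLD = 2000

def classify_readings(acceleration_g: float, gas_raw: int) -> dict:
    # Hypothetical helper: gates quake alerts on acceleration and
    # gas alerts on the raw sensor value, so neither constant is dead.
    return {
        "quake_alert": acceleration_g > ACC_THRESHOLD,
        "gas_alert": gas_raw > GAS_THRESHOLD,
    }
```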
```python
# Setup Mistral
client = MistralClient(api_key="H2z0gd6ieaMgUAAByONNnDtnmwLtonuW")
```
Hard-coded Mistral API key is committed in source. This is a credential leak and will also make local/CI behavior depend on a single key. Read the key from an environment variable (e.g., MISTRAL_API_KEY) and fail/disable LLM features when it’s missing; also rotate/revoke the leaked key.
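A sketch of the environment-variable approach; the `MistralClient` import path is taken from the diff but may vary by SDK version:

```python
import os

def make_mistral_client():
    """Return a Mistral client, or None when no API key is configured."""
    api_key = os.environ.get("MISTRAL_API_KEY")
    if not api_key:
        # No key: disable LLM features so CI and local runs degrade
        # gracefully to the non-LLM fallback path.
        return None
    from mistralai.client import MistralClient  # import path may vary by SDK version
    return MistralClient(api_key=api_key)

client = make_mistral_client()
```

Downstream code then checks `client is None` before attempting SITREP generation, which the reporter agent's existing fallback path already supports.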
Co-authored-by: sxhaakee <149256691+sxhaakee@users.noreply.github.com>