Fix/dashboard setup #31
Updated project title and description in README.
Pull request overview
This PR adds an end-to-end “NeuroMesh” demo: training scripts and prediction modules for multiple sensor models, an orchestration pipeline that generates an LLM SITREP, and a Next.js dashboard to simulate scenarios and visualize the pipeline.
Changes:
- Added training scripts + synthetic data generators for seismic/gas/survivor/validator models, plus saved model artifacts.
- Added prediction modules and a Python pipeline that runs all models and generates a SITREP via Mistral.
- Added a Next.js “SimulationDashboard” UI to run predefined scenarios and display outputs.
Reviewed changes
Copilot reviewed 27 out of 45 changed files in this pull request and generated 17 comments.
Show a summary per file
| File | Description |
|---|---|
| train/train_validator.py | Trains an IsolationForest “validator” model and saves it. |
| train/train_survivor.py | Trains a logistic regression survivor model + scaler and saves them. |
| train/train_seismic.py | Trains a small 1D CNN seismic classifier and saves it. |
| train/train_gas.py | Trains a RandomForest gas classifier and saves it. |
| predict/predict_validator.py | Loads validator model and predicts authenticity/anomaly info. |
| predict/predict_survivor.py | Loads survivor model+scaler and predicts survivor probability/urgency. |
| predict/predict_seismic.py | Loads seismic Keras model and predicts class/magnitude/is_crisis. |
| predict/predict_gas.py | Loads gas model and predicts hazard type/severity/entry safety. |
| predict/__pycache__/predict_validator.cpython-313.pyc | Committed compiled Python artifact. |
| predict/__pycache__/predict_survivor.cpython-313.pyc | Committed compiled Python artifact. |
| predict/__pycache__/predict_seismic.cpython-313.pyc | Committed compiled Python artifact. |
| predict/__pycache__/predict_gas.cpython-313.pyc | Committed compiled Python artifact. |
| pipeline/langgraph_pipeline.py | Orchestrates model inference and calls Mistral to generate SITREP. |
| models/survivor_scaler.pkl | Committed trained scaler artifact. |
| models/survivor_model.pkl | Committed trained model artifact. |
| data/survivor_data.csv | Committed generated training dataset. |
| data/generate_survivor_data.py | Generates survivor training dataset. |
| data/generate_seismic_data.py | Generates seismic training dataset. |
| data/generate_gas_data.py | Generates gas training dataset. |
| dashboard/tsconfig.json | Adds TS config for the dashboard app. |
| dashboard/src/components/SimulationDashboard.tsx | Main dashboard UI + scenario simulation/pipeline visualization. |
| dashboard/src/app/page.tsx | Renders the dashboard component. |
| dashboard/src/app/layout.tsx | Next.js layout wrapper + font setup. |
| dashboard/src/app/globals.css | Global styles / Tailwind setup. |
| dashboard/src/app/favicon.ico | Dashboard favicon asset. |
| dashboard/public/window.svg | Next.js template public asset. |
| dashboard/public/vercel.svg | Next.js template public asset. |
| dashboard/public/next.svg | Next.js template public asset. |
| dashboard/public/globe.svg | Next.js template public asset. |
| dashboard/public/file.svg | Next.js template public asset. |
| dashboard/postcss.config.mjs | PostCSS config for Tailwind. |
| dashboard/package.json | Dashboard dependencies/scripts. |
| dashboard/next.config.ts | Next.js config (turbopack root). |
| dashboard/eslint.config.mjs | ESLint config for the dashboard. |
| dashboard/README.md | Next.js template README for the dashboard. |
| dashboard/CLAUDE.md | Links to agent rules doc. |
| dashboard/AGENTS.md | Agent rules / warning doc. |
| dashboard/.gitignore | Dashboard-specific gitignore. |
| README.md | Removes the repo’s root README template content. |
```tsx
import { useState, useEffect } from 'react';
import { motion, AnimatePresence } from 'framer-motion';
import { Radio, AlertTriangle, ShieldCheck, Flame, Users, CheckCircle2, Activity, MapPin, Orbit, Zap, Wind, Database } from 'lucide-react';
```
useEffect and Wind are imported but never used. Remove unused imports to keep the module clean and avoid lint failures.
Suggested change:
```diff
-import { useState, useEffect } from 'react';
-import { Radio, AlertTriangle, ShieldCheck, Flame, Users, CheckCircle2, Activity, MapPin, Orbit, Zap, Wind, Database } from 'lucide-react';
+import { useState } from 'react';
+import { Radio, AlertTriangle, ShieldCheck, Flame, Users, CheckCircle2, Activity, MapPin, Orbit, Zap, Database } from 'lucide-react';
```
```tsx
<PipelineStage
  label="Sensor Ingestion" icon={<Activity />}
  state={pipelineState === 'transmitting' ? 'active' : 'done'}
```
When pipelineState is 'idle', this stage is rendered as 'done', so the UI shows “Sensor Ingestion” completed even before a scenario runs. Consider mapping 'idle' to 'idle' here and only showing 'done' after the pipeline has progressed past this step.
Suggested change:
```diff
-state={pipelineState === 'transmitting' ? 'active' : 'done'}
+state={pipelineState === 'transmitting' ? 'active' : (['models', 'validating', 'llm', 'complete'].includes(pipelineState) ? 'done' : 'idle')}
```
```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
```
LabelEncoder is imported but never used. Remove it (or actually encode labels if needed) to keep dependencies minimal.
```python
import tensorflow as tf
from tensorflow import keras
```
tensorflow as tf is imported but never used (you only use keras from tensorflow). Remove the unused import to avoid lint noise and clarify dependencies.
```python
model = joblib.load('models/validator_model.pkl')
```
Models are loaded at module import time. This makes importing the module fail if the model file isn't present and increases cold-start latency. Prefer lazy-loading inside the function (optionally with a cached singleton) and provide a clear error if the model file is missing.
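A minimal sketch of the lazy-loaded, cached-singleton pattern suggested above. The helper name `get_model` is illustrative (not from this repo); the path matches the one in the PR:

```python
from functools import lru_cache
from pathlib import Path

MODEL_PATH = Path("models/validator_model.pkl")

@lru_cache(maxsize=1)
def get_model():
    """Load the validator model once, on first use, with a clear error if absent."""
    if not MODEL_PATH.exists():
        raise FileNotFoundError(
            f"Missing model artifact: {MODEL_PATH}. Run train/train_validator.py first."
        )
    import joblib  # deferred so importing this module never requires joblib or the artifact
    return joblib.load(MODEL_PATH)
```

With this shape, `import predict.predict_validator` succeeds even without the artifact; the cost and the failure mode both move to the first prediction call.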
```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
```
pandas is imported but never used. Remove the unused import to avoid unnecessary dependency/overhead and lint noise.
```csv
pir_count,time_since_event_mins,magnitude,survivor_present
15,0.6556794712879388,3.2188075050696052,1
14,3.5655650546979523,4.903594689121354,1
```
This large generated dataset is committed directly to the repo. If it’s reproducible via data/generate_survivor_data.py, consider not committing the CSV (or store it via Git LFS) to keep the repository size and diff noise manageable.
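If the team prefers LFS over regenerating the data, a minimal `.gitattributes` sketch (paths assume the layout in the file table above; this is what `git lfs track` would write):

```
data/*.csv filter=lfs diff=lfs merge=lfs -text
models/*.pkl filter=lfs diff=lfs merge=lfs -text
```

Note that files already committed to history stay in the repo until migrated (e.g. via `git lfs migrate`), so dropping the CSV and regenerating it locally is the simpler option.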
```python
# Import all prediction functions
import sys
sys.path.append('.')
from predict.predict_seismic import predict_seismic
from predict.predict_gas import predict_gas
from predict.predict_survivor import predict_survivor
from predict.predict_validator import predict_validator
```
sys.path.append('.') is a brittle import workaround that can break when the working directory changes and can also cause module shadowing. Prefer proper packaging (e.g., make the repo a package / use relative imports) or invoke via python -m ... so imports resolve without mutating sys.path.
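If full packaging is out of scope for this PR, a smaller fix is to anchor the path to the file instead of the working directory. This sketch assumes pipeline/langgraph_pipeline.py sits one level below the repo root; adjust the `parent` count if the layout differs:

```python
import sys
from pathlib import Path

# Resolve the repo root relative to this file, not the process working directory.
REPO_ROOT = Path(__file__).resolve().parent.parent
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

# from predict.predict_seismic import predict_seismic  # now resolves from any cwd
```

Running the pipeline as a module (`python -m pipeline.langgraph_pipeline` from the repo root) would make even this unnecessary.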
```python
model = joblib.load('models/gas_model.pkl')
LABELS = ['safe', 'LPG_leak', 'smoke_fire']
```
Models are loaded at module import time. This makes importing the module fail if the model file isn't present and increases cold-start latency. Prefer lazy-loading inside the function (optionally with a cached singleton) and provide a clear error if the model file is missing.
```python
model = joblib.load('models/survivor_model.pkl')
scaler = joblib.load('models/survivor_scaler.pkl')
```
Models are loaded at module import time. This makes importing the module fail if the model file isn't present and increases cold-start latency. Prefer lazy-loading inside the function (optionally with a cached singleton) and provide a clear error if the model file is missing.
Created a dashboard for the ML model.