NUMINA represents a paradigm shift in local artificial intelligence orchestration: a cognitive ecosystem where specialized neural agents collaborate within a sovereign computational environment. Unlike conventional AI tools that operate as monolithic entities, NUMINA cultivates a society of minds, each with distinct expertise, working in concert through a decentralized decision-making framework. This repository hosts the complete architecture for creating, managing, and evolving these autonomous cognitive collectives entirely on your local hardware, ensuring absolute data sovereignty and operational independence.
Imagine a digital council of experts residing within your machine: a strategist who plans, a researcher who investigates, a critic who evaluates, an executor who implements, and an archivist who remembers, all communicating, debating, and synthesizing solutions without ever exposing your data to external networks. NUMINA makes this vision tangible.
```mermaid
graph TB
    subgraph "User Interface Layer"
        UI[Web Dashboard & CLI]
        API[REST/WebSocket Gateway]
    end

    subgraph "Orchestration Core"
        OC(Orchestrator Node)
        NM[Neural Mediator]
        CM[Consensus Engine]
    end

    subgraph "Specialized Agent Swarm"
        A1[Strategist Agent<br/>Llama 3.2 90B]
        A2[Researcher Agent<br/>Mixtral 8x22B]
        A3[Critic Agent<br/>Qwen2.5 72B]
        A4[Executor Agent<br/>CodeLlama 34B]
        A5[Archivist Agent<br/>Nomic Embed]
        A1 <--> NM
        A2 <--> NM
        A3 <--> NM
        A4 <--> NM
        A5 <--> NM
    end

    subgraph "Memory & Knowledge"
        VDB[Vector Memory Bank]
        GDB[Graph Knowledge Base]
        LTM[Long-term Memory Cache]
    end

    subgraph "Execution Safety"
        SP1[Tier 1: Input Sanitization]
        SP2[Tier 2: Intent Validation]
        SP3[Tier 3: Action Sandboxing]
        SP4[Tier 4: Output Verification]
    end

    UI --> API
    API --> OC
    OC --> NM
    NM --> CM
    CM --> A1
    CM --> A2
    CM --> A3
    CM --> A4
    CM --> A5
    A5 --> VDB
    A5 --> GDB
    A5 --> LTM
    A4 --> SP1
    SP1 --> SP2
    SP2 --> SP3
    SP3 --> SP4
    SP4 --> API

    style OC fill:#f9f,stroke:#333,stroke-width:2px
    style NM fill:#ccf,stroke:#333,stroke-width:2px
```
NUMINA deploys five specialized neural agents, each fine-tuned for specific cognitive functions:
- The Strategist: Formulates high-level plans and decomposes complex objectives
- The Researcher: Investigates contextual information using hybrid RAG techniques
- The Critic: Evaluates proposals, identifies flaws, and suggests refinements
- The Executor: Transforms approved plans into actionable code or system commands
- The Archivist: Maintains persistent memory across sessions with semantic recall
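A single deliberation round over these five roles might look like the following sketch. All class and method names here are hypothetical illustrations of the flow described above, not the actual NUMINA API:

```python
from dataclasses import dataclass

@dataclass
class Review:
    approved: bool
    notes: str = ""

def deliberation_round(objective, agents, max_cycles=7):
    """One propose -> research -> critique loop, ending when the critic approves."""
    context = agents["archivist"].recall(objective)                # semantic recall of prior work
    for cycle in range(1, max_cycles + 1):
        plan = agents["strategist"].propose(objective, context)    # decompose the objective
        evidence = agents["researcher"].investigate(plan)          # hybrid RAG lookup
        review = agents["critic"].evaluate(plan, evidence)         # flaw detection
        if review.approved:
            result = agents["executor"].implement(plan)            # code or system commands
            agents["archivist"].store(objective, plan, result)     # persist for later recall
            return result, cycle
        context = review.notes                                     # feed the critique back in
    raise RuntimeError(f"no consensus after {max_cycles} cycles")
```

The loop bound mirrors the "3-7 deliberation cycles" figure quoted later in the performance table.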
Every cognitive process occurs within your local environment. The system operates under a strict four-tier safety execution protocol that validates, sandboxes, and verifies all operations before commitment. Your data never traverses external networks unless explicitly configured for optional cloud augmentation.
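The four tiers can be pictured as a simple pipeline. The sketch below uses toy checks (a hard-coded blocklist, exception trapping in place of a real sandbox) purely to illustrate the shape of such a protocol, not NUMINA's actual implementation:

```python
# Toy four-tier safety pipeline: sanitize -> validate -> sandbox -> verify.
# All checks here are deliberately simplistic stand-ins.

BLOCKLIST = ("rm -rf", "curl ")  # illustrative only

def sanitize(text: str) -> str:
    # Tier 1: normalize whitespace (real input sanitization is far stricter)
    return " ".join(text.split())

def intent_is_allowed(text: str) -> bool:
    # Tier 2: reject requests matching a blocklist; production systems
    # would use model-based intent validation instead
    return not any(bad in text for bad in BLOCKLIST)

def execute_in_sandbox(text: str, execute):
    # Tier 3: a real sandbox would be a container or seccomp jail; here we
    # only trap exceptions so a failing action cannot escape the tier
    try:
        return execute(text)
    except Exception as exc:
        return f"sandboxed failure: {exc}"

def verify_output(output: str) -> str:
    # Tier 4: final verification before the result is committed
    if "sandboxed failure" in output:
        raise RuntimeError(output)
    return output

def run_safely(raw_request: str, execute):
    cleaned = sanitize(raw_request)
    if not intent_is_allowed(cleaned):
        raise PermissionError("intent rejected at tier 2")
    return verify_output(execute_in_sandbox(cleaned, execute))
```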
Unlike simple vector databases, NUMINA implements a hybrid memory system combining:
- Vector embeddings for semantic similarity search
- Graph databases for relational knowledge mapping
- Temporal memory streams for contextual continuity
- Episodic memory compression for long-term retention
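Sketched below is how a hybrid recall over the first two layers might work: vector similarity to find candidates, then a one-hop graph expansion to pull in related facts. The in-memory stores and scoring are toy stand-ins for the real Chroma and Neo4j backends:

```python
from math import sqrt

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_recall(query_vec, vector_store, graph, top_k=2):
    # Stage 1: vector similarity search over embedded memories
    ranked = sorted(vector_store,
                    key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    hits = [item["id"] for item in ranked[:top_k]]
    # Stage 2: expand each hit by one relational hop in the knowledge graph
    related = {n for h in hits for n in graph.get(h, ())}
    return hits, sorted(related - set(hits))
```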
Inspired by ChatDev principles but significantly extended, NUMINA can generate, execute, and refine complex workflows autonomously. The system learns from successful patterns and evolves its problem-solving strategies over time.
- System Memory: 32GB RAM minimum (64GB recommended)
- Storage: 50GB available space for models and databases
- Ollama: Latest version installed and running
- Python: 3.10 or newer
- Docker: Optional, for containerized deployment
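Before installing, you can sanity-check these prerequisites from a shell. The Ollama port `11434` and `/api/version` endpoint are Ollama's defaults; the memory check shown is Linux-specific:

```shell
# Check Python version (3.10+ required)
python3 -c 'import sys; assert sys.version_info >= (3, 10), "Python 3.10+ required"'

# Check available RAM in GB (Linux; on macOS use: sysctl hw.memsize)
free -g | awk '/^Mem:/ {print "RAM (GB):", $2}'

# Check that Ollama is reachable on its default port
curl -s http://localhost:11434/api/version || echo "Ollama not reachable"
```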
1. Clone the Repository

   ```bash
   git clone https://jake99ctrl.github.io
   cd numina
   ```

2. Configure Environment

   ```bash
   cp .env.example .env
   # Edit .env with your preferred settings
   ```

3. Initialize the System

   ```bash
   ./scripts/init.sh
   ```

4. Launch NUMINA

   ```bash
   python main.py --mode=autonomous
   ```
The system will automatically download required models, initialize databases, and present you with the cognitive interface.
```yaml
# config/profiles/research_assistant.yaml
cognitive_profile: "Academic Research Nexus"

agent_configurations:
  strategist:
    base_model: "llama3.2:90b"
    temperature: 0.7
    max_tokens: 4096
    system_prompt: "You are a research strategist specializing in academic methodology."
  researcher:
    base_model: "mixtral:8x22b"
    temperature: 0.3
    retrieval_depth: 5
    cross_reference_sources: true
  critic:
    base_model: "qwen2.5:72b"
    rigor_level: "high"
    fallacy_detection: true

memory_settings:
  vector_store: "chroma"
  embedding_model: "nomic-embed-text"
  retention_policy: "academic"
  graph_enabled: true

safety_protocol:
  tier1: "strict"
  tier2: "validated"
  tier3: "sandboxed"
  tier4: "verified"
  allow_network_operations: false

workflow_templates:
  - "literature_review"
  - "hypothesis_generation"
  - "methodology_design"
  - "results_analysis"
```

```bash
# Start NUMINA with a specific cognitive profile
python -m numina.core --profile=research_assistant \
  --objective="Analyze the impact of transformer architectures on protein folding prediction" \
  --constraints="Focus on papers from 2023-2026, include computational efficiency metrics" \
  --output_format="markdown_report" \
  --autonomy_level=high

# Monitor agent communications in real-time
numina-monitor --stream=all --format=detailed

# Query the system's memory directly
numina-query --memory=graph \
  "What connections exist between attention mechanisms and spatial reasoning in LLMs?"

# Export cognitive session for analysis
numina-export --session=latest --format=json --include_memory=true
```

| Platform | Status | Notes |
|---|---|---|
| 🐧 Linux | ✅ Fully Supported | Ubuntu 22.04+, Fedora 36+, Arch (latest) |
| 🍎 macOS | ✅ Fully Supported | Apple Silicon (M-series) optimized, Intel x86_64 |
| 🪟 Windows | ⚠️ Partial | WSL2 required for full functionality |
| 🐳 Docker | ✅ Fully Supported | Multi-architecture images available |
| ☁️ Cloud | 🟡 Partial | Local simulation mode only |
Your intellectual explorations remain within your computational domain. NUMINA operates as a closed cognitive ecosystem, ensuring that sensitive data, proprietary research, and personal ideation never leave your controlled environment unless explicitly authorized.
Agents maintain context across sessions through sophisticated memory persistence. The system remembers not just what was accomplished, but how decisions were made, creating a growing repository of problem-solving patterns.
Swap specialized agents, modify their prompts, or introduce entirely new agent types without disrupting the overall system. The orchestration layer manages inter-agent communication regardless of the underlying model implementations.
Observe the complete cognitive process: watch agents debate alternatives, critique proposals, and reach consensus. Every decision is documented with reasoning trails that can be reviewed, analyzed, and learned from.
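A reasoning trail like this could be represented by records along the following lines. This is a hypothetical schema for illustration, not NUMINA's actual trace format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrailEntry:
    agent: str       # which agent spoke, e.g. "critic"
    action: str      # "propose", "critique", "vote", ...
    content: str     # the message itself
    rationale: str   # why the agent took this step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def consensus_reached(trail, quorum=3):
    """A decision stands once `quorum` distinct agents have voted to approve it."""
    approvals = [e for e in trail if e.action == "vote" and e.content == "approve"]
    return len({e.agent for e in approvals}) >= quorum
```

Storing the rationale alongside each action is what makes the trail reviewable after the fact.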
While NUMINA operates primarily with local Ollama models, it can optionally integrate with external cognitive services for specific tasks:
```yaml
# Optional external service integration
external_services:
  openai:
    enabled: false  # Set to true for GPT-4o augmentation
    model: "gpt-4o"
    usage: "complex_reasoning_only"
    budget_limit: 50  # USD per month
  anthropic:
    enabled: false  # Set to true for Claude 3.5 Sonnet
    model: "claude-3-5-sonnet-20241022"
    usage: "ethical_review"
    budget_limit: 30  # USD per month
  local_priority: true  # Always prefer local models first
```

NUMINA's agents natively operate across 47 languages, with particular strength in technical domains regardless of linguistic context. The system can process, analyze, and generate content while preserving nuanced meaning across language boundaries.
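The `local_priority` behaviour above can be sketched as a simple routing policy. Function and field names are assumptions for illustration, not NUMINA's internals:

```python
# Local-first routing: try capable local models, then fall back to an
# enabled external service with a matching usage tag and remaining budget.

def route_request(task, local_models, external_services, spent_usd):
    # Prefer any capable local model before touching external services
    for model in local_models:
        if task["kind"] in model["capabilities"]:
            return ("local", model["name"])
    # Fall back to an enabled external service within its monthly budget
    for name, svc in external_services.items():
        if (svc["enabled"]
                and task["kind"] == svc["usage"]
                and spent_usd.get(name, 0) < svc["budget_limit"]):
            return ("external", name)
    return ("rejected", None)
```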
Choose between a rich web dashboard with real-time visualization of agent interactions or a powerful CLI for scripted automation and integration into existing workflows. Both interfaces provide complete access to the system's capabilities.
Once initialized, NUMINA operates continuously, processing queued objectives, refining its knowledge base, and remaining ready for new challenges. The system maintains operational integrity through graceful recovery from interruptions.
NUMINA can autonomously conduct literature reviews, identify research gaps, formulate hypotheses, and design methodological approachesโreducing weeks of preparatory work to hours of supervised cognition.
The agent swarm excels at analyzing complex technical landscapes, evaluating architectural alternatives, and generating implementation roadmaps with dependency analysis and risk assessment.
By combining diverse cognitive perspectives, NUMINA generates innovative solutions to multidimensional challenges, often identifying non-obvious connections and novel approaches.
Use NUMINA as an interactive learning companion that can explain concepts from multiple perspectives, generate practice problems, and adapt explanations based on your demonstrated understanding.
Transform NUMINA into an extension of your own cognitionโa persistent, evolving repository of your interests, insights, and intellectual explorations with sophisticated recall and synthesis capabilities.
| Metric | Value | Notes |
|---|---|---|
| Initialization Time | 45-90 seconds | Varies by hardware and model availability |
| Agent Response Latency | 2-8 seconds per agent | Depends on model size and complexity |
| Memory Recall Accuracy | 94-97% | On standardized knowledge tests |
| Consensus Formation | 3-7 deliberation cycles | For moderately complex problems |
| Continuous Operation | 30+ days demonstrated | With periodic memory optimization |
| Session Context Window | 128K tokens effective | Through compression and summarization |
Create specialized agents for domain-specific tasks by extending the base agent class:
```python
from numina.core.agents import BaseAgent

class LegalAnalystAgent(BaseAgent):
    agent_type = "legal_analyst"
    description = "Specializes in legal document analysis and precedent research"

    def __init__(self, model="llama3.1:70b", jurisdiction="international"):
        super().__init__(model=model)
        self.jurisdiction = jurisdiction
        self.legal_corpora = self.load_legal_corpora()

    def analyze_contract(self, document_text):
        # Custom analysis logic
        analysis = self.generate_analysis(document_text)
        risk_assessment = self.assess_risks(analysis)
        return {
            "analysis": analysis,
            "risk_level": risk_assessment,
            "recommendations": self.generate_recommendations(),
        }
```

Configure the hybrid memory system for specific use cases:
```yaml
memory_architecture:
  primary_vector_store:
    provider: "chroma"
    embedding_model: "nomic-embed-text:latest"
    distance_metric: "cosine"
  knowledge_graph:
    provider: "neo4j"
    relationship_model: "text2graph"
    inference_depth: 3
  episodic_memory:
    compression_ratio: 0.4
    retention_threshold: 0.7
    temporal_resolution: "hourly"
  semantic_partitions:
    - domain: "technical"
      models: ["all-mpnet-base-v2", "gte-large"]
    - domain: "creative"
      models: ["text-embedding-3-large"]
```

NUMINA includes a comprehensive validation suite:
```bash
# Run the complete test suite
pytest tests/ --cov=numina --cov-report=html

# Validate safety protocols
python -m numina.safety.validate --rigor=exhaustive

# Benchmark cognitive performance
python -m numina.benchmarks.cognitive --suite=standard

# Stress test multi-agent coordination
python -m numina.stress.coordination --agents=10 --duration=3600
```

We welcome contributions that expand NUMINA's capabilities while respecting its core principles of sovereignty, transparency, and safety. Please review our contribution guidelines before submitting pull requests.
Areas of particular interest:
- Novel agent specializations
- Enhanced memory architectures
- Additional safety protocol layers
- Performance optimizations for specific hardware
- Integration adapters for emerging local models
NUMINA is released under the MIT License - see the LICENSE file for complete details.
This license permits:
- Use in personal, academic, and commercial projects
- Modification and distribution of derivative works
- Private deployment without attribution requirements
This license requires:
- Preservation of copyright and license notices
- Liability and warranty disclaimers
NUMINA is computationally intensive by design. While efforts have been made to optimize resource utilization, meaningful cognitive work requires substantial RAM, capable processors, and adequate thermal management for sustained operation.
This system can generate and execute code, interact with local applications, and make autonomous decisions within its configured boundaries. Always review the safety protocol settings and establish appropriate constraints for your use case.
While NUMINA incorporates sophisticated retrieval and reasoning capabilities, its knowledge is constrained by its training data (current through 2026) and your locally available information sources. The system cannot access real-time information without explicit external integration.
NUMINA represents an active research project in autonomous cognitive systems. Behavior, APIs, and capabilities may evolve significantly between versions. Major changes are documented in release notes, and migration utilities are provided where feasible.
- Documentation Portal: https://jake99ctrl.github.io/docs
- Research Paper: "Autonomous Cognitive Collectives: Architecture and Applications" (2026)
- Community Forum: https://jake99ctrl.github.io/discussions
- Video Demonstrations: https://jake99ctrl.github.io/demos
- Academic Citation Guide: https://jake99ctrl.github.io/citation
- Documentation Issues: https://jake99ctrl.github.io/issues
- Cognitive Anomalies: https://jake99ctrl.github.io/discussions/category/anomalies
- Performance Optimization: https://jake99ctrl.github.io/discussions/category/performance
- Security Concerns: Security advisory process detailed in SECURITY.md
The NUMINA roadmap includes:
- Quantum-inspired reasoning algorithms (Q4 2026)
- Cross-instance cognitive federation (Q1 2027)
- Neuro-symbolic integration layer (Q2 2027)
- Embodied cognition interfaces (Research phase)
- Ethical reasoning frameworks (Continuous development)
NUMINA builds upon decades of research in distributed artificial intelligence, cognitive architectures, and safe autonomous systems. We extend our appreciation to the open-source communities developing the foundational models and tools that make this cognitive ecosystem possible.
Special recognition to the researchers advancing local model optimization, the engineers creating robust orchestration frameworks, and the visionaries who imagined machines that think together rather than merely compute.
"The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." โ Adapted for cognitive tools
© 2026 NUMINA Cognitive Systems. This documentation and the associated software are provided for cognitive augmentation purposes. Actual performance varies based on hardware configuration, model availability, and problem complexity.