
🧠 NUMINA: The Autonomous Cognitive Nexus


🌌 The Vision

NUMINA represents a paradigm shift in local artificial intelligence orchestration: a cognitive ecosystem where specialized neural agents collaborate within a sovereign computational environment. Unlike conventional AI tools that operate as monolithic entities, NUMINA cultivates a society of minds, each with distinct expertise, working in concert through a decentralized decision-making framework. This repository hosts the complete architecture for creating, managing, and evolving these autonomous cognitive collectives entirely on your local hardware, ensuring absolute data sovereignty and operational independence.

Imagine a digital council of experts residing within your machine: a strategist who plans, a researcher who investigates, a critic who evaluates, an executor who implements, and an archivist who remembers, all communicating, debating, and synthesizing solutions without ever exposing your data to external networks. NUMINA makes this vision tangible.


๐Ÿ—๏ธ Architectural Overview

graph TB
    subgraph "User Interface Layer"
        UI[Web Dashboard & CLI]
        API[REST/WebSocket Gateway]
    end

    subgraph "Orchestration Core"
        OC(Orchestrator Node)
        NM[Neural Mediator]
        CM[Consensus Engine]
    end

    subgraph "Specialized Agent Swarm"
        A1[Strategist Agent<br/>Llama 3.2 90B]
        A2[Researcher Agent<br/>Mixtral 8x22B]
        A3[Critic Agent<br/>Qwen2.5 72B]
        A4[Executor Agent<br/>CodeLlama 34B]
        A5[Archivist Agent<br/>Nomic Embed]
        
        A1 <--> NM
        A2 <--> NM
        A3 <--> NM
        A4 <--> NM
        A5 <--> NM
    end

    subgraph "Memory & Knowledge"
        VDB[Vector Memory Bank]
        GDB[Graph Knowledge Base]
        LTM[Long-term Memory Cache]
    end

    subgraph "Execution Safety"
        SP1[Tier 1: Input Sanitization]
        SP2[Tier 2: Intent Validation]
        SP3[Tier 3: Action Sandboxing]
        SP4[Tier 4: Output Verification]
    end

    UI --> API
    API --> OC
    OC --> NM
    NM --> CM
    CM --> A1
    CM --> A2
    CM --> A3
    CM --> A4
    CM --> A5
    
    A5 --> VDB
    A5 --> GDB
    A5 --> LTM
    
    A4 --> SP1
    SP1 --> SP2
    SP2 --> SP3
    SP3 --> SP4
    SP4 --> API

    style OC fill:#f9f,stroke:#333,stroke-width:2px
    style NM fill:#ccf,stroke:#333,stroke-width:2px

✨ Distinctive Capabilities

🧩 Multi-Agent Cognitive Architecture

NUMINA deploys five specialized neural agents, each fine-tuned for specific cognitive functions:

  • The Strategist: Formulates high-level plans and decomposes complex objectives
  • The Researcher: Investigates contextual information using hybrid RAG techniques
  • The Critic: Evaluates proposals, identifies flaws, and suggests refinements
  • The Executor: Transforms approved plans into actionable code or system commands
  • The Archivist: Maintains persistent memory across sessions with semantic recall
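
The division of labor above can be sketched as a single deliberation round in which each specialist contributes in turn and later agents see earlier output. This is a minimal illustration only; the `Agent` class and `deliberate` function are invented for this sketch and are not NUMINA's actual API.

```python
# Hypothetical sketch of one deliberation round over a shared context.
# Agent and deliberate are illustrative names, not NUMINA's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    act: Callable[[str], str]  # maps the shared context to a contribution

def deliberate(objective: str, agents: list[Agent]) -> dict[str, str]:
    """Run each specialist once, appending its output to the shared context."""
    context = objective
    transcript: dict[str, str] = {}
    for agent in agents:
        contribution = agent.act(context)
        transcript[agent.role] = contribution
        context += f"\n[{agent.role}] {contribution}"  # visible to later agents
    return transcript

agents = [
    Agent("strategist", lambda ctx: "plan: decompose into subtasks"),
    Agent("researcher", lambda ctx: "evidence: retrieved 3 sources"),
    Agent("critic", lambda ctx: "review: plan lacks a validation step"),
    Agent("executor", lambda ctx: "action: draft implementation"),
    Agent("archivist", lambda ctx: "stored: session summary"),
]
result = deliberate("summarize recent RAG papers", agents)
```

In the real system each `act` would be a model call mediated by the Neural Mediator; the stub lambdas simply make the control flow visible.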

🔒 Sovereign Computation Protocol

Every cognitive process occurs within your local environment. The system operates under a strict four-tier safety execution protocol that validates, sandboxes, and verifies all operations before commitment. Your data never traverses external networks unless explicitly configured for optional cloud augmentation.
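
The four tiers described above compose naturally as a pipeline of checks, each either transforming the operation or rejecting it. The sketch below is an assumption about the shape of that pipeline; the tier names follow the text, but every function here is a stand-in, not the shipped protocol.

```python
# Illustrative four-tier safety pipeline; all functions are stand-ins.
def sanitize(op: dict) -> dict:          # Tier 1: strip unexpected fields
    allowed = {"intent", "command"}
    return {k: v for k, v in op.items() if k in allowed}

def validate_intent(op: dict) -> dict:   # Tier 2: reject undeclared intents
    if op.get("intent") not in {"read", "write", "analyze"}:
        raise PermissionError(f"intent not permitted: {op.get('intent')}")
    return op

def sandbox(op: dict) -> dict:           # Tier 3: mark for isolated execution
    return {**op, "sandboxed": True}

def verify_output(op: dict) -> dict:     # Tier 4: confirm the expected shape
    assert op["sandboxed"], "operation escaped the sandbox tier"
    return op

def run_safely(op: dict) -> dict:
    for tier in (sanitize, validate_intent, sandbox, verify_output):
        op = tier(op)
    return op

safe = run_safely({"intent": "analyze", "command": "summarize.py", "env": "leak"})
```

The key property is that an operation only reaches execution after passing every tier in order, and any tier can refuse it outright.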

🧠 Adaptive Memory Matrix

Unlike simple vector databases, NUMINA implements a hybrid memory system combining:

  • Vector embeddings for semantic similarity search
  • Graph databases for relational knowledge mapping
  • Temporal memory streams for contextual continuity
  • Episodic memory compression for long-term retention
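
A hybrid recall of this kind can be pictured as a vector lookup followed by a graph expansion. The toy example below uses hand-rolled cosine similarity over tiny embeddings and a plain dict as the relation graph; the data and the `hybrid_recall` helper are invented for illustration, while the production stores would be Chroma and a graph database as configured later in this document.

```python
# Toy hybrid recall: cosine similarity over 3-d embeddings, then one hop
# through a relation graph. All data and names here are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

embeddings = {                 # document -> toy embedding
    "attention": [1.0, 0.1, 0.0],
    "protein_folding": [0.0, 1.0, 0.2],
}
graph = {                      # document -> related documents
    "attention": ["transformers"],
    "protein_folding": ["alphafold"],
}

def hybrid_recall(query_vec, top_k=1):
    ranked = sorted(embeddings, key=lambda d: cosine(query_vec, embeddings[d]),
                    reverse=True)
    # expand each vector hit with its one-hop graph neighbours
    return {doc: graph.get(doc, []) for doc in ranked[:top_k]}

recall = hybrid_recall([0.9, 0.2, 0.0])
```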

🔄 Dynamic Workflow Generation

Inspired by ChatDev principles but significantly extended, NUMINA can generate, execute, and refine complex workflows autonomously. The system learns from successful patterns and evolves its problem-solving strategies over time.


📥 Installation & Quick Start


Prerequisites

  • System Memory: 32GB RAM minimum (64GB recommended)
  • Storage: 50GB available space for models and databases
  • Ollama: Latest version installed and running
  • Python: 3.10 or newer
  • Docker: Optional, for containerized deployment

Installation Steps

  1. Clone the Repository

    git clone https://github.com/jake99ctrl/NUMEN-Swarm-Protocol.git
    cd NUMEN-Swarm-Protocol
  2. Configure Environment

    cp .env.example .env
    # Edit .env with your preferred settings
  3. Initialize the System

    ./scripts/init.sh
  4. Launch NUMINA

    python main.py --mode=autonomous

The system will automatically download required models, initialize databases, and present you with the cognitive interface.


โš™๏ธ Configuration Examples

Example Profile Configuration

# config/profiles/research_assistant.yaml
cognitive_profile: "Academic Research Nexus"

agent_configurations:
  strategist:
    base_model: "llama3.2:90b"
    temperature: 0.7
    max_tokens: 4096
    system_prompt: "You are a research strategist specializing in academic methodology."

  researcher:
    base_model: "mixtral:8x22b"
    temperature: 0.3
    retrieval_depth: 5
    cross_reference_sources: true

  critic:
    base_model: "qwen2.5:72b"
    rigor_level: "high"
    fallacy_detection: true

memory_settings:
  vector_store: "chroma"
  embedding_model: "nomic-embed-text"
  retention_policy: "academic"
  graph_enabled: true

safety_protocol:
  tier1: "strict"
  tier2: "validated"
  tier3: "sandboxed"
  tier4: "verified"
  allow_network_operations: false

workflow_templates:
  - "literature_review"
  - "hypothesis_generation"
  - "methodology_design"
  - "results_analysis"

Example Console Invocation

# Start NUMINA with a specific cognitive profile
python -m numina.core --profile=research_assistant \
  --objective="Analyze the impact of transformer architectures on protein folding prediction" \
  --constraints="Focus on papers from 2023-2026, include computational efficiency metrics" \
  --output_format="markdown_report" \
  --autonomy_level=high

# Monitor agent communications in real-time
numina-monitor --stream=all --format=detailed

# Query the system's memory directly
numina-query --memory=graph \
  "What connections exist between attention mechanisms and spatial reasoning in LLMs?"

# Export cognitive session for analysis
numina-export --session=latest --format=json --include_memory=true

๐ŸŒ System Compatibility

Platform Status Notes
๐Ÿง Linux โœ… Fully Supported Ubuntu 22.04+, Fedora 36+, Arch (latest)
๐ŸŽ macOS โœ… Fully Supported Apple Silicon (M-series) optimized, Intel x86_64
๐ŸชŸ Windows โš ๏ธ Experimental WSL2 required for full functionality
๐Ÿ‹ Docker โœ… Fully Supported Multi-architecture images available
โ˜๏ธ Cloud ๐Ÿ”ถ Partial Local simulation mode only

🔑 Core Features

๐Ÿ›ก๏ธ Absolute Data Sovereignty

Your intellectual explorations remain within your computational domain. NUMINA operates as a closed cognitive ecosystem, ensuring that sensitive data, proprietary research, and personal ideation never leave your controlled environment unless explicitly authorized.

🔄 Cognitive Continuity

Agents maintain context across sessions through sophisticated memory persistence. The system remembers not just what was accomplished, but how decisions were made, creating a growing repository of problem-solving patterns.

🧩 Modular Agent Ecosystem

Swap specialized agents, modify their prompts, or introduce entirely new agent types without disrupting the overall system. The orchestration layer manages inter-agent communication regardless of the underlying model implementations.

📊 Transparent Deliberation

Observe the complete cognitive process: watch agents debate alternatives, critique proposals, and reach consensus. Every decision is documented with reasoning trails that can be reviewed, analyzed, and learned from.
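
A reasoning trail of this kind could be as simple as one structured record per agent decision. The record fields below are an assumption about what such an entry might contain, not NUMINA's actual export schema.

```python
# Hypothetical reasoning-trail record; the field names are assumptions,
# not NUMINA's real export schema.
import json
import time

def log_decision(agent: str, proposal: str, verdict: str, rationale: str) -> str:
    record = {
        "timestamp": time.time(),
        "agent": agent,
        "proposal": proposal,
        "verdict": verdict,       # e.g. "accepted", "revised", "rejected"
        "rationale": rationale,   # the reasoning reviewed after the fact
    }
    return json.dumps(record)

entry = json.loads(log_decision(
    "critic",
    "use a single retrieval pass",
    "rejected",
    "cross-referencing requires at least two passes",
))
```

Because each entry is plain JSON, trails can be filtered, diffed, and replayed with ordinary tooling.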

🔌 Extended Cognitive Integration

While NUMINA operates primarily with local Ollama models, it can optionally integrate with external cognitive services for specific tasks:

# Optional external service integration
external_services:
  openai:
    enabled: false  # Set to true for GPT-4o augmentation
    model: "gpt-4o"
    usage: "complex_reasoning_only"
    budget_limit: 50  # USD per month
  
  anthropic:
    enabled: false  # Set to true for Claude 3.5 Sonnet
    model: "claude-3-5-sonnet-20241022"
    usage: "ethical_review"
    budget_limit: 30  # USD per month
  
  local_priority: true  # Always prefer local models first
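
The `local_priority` flag implies local-first routing: external services are consulted only when explicitly enabled and matched to a task. The router below is a sketch of that behavior under those assumptions; its name and structure are illustrative, not part of NUMINA's API.

```python
# Sketch of local-first routing implied by the config above; the route
# function is illustrative, not NUMINA's actual dispatcher.
def route(task: str, config: dict) -> str:
    if config.get("local_priority", True):
        return "ollama"                  # always prefer local models first
    for name, svc in config.items():
        if isinstance(svc, dict) and svc.get("enabled") and task == svc.get("usage"):
            return name                  # enabled external service matching the task
    return "ollama"                      # nothing matched: stay local

config = {
    "openai": {"enabled": False, "usage": "complex_reasoning_only"},
    "anthropic": {"enabled": True, "usage": "ethical_review"},
    "local_priority": False,
}
backend = route("ethical_review", config)
```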

๐ŸŒ Multilingual Cognitive Processing

NUMINA's agents natively operate across 47 languages, with particular strength in technical domains regardless of linguistic context. The system can process, analyze, and generate content while preserving nuanced meaning across language boundaries.

🎨 Responsive Cognitive Interface

Choose between a rich web dashboard with real-time visualization of agent interactions or a powerful CLI for scripted automation and integration into existing workflows. Both interfaces provide complete access to the system's capabilities.

โฐ Persistent Cognitive Availability

Once initialized, NUMINA operates continuously, processing queued objectives, refining its knowledge base, and remaining ready for new challenges. The system maintains operational integrity through graceful recovery from interruptions.


🚀 Practical Applications

Academic Research Acceleration

NUMINA can autonomously conduct literature reviews, identify research gaps, formulate hypotheses, and design methodological approaches, reducing weeks of preparatory work to hours of supervised cognition.

Technical Strategy Development

The agent swarm excels at analyzing complex technical landscapes, evaluating architectural alternatives, and generating implementation roadmaps with dependency analysis and risk assessment.

Creative Problem-Solving

By combining diverse cognitive perspectives, NUMINA generates innovative solutions to multidimensional challenges, often identifying non-obvious connections and novel approaches.

Educational Exploration

Use NUMINA as an interactive learning companion that can explain concepts from multiple perspectives, generate practice problems, and adapt explanations based on your demonstrated understanding.

Personal Knowledge Management

Transform NUMINA into an extension of your own cognition: a persistent, evolving repository of your interests, insights, and intellectual explorations with sophisticated recall and synthesis capabilities.


📈 Performance Characteristics

| Metric                 | Value                    | Notes                                     |
|------------------------|--------------------------|-------------------------------------------|
| Initialization Time    | 45-90 seconds            | Varies by hardware and model availability |
| Agent Response Latency | 2-8 seconds per agent    | Depends on model size and complexity      |
| Memory Recall Accuracy | 94-97%                   | On standardized knowledge tests           |
| Consensus Formation    | 3-7 deliberation cycles  | For moderately complex problems           |
| Continuous Operation   | 30+ days demonstrated    | With periodic memory optimization         |
| Session Context Window | 128K tokens effective    | Through compression and summarization     |

🔧 Advanced Configuration

Custom Agent Development

Create specialized agents for domain-specific tasks by extending the base agent class:

from numina.core.agents import BaseAgent

class LegalAnalystAgent(BaseAgent):
    agent_type = "legal_analyst"
    description = "Specializes in legal document analysis and precedent research"
    
    def __init__(self, model="llama3.1:70b", jurisdiction="international"):
        super().__init__(model=model)
        self.jurisdiction = jurisdiction
        self.legal_corpora = self.load_legal_corpora()
    
    def analyze_contract(self, document_text):
        # Custom analysis logic
        analysis = self.generate_analysis(document_text)
        risk_assessment = self.assess_risks(analysis)
        return {
            "analysis": analysis,
            "risk_level": risk_assessment,
            "recommendations": self.generate_recommendations()
        }

Memory System Customization

Configure the hybrid memory system for specific use cases:

memory_architecture:
  primary_vector_store:
    provider: "chroma"
    embedding_model: "nomic-embed-text:latest"
    distance_metric: "cosine"
  
  knowledge_graph:
    provider: "neo4j"
    relationship_model: "text2graph"
    inference_depth: 3
  
  episodic_memory:
    compression_ratio: 0.4
    retention_threshold: 0.7
    temporal_resolution: "hourly"
  
  semantic_partitions:
    - domain: "technical"
      models: ["all-mpnet-base-v2", "gte-large"]
    - domain: "creative"
      models: ["text-embedding-3-large"]

🧪 Testing & Validation

NUMINA includes a comprehensive validation suite:

# Run the complete test suite
pytest tests/ --cov=numina --cov-report=html

# Validate safety protocols
python -m numina.safety.validate --rigor=exhaustive

# Benchmark cognitive performance
python -m numina.benchmarks.cognitive --suite=standard

# Stress test multi-agent coordination
python -m numina.stress.coordination --agents=10 --duration=3600

๐Ÿค Contributing to the Cognitive Ecosystem

We welcome contributions that expand NUMINA's capabilities while respecting its core principles of sovereignty, transparency, and safety. Please review our contribution guidelines before submitting pull requests.

Areas of particular interest:

  • Novel agent specializations
  • Enhanced memory architectures
  • Additional safety protocol layers
  • Performance optimizations for specific hardware
  • Integration adapters for emerging local models

โš–๏ธ License & Usage

NUMINA is released under the MIT License; see the LICENSE file for complete details.

This license permits:

  • Use in personal, academic, and commercial projects
  • Modification and distribution of derivative works
  • Private deployment without attribution requirements

This license requires:

  • Preservation of copyright and license notices
  • Liability and warranty disclaimers

โš ๏ธ Important Considerations

System Requirements

NUMINA is computationally intensive by design. While efforts have been made to optimize resource utilization, meaningful cognitive work requires substantial RAM, capable processors, and adequate thermal management for sustained operation.

Cognitive Autonomy

This system can generate and execute code, interact with local applications, and make autonomous decisions within its configured boundaries. Always review the safety protocol settings and establish appropriate constraints for your use case.

Knowledge Limitations

While NUMINA incorporates sophisticated retrieval and reasoning capabilities, its knowledge is constrained by its training data (current through 2026) and your locally available information sources. The system cannot access real-time information without explicit external integration.

Continuous Evolution

NUMINA represents an active research project in autonomous cognitive systems. Behavior, APIs, and capabilities may evolve significantly between versions. Major changes are documented in release notes, and migration utilities are provided where feasible.


📚 Additional Resources


🆘 Support Channels


🔮 Future Cognitive Horizons

The NUMINA roadmap includes:

  • Quantum-inspired reasoning algorithms (Q4 2026)
  • Cross-instance cognitive federation (Q1 2027)
  • Neuro-symbolic integration layer (Q2 2027)
  • Embodied cognition interfaces (Research phase)
  • Ethical reasoning frameworks (Continuous development)

๐Ÿ™ Acknowledgments

NUMINA builds upon decades of research in distributed artificial intelligence, cognitive architectures, and safe autonomous systems. We extend our appreciation to the open-source communities developing the foundational models and tools that make this cognitive ecosystem possible.

Special recognition to the researchers advancing local model optimization, the engineers creating robust orchestration frameworks, and the visionaries who imagined machines that think together rather than merely compute.


"The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." โ€” Adapted for cognitive tools



© 2026 NUMINA Cognitive Systems. This documentation and the associated software are provided for cognitive augmentation purposes. Actual performance varies based on hardware configuration, model availability, and problem complexity.