acon logo

acon

A conversational AI framework built in Rust.

Plug in topic knowledge, let the LLM decide what's relevant, and deliver structured, context-aware conversations — perfect for customer support bots, info assistants, and beyond.



✨ What is acon?

acon is a lightweight framework that gives an LLM the ability to read topic files, retrieve RAG information, and call external APIs in order to hold more structured, grounded conversations.

Instead of relying solely on the model's training data, acon dynamically selects relevant knowledge at runtime — meaning your bot stays accurate, on-topic, and easy to extend.

Use Cases

| Scenario | Example |
| --- | --- |
| 🎧 Customer Support | "My SIM card isn't working" → acon pulls in SIM troubleshooting docs and guides the user step by step. |
| 🎮 Info Assistant | "Tell me about champion abilities" → acon retrieves champion data and responds with precise stats. |
| 📚 Knowledge Base Chat | "How do I configure the router?" → acon matches the WLAN topic and serves the relevant instructions. |
| 🤖 Anything you can think of | Any scenario, any idea: try it out and see if it works. |

🏗️ Architecture

acon architecture diagram

acon is composed of three core components that work together:

Orchestrator

The brain of the system. It receives user input, manages conversation history, coordinates with the Advisor and Executor, and ultimately generates the response via the LLM.

  • Handles the full conversation lifecycle
  • Maintains multi-turn context (history)
  • Decides when to use plain conversation vs. RAG-augmented responses
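The prompt-assembly step can be sketched as follows. This is a hypothetical illustration, not acon's actual API: the function name, roles, and prompt layout are assumptions, but it shows the idea of layering the system prompt, any Advisor-selected RAG context, and the multi-turn history into one request.

```rust
// Hypothetical sketch of Orchestrator-style prompt assembly (not
// acon's real API): system prompt first, then optional RAG context,
// then the running multi-turn history.
fn build_prompt(system: &str, rag_context: Option<&str>, history: &[(String, String)]) -> String {
    let mut prompt = format!("SYSTEM: {system}\n");
    // Context is only injected when the Advisor matched a topic;
    // otherwise the model answers from plain conversation.
    if let Some(ctx) = rag_context {
        prompt.push_str("CONTEXT:\n");
        prompt.push_str(ctx);
        prompt.push('\n');
    }
    for (role, text) in history {
        prompt.push_str(&format!("{role}: {text}\n"));
    }
    prompt
}

fn main() {
    let history = vec![("User".to_string(), "My SIM card isn't working".to_string())];
    let prompt = build_prompt("You are a support bot.", Some("SIM troubleshooting steps..."), &history);
    println!("{prompt}");
}
```

Keeping history as plain (role, text) pairs makes it easy to append each turn before the next LLM call.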

Advisor

The knowledge layer. The Advisor manages a directory of topic folders on disk and uses an LLM classifier to determine which topics are relevant to the current conversation.

Each topic folder can contain:

| File | Purpose |
| --- | --- |
| trigger.md | Describes when this topic should activate; the LLM reads these to decide relevance. |
| rag.md | The actual knowledge content injected into the conversation as context. |
| api.txt | (Planned) API definitions the Executor can call for dynamic data retrieval. |
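To make the trigger mechanism concrete, here is a toy stand-in for the relevance check. The real Advisor asks an LLM classifier to read each topic's trigger.md; this sketch instead matches a topic when any longer trigger word appears in the user message. The topic names and trigger texts are illustrative only.

```rust
use std::collections::BTreeMap;

// Toy stand-in for the Advisor's relevance check: a topic matches
// when any trigger word of 4+ characters appears in the message.
// The real Advisor delegates this decision to an LLM classifier.
fn match_topics(triggers: &BTreeMap<&str, &str>, message: &str) -> Vec<String> {
    let msg = message.to_lowercase();
    triggers
        .iter()
        .filter(|(_, trigger)| {
            trigger
                .to_lowercase()
                .split_whitespace()
                .any(|w| w.len() >= 4 && msg.contains(w))
        })
        .map(|(name, _)| name.to_string())
        .collect()
}

fn main() {
    let mut triggers = BTreeMap::new();
    triggers.insert("sim_issues", "SIM card problems, activation, not detected");
    triggers.insert("wlan", "router configuration, wifi setup");
    println!("{:?}", match_topics(&triggers, "My SIM card isn't working"));
}
```

A keyword match like this is brittle; the point of delegating to an LLM is that triggers can be written as natural-language descriptions rather than keyword lists.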

Executor

The action layer. A lightweight MCP-style server that lets the Orchestrator call external functions and APIs when the conversation requires live data or side effects. (Currently in development.)

🚀 Quick Start

Prerequisites

  • Rust (2024 edition) — install via rustup
  • An OpenAI API key (other providers coming soon)

1. Clone the repository

git clone https://github.com/Blize/acon.git

cd acon/examples/game_info_bot

or

cd acon/examples/phone_support_bot

2. Set your API key

export LLM_API_KEY="sk-..."

3. Run the example chat bot

cargo r

You'll get an interactive terminal session:

> My SIM card isn't working, what should I do?
🤖 I'd recommend trying these steps: First, remove and reinsert your SIM card...

> exit

Example Bot

This is the code from the game_info_bot example:

use acon::advisor::Advisor;
use acon::core::llm;
use acon::orchestrator::Orchestrator;
use log::info;
use std::io::{self, Write};

const BOT_INFO_PROMPT: &str = "You are a helpful League of Legends info bot. You only answer factual questions about League of Legends.";

#[tokio::main]
async fn main() {
    env_logger::init();

    let advisor = Advisor::new(".".to_string(), None);

    let mut orchestrator = Orchestrator::new(
        std::env::var("LLM_API_KEY").expect("LLM_API_KEY not set"),
        llm::Provider::OpenAI(llm::OpenAIModel::Gpt4o),
        advisor,
        Some(BOT_INFO_PROMPT.to_string()),
    );

    info!("Orchestrator initialized");

    loop {
        // Prompt
        print!("You: ");
        io::stdout().flush().expect("Failed to flush stdout");

        // Read input
        let mut input = String::new();
        io::stdin()
            .read_line(&mut input)
            .expect("Failed to read line");

        let input = input.trim();

        // Exit condition
        if input.eq_ignore_ascii_case("exit") {
            println!("Goodbye!");
            break;
        }

        info!("User input: {}", input);

        // Process input through the orchestrator
        let response = orchestrator.handle_customer_input(input).await;
        println!("Bot: {}", response);
    }
}

🧩 Adding Your Own Topics

Creating a new topic is as simple as adding a folder. For example, to add information about your company's jobs page:

1. Create a directory inside your advisor folder:

advisor/advisor_jobs/

2. Add a trigger.md — tell the LLM when to activate this topic:

This topic covers where your job page is, what it will show you and how to apply for jobs at the company.

3. Add a rag.md — provide the knowledge:

## Jobs

### Where can the page be found?
- www.yourcompany.com/jobs

### What will the page show you?
- All jobs available
- Information about the company
- Benefits and perks
- How to apply and where to send your application

That's it. The next time a user asks "How can I apply for a job?", the Advisor will match the topic and inject the knowledge into the conversation automatically.
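The three steps above can be run from a shell. The `advisor/advisor_jobs` path follows the example in this section; adjust it to wherever your Advisor's root directory lives.

```shell
# 1. Create the topic folder (path follows the example above)
mkdir -p advisor/advisor_jobs

# 2. Write the trigger the LLM uses to decide relevance
cat > advisor/advisor_jobs/trigger.md <<'EOF'
This topic covers where your job page is, what it will show you
and how to apply for jobs at the company.
EOF

# 3. Write the knowledge injected as context
cat > advisor/advisor_jobs/rag.md <<'EOF'
## Jobs

### Where can the page be found?
- www.yourcompany.com/jobs
EOF

ls advisor/advisor_jobs
```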


🔌 Supported LLM Providers

| Provider | Status |
| --- | --- |
| OpenAI | ✅ Supported |
| Anthropic | 🔜 Planned |
| Google | 🔜 Planned |

🗺️ Roadmap

  • Orchestrator with multi-turn conversation history
  • Advisor with trigger-based topic matching
  • RAG context injection from topic folders
  • OpenAI provider integration
  • Crawl links in rag.md for more information
  • Executor module — external API calling (api.txt)
  • Support for additional LLM providers

📄 License

This project is licensed under the GNU AGPLv3 license. See the LICENSE file for details.


Built with 🦀 and ❤️
