PromptBlueprint is an Unreal Engine plugin that enables seamless communication with Large Language Models (LLMs) directly from Blueprints and C++.
It allows you to send prompts to any LLM API endpoint (such as OpenRouter, OpenAI-compatible APIs, or custom servers) and receive responses asynchronously via callbacks.
This makes it ideal for building AI-driven gameplay systems, NPC dialogue, developer tools, procedural generation, and in-editor virtual assistants—without writing complex networking code.
- ✅ Blueprint & C++ Accessible: Communicate with LLMs directly from your game logic.
- ✅ Universal API Support: Send prompts to any LLM URL (OpenAI, OpenRouter, custom servers).
- ✅ Asynchronous Requests: Non-blocking HTTP requests keep your game running smoothly.
- ✅ Simple Callback System: Clear success and failure handling for every API call.
- ✅ JSON Field Extraction: Easily parse responses using ordered key paths (e.g., `choices[0].message.content`).
- ✅ Versatile Use: Ideal for AI chat, tool-calling, gameplay logic, and automation.
- ✅ Lightweight & Modular: Built on native Unreal Engine components for performance.
- AI-Powered NPC Dialogue Systems (dynamic, context-aware conversations)
- Developer Assistant Tools inside Unreal Editor (e.g., generating code snippets, summarizing documentation)
- Procedural Generation of story beats, quests, or dialogue trees at runtime
- QA Bots and Debugging Assistants for in-editor testing
- Runtime Prompt-Based AI Logic for emergent gameplay
- Clone or download this repository.
- Place the plugin inside your Unreal Engine project's `Plugins` directory: `<YourProjectName>/Plugins/PromptBlueprint/`
- Ensure the plugin is enabled in your Unreal Editor's Plugins window.
- If building from source, you may need to add `"PromptBlueprint"` to the `PublicDependencyModuleNames` in your project's `Build.cs` file, as in the sketch below.
The core functionality is exposed through a single, easy-to-use Blueprint node: Send LLM Request.
On any Blueprint graph (e.g., in a Widget, Actor, or Blueprint Function Library):
- Target URL: The API endpoint for your LLM (e.g., `https://api.openai.com/v1/chat/completions`).
- API Key: Your authorization token (often passed in the `Authorization` header).
- Model Name: The specific model you want to use (e.g., `gpt-4o`, `llama3-70b`).
- Prompt/Messages: The text input for the LLM (one way to build this payload in C++ is sketched after this list).
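If you are driving the request from C++, hand-escaping a JSON string quickly gets error-prone. Below is a hedged sketch of building an OpenAI-style chat body with the engine's standard `Json` module; `BuildChatPayload` is a hypothetical helper name, and the field layout follows the OpenAI chat-completions format:

```cpp
// Illustrative sketch: build an OpenAI-style chat payload without
// hand-escaping JSON. Requires the "Json" module as a dependency.
#include "Dom/JsonObject.h"
#include "Dom/JsonValue.h"
#include "Serialization/JsonSerializer.h"
#include "Serialization/JsonWriter.h"

FString BuildChatPayload(const FString& ModelName, const FString& UserPrompt)
{
    // {"role": "user", "content": "<UserPrompt>"}
    const TSharedRef<FJsonObject> Message = MakeShared<FJsonObject>();
    Message->SetStringField(TEXT("role"), TEXT("user"));
    Message->SetStringField(TEXT("content"), UserPrompt);

    TArray<TSharedPtr<FJsonValue>> Messages;
    Messages.Add(MakeShared<FJsonValueObject>(Message));

    // {"model": "<ModelName>", "messages": [ ... ]}
    const TSharedRef<FJsonObject> Root = MakeShared<FJsonObject>();
    Root->SetStringField(TEXT("model"), ModelName);
    Root->SetArrayField(TEXT("messages"), Messages);

    FString Payload;
    const TSharedRef<TJsonWriter<>> Writer = TJsonWriterFactory<>::Create(&Payload);
    FJsonSerializer::Serialize(Root, Writer);
    return Payload;
}
```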
The node executes asynchronously and will branch to one of two pins once the network request is complete:
| Pin | Description |
|---|---|
| On Success | Fires when a response is received. Provides the raw Response JSON and the extracted Content String. |
| On Failure | Fires on network errors, invalid URLs, or HTTP error codes. Provides the Error Message. |
The Send LLM Request node automatically extracts the useful response content.
- Extraction Path: Use a key path like `choices[0].message.content` (standard for OpenAI-compatible APIs) to retrieve the final generated text directly into a simple Content String variable. This avoids manual JSON parsing for the most common use case; the sketch below shows the traversal such a path replaces.
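To make concrete what the extraction path saves you, here is a hedged sketch of the equivalent manual traversal using the engine's `Json` module. It illustrates the work the plugin does for you and is not its actual implementation; `ExtractContentByHand` is a hypothetical helper name:

```cpp
// Illustrative only: the manual traversal that the path
// "choices[0].message.content" performs on your behalf.
#include "Dom/JsonObject.h"
#include "Dom/JsonValue.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

FString ExtractContentByHand(const FString& ResponseJson)
{
    TSharedPtr<FJsonObject> Root;
    const TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(ResponseJson);
    if (!FJsonSerializer::Deserialize(Reader, Root) || !Root.IsValid())
    {
        return FString(); // Malformed JSON
    }

    // "choices" -> take array element [0]
    const TArray<TSharedPtr<FJsonValue>>* Choices = nullptr;
    if (!Root->TryGetArrayField(TEXT("choices"), Choices) || Choices->Num() == 0)
    {
        return FString();
    }

    // ".message" -> nested object
    const TSharedPtr<FJsonObject> Choice = (*Choices)[0]->AsObject();
    const TSharedPtr<FJsonObject>* Message = nullptr;
    if (!Choice.IsValid() || !Choice->TryGetObjectField(TEXT("message"), Message))
    {
        return FString();
    }

    // ".content" -> the generated text
    FString Content;
    (*Message)->TryGetStringField(TEXT("content"), Content);
    return Content;
}
```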
For C++ users, the functionality is exposed via the `UPromptBlueprintFunctionLibrary` class.

```cpp
// Example C++ call
FString URL = TEXT("Your API URL");
FString Key = TEXT("Your API Key");
FString Payload = TEXT("{\"model\": \"gpt-3.5-turbo\", \"messages\": [{\"role\": \"user\", \"content\": \"What is Unreal Engine?\"}]}");
FString ContentPath = TEXT("choices[0].message.content");

// Function signature (simplified)
UPromptBlueprintFunctionLibrary::SendLLMRequest(
    URL,
    Key,
    Payload,
    ContentPath,
    // Success/Failure handling via delegates in C++
    ...
);
```
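The exact delegate types and parameter order are defined in the plugin's headers; check `UPromptBlueprintFunctionLibrary` for the real signatures. Purely as an illustration, and assuming UObject-bound delegates, a call site might look like the following. `FOnLLMSuccess`, `FOnLLMFailure`, and the handler names are hypothetical placeholders, not the plugin's confirmed API:

```cpp
// Hypothetical sketch: the delegate type names and parameters below are
// placeholders. Consult the plugin's headers for the real API.
void AMyNPC::AskLLM(const FString& URL, const FString& Key,
                    const FString& Payload, const FString& ContentPath)
{
    FOnLLMSuccess OnSuccess; // hypothetical delegate type
    OnSuccess.BindUObject(this, &AMyNPC::HandleLLMResponse);

    FOnLLMFailure OnFailure; // hypothetical delegate type
    OnFailure.BindUObject(this, &AMyNPC::HandleLLMError);

    UPromptBlueprintFunctionLibrary::SendLLMRequest(
        URL, Key, Payload, ContentPath, OnSuccess, OnFailure);
}

void AMyNPC::HandleLLMResponse(const FString& RawJson, const FString& Content)
{
    UE_LOG(LogTemp, Log, TEXT("LLM replied: %s"), *Content);
}

void AMyNPC::HandleLLMError(const FString& ErrorMessage)
{
    UE_LOG(LogTemp, Warning, TEXT("LLM request failed: %s"), *ErrorMessage);
}
```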