24 changes: 24 additions & 0 deletions .github/AGENTS.md
@@ -0,0 +1,24 @@
# Agent Instructions

## Package Management

This project uses **pnpm** exclusively for package management in the frontend (`invokeai/frontend/web/`).

- ✅ Use `pnpm` commands (e.g., `pnpm install`, `pnpm run`)
- ❌ Never use `npm` or `yarn` commands
- ❌ Never suggest creating or using `package-lock.json` or `yarn.lock`
- ✅ The lock file is `pnpm-lock.yaml`

Use the following pnpm commands for typical operations:

```sh
pnpm -C invokeai/frontend/web install
pnpm -C invokeai/frontend/web build
pnpm -C invokeai/frontend/web lint:tsc
pnpm -C invokeai/frontend/web lint:dpdm
pnpm -C invokeai/frontend/web lint:eslint
pnpm -C invokeai/frontend/web lint:prettier
```

## Project Structure

- Backend: Python in `invokeai/`
- Frontend: TypeScript/React in `invokeai/frontend/web/` (uses pnpm)
11 changes: 11 additions & 0 deletions .github/workflows/frontend-checks.yml
@@ -41,6 +41,17 @@ jobs:
    steps:
      - uses: actions/checkout@v4

      - name: Fail if package-lock.json is added/modified (pnpm only)
        shell: bash
        working-directory: .
        run: |
          set -euo pipefail
          git fetch --no-tags --prune --depth=1 origin "${{ github.base_ref }}"
          if git diff --name-only "origin/${{ github.base_ref }}...HEAD" | grep -E '(^|/)package-lock\.json$'; then
            echo "::error::package-lock.json was added or modified. This repo uses pnpm only."
            exit 1
          fi

      - name: check for changed frontend files
        if: ${{ inputs.always_run != true }}
        id: changed-files
20 changes: 20 additions & 0 deletions docs-old/features/gallery.md
@@ -34,6 +34,26 @@ The settings button opens a list of options.
Below these two buttons is the Search Boards text entry area, which you can use to search for boards by name.
Next to it is the Add Board (+) button, which lets you add new boards. To rename a board, click its name under the thumbnail and type a new one.

### Virtual Boards by Date

In addition to the regular user-created boards, the Gallery can show **virtual boards** that group your images automatically by their creation date. Virtual boards are not stored in the database — they are computed on the fly from existing image metadata, so enabling or disabling them never moves or modifies your images.

#### Enabling Virtual Boards

Open the boards settings popover (the gear icon next to the boards search field) and toggle **Show Virtual Boards**. A new collapsible **By Date** section then appears in the boards list, with one entry per day on which images were generated (e.g. `2026-03-18`).

Each virtual board entry shows:

- a cover thumbnail (the most recent image of that day)
- the number of generated **images** on that date
- the number of uploaded **assets** on that date

Selecting a virtual board filters the gallery to show only the images from that day. Search, category filters (Images / Assets), starred-first sorting, and sort direction all work the same way as on regular boards.

!!! note "Read-only"

    Virtual boards are a view over your existing images. You cannot rename, delete, or auto-assign to them, and images cannot be "moved into" a virtual board — they appear there automatically based on their creation date. To organize images permanently, use regular boards.
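
Under the hood, a virtual board is essentially a group-by over image creation timestamps. A minimal conceptual sketch (a hypothetical illustration, not InvokeAI's actual implementation):

```python
from collections import defaultdict
from datetime import datetime


def group_by_day(images: list[dict]) -> dict[str, list[dict]]:
    """Group image records into per-day 'virtual boards' by their created_at timestamp."""
    boards: dict[str, list[dict]] = defaultdict(list)
    for image in images:
        # One board per calendar day, e.g. "2026-03-18"
        day = datetime.fromisoformat(image["created_at"]).date().isoformat()
        boards[day].append(image)
    # Most recent image first within each day (serves as the cover thumbnail)
    for day in boards:
        boards[day].sort(key=lambda i: i["created_at"], reverse=True)
    return boards
```

Because the grouping is computed from metadata that already exists, toggling the feature on or off has no effect on the images themselves.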

### Board Thumbnail Menu

Each board has a context menu (ctrl+click / right-click).
50 changes: 50 additions & 0 deletions docs/features/prompt-tools.md
@@ -0,0 +1,50 @@
# LLM Prompt Tools

InvokeAI includes two built-in tools that use local language models to help you write better prompts. Both tools appear as small buttons in the top-right corner of the positive prompt area and are only visible when you have a compatible model installed.

## Expand Prompt

Expand Prompt takes your short prompt and expands it into a detailed, vivid description suitable for image generation.

**How to use:**

1. Type a brief prompt (e.g. "a cat in a garden")
2. Click the sparkle button in the prompt area
3. Select a Text LLM model from the dropdown
4. Click **Expand**
5. Your prompt is replaced with the expanded version

**Compatible models:** Any HuggingFace model with a `ForCausalLM` architecture. Recommended options:

| Model | Size | HuggingFace ID |
|-------|------|----------------|
| Qwen2.5 1.5B Instruct | ~3 GB | `Qwen/Qwen2.5-1.5B-Instruct` |
| Phi-3 Mini Instruct | ~7.5 GB | `microsoft/Phi-3-mini-4k-instruct` |
| TinyLlama Chat | ~2 GB | `TinyLlama/TinyLlama-1.1B-Chat-v1.0` |

Install by pasting the HuggingFace ID into the Model Manager. The model is automatically detected as a **Text LLM** type.

## Image to Prompt

Upload an image and generate a descriptive prompt from it using a vision-language model.

**How to use:**

1. Click the image button in the prompt area
2. Select a LLaVA OneVision model from the dropdown
3. Click **Upload Image** and select an image
4. Click **Generate Prompt**
5. The generated description is set as your prompt

**Compatible models:** LLaVA OneVision models (already supported by InvokeAI).

## Undo

Both tools overwrite your current prompt. You can undo this change:

- Press **Ctrl+Z** (or **Cmd+Z** on macOS) in the prompt textarea within 30 seconds
- The undo state is cleared when you start typing manually

## Workflow Node

A **Text LLM** node is also available in the workflow editor for use in automated pipelines. It accepts a prompt string and model selection as inputs and outputs the expanded text as a string.
154 changes: 154 additions & 0 deletions docs/nodes/creatingNodePack.md
@@ -0,0 +1,154 @@
# Creating a Node Pack for the Custom Node Manager

This guide explains how to structure your Git repository so it can be installed via InvokeAI's Custom Node Manager.

## Repository Structure

Your repository **is** the node pack. When a user installs it, the entire repo is cloned into the `nodes` directory.

### Minimum Required Structure

```
my-node-pack/
├── __init__.py # Required: imports your node classes
├── my_node.py # Your node implementation(s)
└── README.md # Recommended: describe your nodes
```

The `__init__.py` at the root is **mandatory**. Without it, the pack will not be loaded.

### Recommended Structure

```
my-node-pack/
├── __init__.py # Imports all node classes
├── requirements.txt # Python dependencies (user-installed)
├── README.md # Description, usage, examples
├── node_one.py # Node implementation
├── node_two.py # Node implementation
├── utils.py # Shared utilities
└── workflows/ # Optional: workflow files
├── example_workflow.json
└── advanced_workflow.json
```

## The `__init__.py` File

This file must import all invocation classes you want to register. Only classes imported here will be available in InvokeAI.

```python
from .node_one import MyFirstInvocation
from .node_two import MySecondInvocation
```

If you have nodes in subdirectories:

```python
from .nodes.image_tools import CropInvocation, ResizeInvocation
from .nodes.text_tools import ConcatInvocation
```

## Dependencies (`requirements.txt` or `pyproject.toml`)

If your nodes require additional Python packages, list them in a `requirements.txt` (or `pyproject.toml`) at the repository root:

```
numpy>=1.24
opencv-python>=4.8
```

The Custom Node Manager **does not** install these dependencies automatically — auto-installing into the running InvokeAI environment risks pulling in incompatible versions and breaking the application. After install, the UI shows the user a toast telling them that manual installation is required, and your README should document the exact install command (e.g. `pip install -r requirements.txt` from inside an activated InvokeAI environment).

**Important:** Avoid pinning versions too tightly. InvokeAI has its own dependencies, and version conflicts can cause issues. Use minimum version constraints (`>=`) where possible.

## Including Workflows

If your repository contains workflow `.json` files, they will be **automatically imported** into the user's workflow library during installation.

### Workflow Detection

The installer recursively scans your repository for `.json` files. A file is recognized as a workflow if it contains both `nodes` and `edges` keys at the top level.
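
That detection rule can be sketched as follows (a hypothetical illustration of the rule described above, not the installer's actual code):

```python
import json
from pathlib import Path


def find_workflows(repo_root: str) -> list[Path]:
    """Recursively find .json files that look like InvokeAI workflows."""
    workflows = []
    for path in Path(repo_root).rglob("*.json"):
        try:
            data = json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # not valid JSON; skip
        # A workflow must have both "nodes" and "edges" at the top level
        if isinstance(data, dict) and "nodes" in data and "edges" in data:
            workflows.append(path)
    return workflows
```

In practice this means any top-level JSON object lacking either key (configs, lockfiles, metadata) is left alone, so stray `.json` files in your repo are harmless.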

### Tagging

Imported workflows are automatically tagged with `node-pack:<your-repo-name>` so users can filter for them in the workflow library. When the node pack is uninstalled, these workflows are also removed.

### Workflow Format

Workflows should follow the standard InvokeAI workflow format:

```json
{
"name": "My Example Workflow",
"author": "Your Name",
"description": "Demonstrates how to use MyFirstInvocation",
"version": "1.0.0",
"contact": "",
"tags": "example, my-node-pack",
"notes": "",
"meta": {
"version": "3.0.0",
"category": "user"
},
"exposedFields": [],
"nodes": [...],
"edges": [...]
}
```

**Tip:** The easiest way to create a workflow file is to build the workflow in InvokeAI's workflow editor, then export it via **Save As** and copy the `.json` file into your repository.

## Node Implementation

Each node is a Python class decorated with `@invocation()`. Here's a minimal example:

```python
from invokeai.invocation_api import (
    BaseInvocation,
    BaseInvocationOutput,
    InputField,
    InvocationContext,
    OutputField,
    invocation,
    invocation_output,
)


@invocation_output("my_output")
class MyOutput(BaseInvocationOutput):
    result: str = OutputField(description="The result")


@invocation(
    "my_node",
    title="My Node",
    tags=["example", "custom"],
    category="custom",
    version="1.0.0",
)
class MyInvocation(BaseInvocation):
    """Does something useful."""

    input_text: str = InputField(default="", description="Input text")

    def invoke(self, context: InvocationContext) -> MyOutput:
        return MyOutput(result=f"Processed: {self.input_text}")
```

For full details on the invocation API, see the [Invocation API documentation](invocation-api.md).

## Best Practices

- **Use a descriptive repository name** — it becomes the pack name shown in the UI
- **Include a README.md** with description, screenshots, and usage instructions
- **Version your nodes** using semver in the `@invocation()` decorator
- **Don't include large binary files** in your repository (models, weights, etc.)
- **Test your nodes** by placing the repo in the `nodes` directory before publishing
- **Include example workflows** so users can get started quickly
- **Tag your GitHub repository** with `invokeai-node` for discoverability
- **Avoid name collisions** — choose unique invocation type strings (e.g. `my_pack_resize` instead of just `resize`)

## Testing Your Pack

Before publishing, verify your pack works with the Custom Node Manager:

1. Create a Git repository with your node pack
2. Push it to GitHub (or any Git host)
3. In InvokeAI, go to the Nodes tab and install it via the Git URL
4. Verify your nodes appear in the workflow editor
5. Verify any included workflows are imported
6. Test uninstalling — nodes and workflows should be removed
78 changes: 78 additions & 0 deletions docs/nodes/customNodeManager.md
@@ -0,0 +1,78 @@
# Custom Node Manager

The Custom Node Manager allows you to install, manage, and remove community node packs directly from the InvokeAI UI — no manual file copying required.

## Accessing the Node Manager

Click the **Nodes** tab (circuit icon) in the left sidebar, between Models and Queue.

## Installing a Node Pack

1. Navigate to the **Nodes** tab
2. On the right panel, select the **Git Repository URL** tab
3. Paste the Git URL of the node pack (e.g. `https://github.com/user/my-node-pack.git`)
4. Click **Install**

The installer will:

- Clone the repository into your `nodes` directory
- Load the nodes immediately — no restart needed
- Import any workflow `.json` files found in the repository into your workflow library (tagged with `node-pack:<name>` for easy filtering)

The install progress and results are shown in the **Install Log** at the bottom of the panel.

### Installing Python Dependencies

The installer does **not** automatically run `pip install` for `requirements.txt` or `pyproject.toml`. Auto-installing dependencies into the running InvokeAI environment can pull in incompatible package versions and break the application.

If a node pack ships a `requirements.txt` or `pyproject.toml`, you'll see a warning toast after installation. Install the dependencies yourself by following the instructions in the node pack's documentation (typically `pip install -r requirements.txt` from inside an activated InvokeAI environment, but check the pack's README first). After installing, click the **Reload** button so the new dependencies take effect.

### Security Warning

Custom nodes execute arbitrary Python code on your system. **Only install node packs from authors you trust.** Malicious nodes could harm your system or compromise your data.

## Managing Installed Nodes

The left panel shows all installed node packs with:

- **Pack name**
- **Number of nodes** provided
- **Individual node types** as badges
- **File path** on disk

### Reloading Nodes

Click the **Reload** button to re-scan the nodes directory. This picks up any node packs that were manually added to the directory without using the installer.

### Uninstalling a Node Pack

Click the **Uninstall** button on any node pack. This will:

- Remove the node pack directory
- Unregister the nodes from the system immediately
- Remove any workflows that were imported from the pack
- Update the workflow editor so the nodes are no longer available

No restart is required.

## Scan Folder Tab

The **Scan Folder** tab shows the location of your nodes directory. Node packs placed there manually (e.g. via `git clone`) are automatically detected at startup. Use the **Reload** button to detect newly added packs without restarting.

## Troubleshooting

### Node pack fails to install

- Verify the Git URL is correct and accessible
- Check that the repository contains an `__init__.py` file at the top level
- Review the Install Log for error details

### Nodes don't appear after install

- Click the **Reload** button
- Check that the node pack's `__init__.py` imports its node classes
- Check the server console for error messages

### Workflows show errors after uninstalling

If you have user-created workflows that reference nodes from an uninstalled pack, those workflows will show errors for the missing node types. Reinstall the pack or remove the affected nodes from the workflow.