From 71881eaa19249524b6ddcfb0ad551463ef4a0c8a Mon Sep 17 00:00:00 2001
From: James
Date: Wed, 29 Apr 2026 02:38:36 +0000
Subject: [PATCH] docs(guides): add HyperFrames MCP guide
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Adds a hardlinkable doc at /guides/mcp covering setup, available tools, prompting tips, and debugging for the HyperFrames MCP. Not added to docs.json navigation yet — accessible via direct URL only until the MCP launches.

Co-Authored-By: Claude Opus 4.7 (1M context)
---
 docs/guides/mcp.mdx | 309 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 309 insertions(+)
 create mode 100644 docs/guides/mcp.mdx

diff --git a/docs/guides/mcp.mdx b/docs/guides/mcp.mdx
new file mode 100644
index 000000000..8907086e4
--- /dev/null
+++ b/docs/guides/mcp.mdx
@@ -0,0 +1,309 @@

---
title: HyperFrames MCP
description: "Author, preview, and render HyperFrames videos directly inside Claude.ai and ChatGPT — no local install required."
---

The HyperFrames MCP is a hosted [Model Context Protocol](https://modelcontextprotocol.io) server that lets you create, edit, preview, and render HyperFrames video compositions from inside Claude.ai or ChatGPT.

> **In beta.** Features and pricing may change. Found a bug or have feedback? [File an issue on GitHub](https://github.com/heygen-com/hyperframes/issues).

## What you can do

- Create compositions from natural-language prompts
- Edit existing compositions conversationally — *"make the title 2x bigger"*, *"add hype-style captions"*
- Preview the result inline in the chat with a video player widget
- Render to `mp4`, `webm`, or `mov` — output URL streamed back to chat
- Revisit compositions you've created previously
- Check your render credits

The agent behind `compose` has 25+ HyperFrames-specific skills baked in — typography, color palettes, motion principles, GSAP effects, audio-reactive animation, captions, voice generation.
You don't have to specify any of this directly; describe the video you want and the agent picks the right tools.

> Looking for the open-source CLI? See the [Quickstart](/quickstart). The MCP is a hosted product for zero-install authoring inside an LLM chat. The CLI gives you full control of rendering and runtime; the MCP gives you instant authoring with cloud rendering.

## Setup

### 1. Get a HeyGen account

The MCP requires a HeyGen account for authentication and credits. Sign up at [heygen.com](https://heygen.com) if you don't have one.

### 2. Add the connector

**Claude.ai**

1. In Claude.ai web or desktop: **Settings → Connectors → Add custom connector**.
2. Paste:
   ```
   https://mcp.heygen.com/mcp/hyperframes
   ```
3. OAuth opens in a new window. Authorize the HyperFrames connector to access your HeyGen account.
4. Open a new Claude.ai chat. Try:
   > Make me a 10-second product intro for [your product] with bouncy captions and a high-energy soundtrack.

**ChatGPT**

1. In ChatGPT: **Settings → Apps & Connectors → Add MCP server**.
2. Paste:
   ```
   https://mcp.heygen.com/mcp/hyperframes
   ```
3. Authorize via OAuth.
4. Open a new chat. Same prompts work as in Claude.ai.

## Available tools

The MCP exposes six tools to the LLM. You don't call them directly — the model picks the right tool based on your message.

| Tool | What it does | Cost |
|---|---|---|
| `compose` | Create a new composition or edit an existing one | Author credits |
| `list_compositions` | List your previously created compositions | Free |
| `get_composition` | Open a specific composition with an inline player | Free |
| `render_video` | Submit a cloud render to mp4 / webm / mov | Render credits |
| `get_render_status` | Poll a long-running render job | Free |
| `get_credits` | Check your remaining credits and tier | Free |

### compose

Authors a new composition or applies an edit to an existing one.
The HyperFrames agent handles voice selection, captions, blocks, layout, transitions, color, and timing internally based on your natural-language prompt.

**Triggered by prompts like:**

- "Make a 30-second product intro about [topic]" → creates a fresh composition
- "Change the title font to a bold serif" → edits the most recent composition
- "Add a flash transition before the call-to-action" → applies a structured edit

**Returns:** A composition reference (id, title, thumbnail) plus an inline player widget. Progress notifications stream during the run so you can see what the agent is doing — *"Drafting outline...", "Selecting voice and style...", "Generating HTML...", "Rendering preview frame..."*

### list_compositions

Lists compositions you've previously created. Newest first, paginated.

**Triggered by prompts like:** *"show me my recent videos"*, *"what did I work on yesterday?"*

### get_composition

Fetches metadata for a single composition along with an inline player widget.

**Triggered by prompts like:** *"open that video again"*, *"show me the one I made about [topic]"*

### render_video

Submits a cloud render. Defaults to `mp4` at `30fps`.

**Format options:**

| Format | Codec | Use it for |
|---|---|---|
| `mp4` (default) | H.264 | Broadest compatibility, social media, web |
| `webm` | VP9 | Smaller files; supports alpha channel for transparent overlays |
| `mov` | ProRes | Near-lossless quality for editing pipelines |

**Frame rate options:** `24`, `30` (default), `60`.

**Returns:** Either the rendered video URL (if the render finishes within 25 seconds) or a `job_id` for polling. Either way, an inline render-progress widget shows live status.

**Triggered by prompts like:** *"render this"*, *"export to webm"*, *"render at 60fps for editing"*.

### get_render_status

Polls an in-progress render. Used internally by the model when `render_video` returns a `job_id` (long renders).
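
When a render outlives the 25-second window, the flow is: submit once, then poll `get_render_status` until the job resolves. A minimal sketch of that loop, assuming a generic `callTool` function and an illustrative response shape (the real field names and status values are not documented here):

```typescript
// Sketch only: `callTool` stands in for an MCP client's tool-call method,
// and the status values ("rendering" / "done" / "failed") are assumptions.
type RenderStatus = { status: "rendering" | "done" | "failed"; url?: string };

async function waitForRender(
  callTool: (name: string, args: Record<string, unknown>) => Promise<RenderStatus>,
  jobId: string,
  intervalMs = 2000,
  maxAttempts = 60,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await callTool("get_render_status", { job_id: jobId });
    if (res.status === "done" && res.url) return res.url; // render finished
    if (res.status === "failed") throw new Error("render failed");
    // Still rendering: wait before the next poll.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for render");
}
```

In practice the host model runs this loop for you; the sketch is just the shape of what happens between the `job_id` coming back and the video URL appearing in chat.
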

### get_credits

Returns your tier and remaining credits.

**Triggered by:** *"how many renders do I have left?"*, *"what's my plan?"*

## Prompting tips

### Be specific about what you want

The agent has lots of creative latitude — give it enough direction to use it well.

| Less effective | More effective |
|---|---|
| "make a video" | "make a 15-second TikTok hook about home composting with bouncy captions and a warm earthy palette" |
| "add captions" | "add hype-style captions in my brand color #FF6A00" |
| "make it shorter" | "trim to 10 seconds total — cut the third scene" |
| "more energetic" | "swap to a neon-electric palette and tighten all transitions to 200ms" |

### Iterate conversationally

Once a composition exists, the agent loads the current state and applies edits in place. Keep talking to it.

```
You: "20-second product intro for my app, dark theme, hype style"
Agent: [composition + player widget appears]

You: "make the logo bigger and add a pulse animation on the CTA"
Agent: [updated player widget]

You: "render to webm with alpha"
Agent: [render-progress widget, then player widget with download link]
```

### Reference your existing HeyGen assets

If you've uploaded logos, brand voices, fonts, or other assets to your HeyGen account, the agent can use them. Just say *"use my logo"* or *"use my Sarah brand voice."* The agent resolves the asset by name and recency.
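
Name-and-recency resolution can be pictured as a filter-then-sort over your asset library. A hypothetical sketch, where the `Asset` shape and the matching rule are illustrative assumptions, not the actual server-side resolver:

```typescript
// Illustrative only: the real resolver lives server-side in the MCP;
// this Asset shape and substring-match rule are assumptions.
interface Asset {
  name: string;
  kind: "logo" | "voice" | "font" | "image";
  uploadedAt: number; // epoch millis
}

// Pick the most recently uploaded asset whose name contains the query.
function resolveAsset(assets: Asset[], query: string): Asset | undefined {
  return assets
    .filter((a) => a.name.toLowerCase().includes(query.toLowerCase()))
    .sort((a, b) => b.uploadedAt - a.uploadedAt)[0];
}
```

The practical takeaway: if you have several similarly named assets (say, two "Sarah" voices), the agent will tend to pick the newest one, so use a more specific name when you want an older upload.
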

### Pick the right format up front

Mention the output format if you have a specific use case:

- *"render to mp4"* — default, social media
- *"render to webm with alpha"* — transparent overlay you'll composite later
- *"render to mov for After Effects"* — ProRes for editing

## Debugging

### Inspect the MCP with MCP Inspector

For developers building integrations or debugging tool responses, the MCP Inspector lets you see exactly what tools are exposed and what they return:

```bash
npx @modelcontextprotocol/inspector npx -y mcp-remote https://mcp.heygen.com/mcp/hyperframes
```

Complete the OAuth flow in the inspector's browser tab. Once authenticated, you can call each tool with custom parameters and see raw responses, including `tool_data` and `widget_data`.

### Watch progress notifications

The MCP emits `notifications/progress` events during long-running `compose` and `render_video` calls. The host (Claude.ai or ChatGPT) displays them inline:

```
You: "make me a 30-second product intro"
Agent: [calls compose]
  ↳ "Drafting outline..."           ← progress notification
  ↳ "Selecting voice and style..."  ← progress notification
  ↳ "Generating HTML..."            ← progress notification
  ↳ "Rendering preview frame..."    ← progress notification
  ↳ [composition + player widget]
Agent: "Here's your video — [player]"
```

If progress stops mid-flow, the run failed. The agent's next message should explain what went wrong.

### Common issues

**The connector won't connect or authorize.** Verify you're using the production URL: `https://mcp.heygen.com/mcp/hyperframes`. The dev URL (`mcp.dev.heygen.com`) only accepts dev accounts.

If OAuth completes but you see "not authorized" errors, your HeyGen account may not have access to the MCP — contact support or check your tier.

**A render seems stuck.** Renders normally finish in 10-90 seconds depending on length, fps, and format.

If `get_render_status` shows `status: rendering` for more than 5 minutes, the job is likely stuck. Try:

1. Start a new chat thread
2. Run `compose("regenerate this composition")` — the underlying composition may reference an asset that's failing to load
3. If the issue persists, file an issue at [github.com/heygen-com/hyperframes/issues](https://github.com/heygen-com/hyperframes/issues) including the `job_id` from `render_video`

**A composition can't be found.** The `composition_id` is owned by the HeyGen space (account) that created it. If you're signed into a different space, you can't access compositions created in another space. Run `list_compositions` to see what's available from your current account.

**The player widget won't load.** Usually a transient connection issue between the widget and `mcp.heygen.com`. Refresh the chat or call `get_composition` again.

If it persists, your composition may reference a media asset that failed to upload — recreate the composition with `compose("regenerate this")`.

**You're out of credits.** The MCP returns an error with an upgrade URL when your credits are exhausted. Visit [heygen.com/pricing](https://heygen.com/pricing) to upgrade your tier.

**`compose` times out.** `compose` runs can take 30+ seconds for complex prompts. If your client times out before the response comes back, the underlying run is likely still progressing — wait 30 seconds and run `list_compositions`. The composition is probably already created.

**An edit doesn't land.** The agent prefers structural decisions (palette, layout, motion) over fine-grained pixel positioning. If a specific edit doesn't land, try a more directive phrasing:

- Less effective: *"the title is a little off"*
- More effective: *"move the title 40px down and increase its weight to 800"*

For pixel-precise control, use the [open-source CLI](/quickstart) — the MCP is optimized for fast natural-language iteration.

### Reporting issues

For bugs or feature requests, file an issue at [github.com/heygen-com/hyperframes/issues](https://github.com/heygen-com/hyperframes/issues).
Include:

- The prompt you used
- The `composition_id` and `job_id` (if applicable) — these are visible in the tool response details
- A description of what you expected vs. what happened
- The host (Claude.ai web/desktop, ChatGPT, etc.)

## Limitations

- **Cloud-only rendering.** All renders run on HeyGen infrastructure. Use the [CLI](/quickstart) if you need local rendering.
- **Single-user.** Each composition is owned by one HeyGen account. No team sharing in v1.
- **No binary uploads from chat.** You can reference assets you've already uploaded to HeyGen via the web UI, but the MCP does not currently accept new file uploads through chat. Upload via [app.heygen.com](https://app.heygen.com) first, then reference the asset by name.
- **Aspect ratios:** `16:9`, `9:16`, `1:1`, `4:5`. Other ratios fall back to the closest match.
- **No fine-grained editing tools.** Edits go through the agent. For pixel-precise control, use the [CLI](/quickstart) and a coding agent like Claude Code or Cursor.
- **Widget rendering requires a host that supports MCP widgets.** Claude.ai web/desktop and ChatGPT (Apps SDK) support widgets today. Text-only MCP clients (Claude Code CLI, Cursor, Windsurf) will see a clickable preview URL instead — full text-mode support is on the roadmap.

## How this relates to the open-source framework

[HyperFrames itself is open source](https://github.com/heygen-com/hyperframes) — the HTML composition format, CLI, renderer, and player. You can use HyperFrames locally without the MCP.

The MCP is a HeyGen-hosted product that wraps:

- The HyperFrames composition agent (the LLM that authors compositions)
- HeyGen's cloud rendering pipeline
- HeyGen's voice / TTS / asset libraries
- OAuth, credits, and tier management

| You want… | Use… |
|---|---|
| Zero local install, fast natural-language authoring | The MCP |
| Pixel-precise control, custom rendering, self-hosting | The [CLI](/quickstart) |
| Both | Author with the MCP, then download and refine with the CLI (planned export feature) |

## Next steps

- Try HyperFrames locally with the [open-source CLI](/quickstart).
- Tips for getting the best results when working with AI agents.
- Browse 50+ ready-to-use blocks the agent draws from.
- Reference compositions you can clone.