diff --git a/docs/README.skills.md b/docs/README.skills.md
index c5d3fdc11..3e08573ea 100644
--- a/docs/README.skills.md
+++ b/docs/README.skills.md
@@ -165,6 +165,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| [nano-banana-pro-openrouter](../skills/nano-banana-pro-openrouter/SKILL.md) | Generate or edit images via OpenRouter with the Gemini 3 Pro Image model. Use for prompt-only image generation, image edits, and multi-image compositing; supports 1K/2K/4K output. | `assets/SYSTEM_TEMPLATE`
`scripts/generate_image.py` |
| [napkin](../skills/napkin/SKILL.md) | Visual whiteboard collaboration for Copilot CLI. Creates an interactive whiteboard that opens in your browser — draw, sketch, add sticky notes, then share everything back with Copilot. Copilot sees your drawings and text, and responds with analysis, suggestions, and ideas. | `assets/napkin.html`
`assets/step1-activate.svg`
`assets/step2-whiteboard.svg`
`assets/step3-draw.svg`
`assets/step4-share.svg`
`assets/step5-response.svg` |
| [next-intl-add-language](../skills/next-intl-add-language/SKILL.md) | Add new language to a Next.js + next-intl application | None |
+| [noizai-voice-workflow](../skills/noizai-voice-workflow/SKILL.md) | Build human-like text-to-speech workflows with style controls, local/cloud backends, and delivery-ready audio outputs. | None |
| [noob-mode](../skills/noob-mode/SKILL.md) | Plain-English translation layer for non-technical Copilot CLI users. Translates every approval prompt, error message, and technical output into clear, jargon-free English with color-coded risk indicators. | `references/examples.md`
`references/glossary.md` |
| [nuget-manager](../skills/nuget-manager/SKILL.md) | Manage NuGet packages in .NET projects/solutions. Use this skill when adding, removing, or updating NuGet package versions. It enforces using `dotnet` CLI for package management and provides strict procedures for direct file edits only when updating versions. | None |
| [openapi-to-application-code](../skills/openapi-to-application-code/SKILL.md) | Generate a complete, production-ready application from an OpenAPI specification | None |
diff --git a/skills/noizai-voice-workflow/SKILL.md b/skills/noizai-voice-workflow/SKILL.md
new file mode 100644
index 000000000..6650d6708
--- /dev/null
+++ b/skills/noizai-voice-workflow/SKILL.md
@@ -0,0 +1,39 @@
+---
+name: noizai-voice-workflow
+description: Build human-like text-to-speech workflows with style controls, local/cloud backends, and delivery-ready audio outputs.
+---
+
+# NoizAI Voice Workflow
+
+Use this skill when the user asks for practical text-to-speech (TTS) workflows whose output should sound natural and be ready for downstream delivery.
+
+## Source repository
+
+- https://github.com/NoizAI/skills
+
+## When to use
+
+- The user asks for more human-like TTS output
+- The user needs emotional tone, filler style, or pacing control
+- The user wants local-first or cloud-backed TTS fallback options
+- The user needs generated audio prepared for app messaging or broadcast workflows
+
+## Suggested flow
+
+1. Clarify target scenario and voice style.
+2. Choose backend mode (local for privacy, cloud for speed/features).
+3. Generate short samples to validate style before long renders.
+4. Render final audio and check format, clipping, and duration.
+5. Prepare output for downstream publishing or app delivery.
+
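Step 4 of the flow above (checking format, clipping, and duration) can be sketched with the Python standard library alone. `check_wav` is a hypothetical helper name, not part of any NoizAI tooling; it assumes the render is 16-bit PCM WAV:

```python
import array
import wave

def check_wav(path, peak_threshold=0.99):
    """Inspect a rendered WAV file: channel layout, duration, and clipping risk."""
    with wave.open(path, "rb") as wf:
        n_channels = wf.getnchannels()
        sampwidth = wf.getsampwidth()
        framerate = wf.getframerate()
        n_frames = wf.getnframes()
        raw = wf.readframes(n_frames)
    if sampwidth != 2:
        raise ValueError("expected 16-bit PCM")
    samples = array.array("h", raw)
    # Peak relative to full scale; values at or near 1.0 suggest clipping.
    peak = max(abs(s) for s in samples) / 32767 if samples else 0.0
    return {
        "channels": n_channels,
        "sample_rate": framerate,
        "duration_s": round(n_frames / framerate, 3),
        "peak": round(peak, 3),
        "clipped": peak >= peak_threshold,
    }
```

A report like this is enough to catch truncated renders (wrong duration) or over-driven voices (peak near full scale) before publishing.
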
+## Quick commands
+
+```bash
+npx skills add NoizAI/skills --list --full-depth
+npx skills add NoizAI/skills --full-depth --skill tts -y
+```
+
+## Notes
+
+- Keep language factual and avoid exaggerated claims.
+- If a backend is unavailable, provide a compatible fallback path.
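
The fallback note above can be sketched as a simple backend chain. This is an illustrative pattern, not an API from the NoizAI repository; each `synth` callable is assumed to take the text and return rendered audio bytes, raising on failure:

```python
def synthesize_with_fallback(text, backends):
    """Try each (name, synth) backend in order; return the first success.

    backends: list of (name, callable) pairs, e.g. cloud first, local last.
    Returns (backend_name, audio_bytes); raises RuntimeError if all fail.
    """
    errors = []
    for name, synth in backends:
        try:
            return name, synth(text)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all TTS backends failed: " + "; ".join(errors))
```

Ordering the list cloud-first gives speed and features when the network is up, while a local backend keeps the workflow usable offline.
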