skill: add tx402.ai — EU LLM inference via x402 micropayments #141
shanemort1982 wants to merge 1 commit into BlockRunAI:main
Conversation
tx402.ai is an x402 payment gateway for agent-native LLM inference: 20+ models, USDC on Base, OpenAI-compatible, zero data retention.

Adds skills/tx402ai/SKILL.md with:
- Quick start code using @x402/fetch
- Model list with pricing
- Model aliases
- Endpoint reference
- Payment flow documentation
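To make the "OpenAI-compatible" claim concrete, here is a minimal sketch of what a quick-start call could look like. The endpoint URL, the `wrapFetchWithPayment` wrapper name, and the `walletClient` setup are assumptions based on the PR description, not the actual SKILL.md contents; only the request-body shape (standard OpenAI chat-completions format) is shown executably.

```typescript
// Sketch only: the tx402.ai base URL below is an assumption from the PR text.
interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

interface ChatRequest {
  url: string;
  body: { model: string; messages: ChatMessage[] };
}

// Build an OpenAI-compatible /v1/chat/completions request payload.
function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    url: "https://tx402.ai/v1/chat/completions", // assumed base URL
    body: { model, messages: [{ role: "user", content: prompt }] },
  };
}

// In real use the request would be sent through a payment-aware fetch,
// e.g. one wrapped by @x402/fetch so HTTP 402 challenges are paid in USDC
// on Base (wrapper name below is an assumption):
//
//   const paidFetch = wrapFetchWithPayment(fetch, walletClient);
//   const res = await paidFetch(req.url, {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(req.body),
//   });

const req = buildChatRequest("deepseek/deepseek-v3.2", "Hello from an agent");
console.log(req.body.model);
```

Because the request body follows the OpenAI schema, existing OpenAI client code should need only a base-URL change plus the payment-aware fetch.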
📝 Walkthrough: A new documentation file was added describing the tx402.ai skill.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (2)
skills/tx402ai/SKILL.md (2)
66-71: Consider qualifying the performance claim.

Line 71 states "~2 seconds including payment verification" as a definitive round-trip time. This may vary significantly based on network conditions, blockchain congestion, model availability, and geographic location. Consider adding context like "typically ~2 seconds" or "as low as ~2 seconds" to set appropriate expectations.
📝 Suggested rewording
```diff
-4. Total round-trip: ~2 seconds including payment verification
+4. Total round-trip: typically ~2 seconds including payment verification (may vary based on network conditions)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/tx402ai/SKILL.md` around lines 66 - 71, The performance claim in the "How Payment Works" section stating "~2 seconds including payment verification" should be qualified; update the phrasing in SKILL.md (the "How Payment Works" paragraph/list) to indicate variability by using wording like "typically ~2 seconds" or "as low as ~2 seconds" and optionally add a short qualifier mentioning factors (network, blockchain congestion, model availability, geography) so readers understand this is an approximate best-case metric.
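The "payment verification" step the review is discussing is the x402 retry loop: the server answers HTTP 402, the client attaches a signed payment, and the request is retried. A minimal sketch of that loop is below; the `X-PAYMENT` header name and the shape of the signed payload are assumptions about the x402 protocol as used here, and a real client (e.g. @x402/fetch) would also parse the server's payment requirements and handle on-chain settlement.

```typescript
// Simplified fetch shape so the retry logic can be shown without a network.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ status: number; body: string }>;

// Call once; on HTTP 402, retry with a payment attached (assumed header name).
async function fetchWithX402(
  fetchImpl: FetchLike,
  url: string,
  signPayment: () => string, // produces the signed payment payload (assumed shape)
): Promise<{ status: number; body: string }> {
  const first = await fetchImpl(url);
  if (first.status !== 402) return first; // no payment required
  // Retry with payment; the server verifies it before serving the response.
  return fetchImpl(url, { headers: { "X-PAYMENT": signPayment() } });
}
```

Under this flow, the quoted "~2 seconds" would cover two round trips plus server-side verification, which is exactly why the review suggests hedging the number.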
30-42: Minor inconsistency in model naming format.

The Quick Start example (line 24) uses "deepseek/deepseek-v3.2" while the table shows "DeepSeek V3.2". Consider aligning the table's "Model" column to show the exact identifiers used in API calls (e.g., deepseek/deepseek-v3.2) for clarity, or add a "Model ID" column.

♻️ Suggested improvement: Add model identifier column
```diff
-| Model | Type | Est. Cost/Request |
-|-------|------|-------------------|
-| DeepSeek V3.2 | Chat | ~$0.0003 |
+| Model | Model ID | Type | Est. Cost/Request |
+|-------|----------|------|-------------------|
+| DeepSeek V3.2 | deepseek/deepseek-v3.2 | Chat | ~$0.0003 |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/tx402ai/SKILL.md` around lines 30 - 42, The "Available Models (20+)" table in SKILL.md uses display names (e.g., "DeepSeek V3.2") that don't match the Quick Start model identifier "deepseek/deepseek-v3.2"; update the table to include exact API model IDs (either replace the display names or add a new "Model ID" column) so entries like deepseek/deepseek-v3.2, qwen/qwen3-235b, and llama/llama-4-maverick match the identifiers used in the Quick Start example and the live model list.
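If the skill keeps display names in the table, a small lookup from display name to API model ID avoids the mismatch the review flags. The three IDs below are the ones named in the review; the display names paired with the latter two are hypothetical, back-formed from the IDs for illustration.

```typescript
// Display name → API model ID. Only deepseek/deepseek-v3.2, qwen/qwen3-235b,
// and llama/llama-4-maverick come from the review; display-name strings for
// the last two are hypothetical.
const MODEL_IDS: Record<string, string> = {
  "DeepSeek V3.2": "deepseek/deepseek-v3.2",
  "Qwen3 235B": "qwen/qwen3-235b",
  "Llama 4 Maverick": "llama/llama-4-maverick",
};

// Resolve a table display name to the identifier used in API calls.
function resolveModelId(displayName: string): string {
  const id = MODEL_IDS[displayName];
  if (!id) throw new Error(`Unknown model: ${displayName}`);
  return id;
}
```

Either fix works: adding a "Model ID" column keeps the table self-documenting, while a lookup like this belongs in client code rather than docs.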
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1c2a9d51-7d55-405d-a3d1-2cfba2734eb4
📒 Files selected for processing (1)
skills/tx402ai/SKILL.md
Add tx402.ai Skill
Adds a skill for tx402.ai — an x402 payment gateway for agent-native EU LLM inference.
Why this is useful for ClawRouter users
tx402.ai provides an alternative x402-compatible LLM endpoint with:
- @x402/fetch, same wallet, USDC on Base
- OpenAI-compatible /v1/chat/completions
- Agent discovery: llms.txt, ai-plugin.json, .well-known/x402, OpenAPI 3.1
- Live & Verified
Files Added
- skills/tx402ai/SKILL.md — skill definition with quick start, model list, pricing, aliases, and endpoint reference

Summary by CodeRabbit
Added documentation for the tx402.ai Agent-Native LLM Gateway service, including TypeScript quick-start examples, available models with per-request cost estimates, supported endpoints for chat, completions, embeddings, and models, OpenAI-compatible API, streaming support, and USDC payment flow on Base without requiring API keys or accounts.
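The summary mentions streaming support. For an OpenAI-compatible API, streaming conventionally arrives as server-sent events with `data:` lines carrying delta chunks; the sketch below parses that format. Whether tx402.ai emits exactly this chunk shape is an assumption — it mirrors the common OpenAI streaming format, not anything stated in the PR.

```typescript
// Extract incremental text from an OpenAI-style SSE stream body.
// Chunk shape ({"choices":[{"delta":{"content":...}}]}) is assumed.
function extractDeltas(sse: string): string[] {
  const out: string[] = [];
  for (const line of sse.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(payload)?.choices?.[0]?.delta?.content;
    if (typeof delta === "string") out.push(delta);
  }
  return out;
}
```

In a real client this would run incrementally over a ReadableStream rather than a complete string, but the per-line parsing is the same.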