
skill: add tx402.ai — EU LLM inference via x402 micropayments#141

Open
shanemort1982 wants to merge 1 commit into BlockRunAI:main from shanemort1982:add-tx402ai-skill

Conversation

@shanemort1982 shanemort1982 commented Apr 2, 2026

Add tx402.ai Skill

Adds a skill for tx402.ai — an x402 payment gateway for agent-native EU LLM inference.

Why this is useful for ClawRouter users

tx402.ai provides an alternative x402-compatible LLM endpoint with:

  • 20+ EU-hosted models — DeepSeek V3/R1, Qwen3-235B, Llama 4 Maverick, GLM-5, Mixtral, GPT-OSS
  • Same x402 protocol — works with @x402/fetch, same wallet, USDC on Base
  • OpenAI-compatible API — drop-in at /v1/chat/completions
  • EU-sovereign — GDPR-compliant, zero data retention, EU infrastructure
  • Dynamic pricing — per-model, auto-refreshed from Tensorix every 6h
  • Agent discovery — llms.txt, ai-plugin.json, .well-known/x402, OpenAPI 3.1
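Because the gateway is OpenAI-compatible, an agent can target it with an ordinary chat-completions payload. A minimal sketch of what such a request might look like — the base URL `https://tx402.ai` is inferred from the service name, the model ID comes from the PR's own review thread, and the payment-wrapper call mentioned in the comment is an assumption about the @x402/fetch package, not a verified API:

```typescript
// Sketch: build an OpenAI-compatible chat request for the tx402.ai gateway.
// The base URL and the wrapFetchWithPayment name are assumptions; the
// /v1/chat/completions path and model ID are taken from the PR text.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "https://tx402.ai/v1/chat/completions",
    init: {
      method: "POST" as const,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// In real use, the fetch call would first be wrapped with the x402 payment
// handler, e.g. const payFetch = wrapFetchWithPayment(fetch, wallet), then
// payFetch(req.url, req.init) — wrapper name assumed, not verified.
const req = buildChatRequest("deepseek/deepseek-v3.2", [
  { role: "user", content: "Summarize the x402 protocol in one line." },
]);
console.log(req.url);
```

The point of the sketch is that no API key appears anywhere in the request: authentication happens entirely through the payment layer.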

Live & Verified

Files Added

  • skills/tx402ai/SKILL.md — skill definition with quick start, model list, pricing, aliases, and endpoint reference

Summary by CodeRabbit

  • Documentation
    • Added comprehensive guide for the tx402.ai Agent-Native LLM Gateway service, including TypeScript quick-start examples, available models with per-request cost estimates, supported endpoints for chat, completions, embeddings, and models, OpenAI-compatible API, streaming support, and USDC payment flow on Base without requiring API keys or accounts.

tx402.ai is an x402 payment gateway for agent-native LLM inference.
20+ models, USDC on Base, OpenAI-compatible, zero data retention.

Adds skills/tx402ai/SKILL.md with:
- Quick start code using @x402/fetch
- Model list with pricing
- Model aliases
- Endpoint reference
- Payment flow documentation

coderabbitai bot commented Apr 2, 2026

📝 Walkthrough


A new documentation file was added describing tx402.ai, an x402-based EU-sovereign LLM gateway service that enables USDC micropayments on Base to authenticate inference requests without API keys or accounts.
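The "without API keys or accounts" flow can be sketched against a mock transport: the client makes an unauthenticated request, receives HTTP 402 with quoted payment requirements, signs a payment, and retries with a payment header. The `X-PAYMENT` header name and the 402 body shape below are illustrative assumptions for the sketch, not the exact x402 wire format:

```typescript
// Illustrative x402-style retry loop against a mock server.
// Header name and 402 payload shape are assumptions, not the real spec.

type Res = { status: number; body: unknown };
type FetchLike = (url: string, init?: { headers?: Record<string, string> }) => Promise<Res>;

async function fetchWithPayment(
  fetchImpl: FetchLike,
  url: string,
  signPayment: (requirements: unknown) => string,
): Promise<Res> {
  const first = await fetchImpl(url);
  if (first.status !== 402) return first;           // no payment demanded
  const proof = signPayment(first.body);            // sign the quoted requirements
  return fetchImpl(url, { headers: { "X-PAYMENT": proof } }); // retry with proof
}

// Mock server: demands payment on bare requests, serves once proof is attached.
const mockServer: FetchLike = async (_url, init) => {
  if (init?.headers?.["X-PAYMENT"] === "signed:USDC-on-Base") {
    return { status: 200, body: { choices: [{ message: { content: "ok" } }] } };
  }
  return { status: 402, body: { asset: "USDC", network: "base", amount: "0.0003" } };
};
```

Calling `fetchWithPayment(mockServer, url, () => "signed:USDC-on-Base")` resolves with status 200 on the second round trip, which is the two-request handshake the walkthrough describes.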

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **Documentation - tx402.ai Service**<br>`skills/tx402ai/SKILL.md` | New documentation introducing the Agent-Native LLM Gateway service with TypeScript integration examples, supported models and pricing information, API endpoint specifications (chat, completions, embeddings, models), and payment flow details using x402-based USDC micropayments on Base. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'skill: add tx402.ai — EU LLM inference via x402 micropayments' is directly related to the main change: adding a new skill documentation file for tx402.ai, an EU LLM inference service using x402 micropayments. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files; docstring coverage check skipped. |



Warning

⚠️ This pull request might be slop. It has been flagged by CodeRabbit slop detection and should be reviewed carefully.

@shanemort1982 shanemort1982 marked this pull request as ready for review April 4, 2026 17:53

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
skills/tx402ai/SKILL.md (2)

66-71: Consider qualifying the performance claim.

Line 71 states "~2 seconds including payment verification" as a definitive round-trip time. This may vary significantly based on network conditions, blockchain congestion, model availability, and geographic location. Consider adding context like "typically ~2 seconds" or "as low as ~2 seconds" to set appropriate expectations.

📝 Suggested rewording
-4. Total round-trip: ~2 seconds including payment verification
+4. Total round-trip: typically ~2 seconds including payment verification (may vary based on network conditions)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/tx402ai/SKILL.md` around lines 66 - 71, The performance claim in the
"How Payment Works" section stating "~2 seconds including payment verification"
should be qualified; update the phrasing in SKILL.md (the "How Payment Works"
paragraph/list) to indicate variability by using wording like "typically ~2
seconds" or "as low as ~2 seconds" and optionally add a short qualifier
mentioning factors (network, blockchain congestion, model availability,
geography) so readers understand this is an approximate best-case metric.

30-42: Minor inconsistency in model naming format.

The Quick Start example (line 24) uses "deepseek/deepseek-v3.2" while the table shows "DeepSeek V3.2". Consider aligning the table's "Model" column to show the exact identifiers used in API calls (e.g., deepseek/deepseek-v3.2) for clarity, or add a "Model ID" column.

♻️ Suggested improvement: Add model identifier column
-| Model | Type | Est. Cost/Request |
-|-------|------|-------------------|
-| DeepSeek V3.2 | Chat | ~$0.0003 |
+| Model | Model ID | Type | Est. Cost/Request |
+|-------|----------|------|-------------------|
+| DeepSeek V3.2 | deepseek/deepseek-v3.2 | Chat | ~$0.0003 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/tx402ai/SKILL.md` around lines 30 - 42, The "Available Models (20+)"
table in SKILL.md uses display names (e.g., "DeepSeek V3.2") that don't match
the Quick Start model identifier "deepseek/deepseek-v3.2"; update the table to
include exact API model IDs (either replace the display names or add a new
"Model ID" column) so entries like deepseek/deepseek-v3.2, qwen/qwen3-235b, and
llama/llama-4-maverick match the identifiers used in the Quick Start example and
the live model list.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1c2a9d51-7d55-405d-a3d1-2cfba2734eb4

📥 Commits

Reviewing files that changed from the base of the PR and between 7a7c282 and 5609e19.

📒 Files selected for processing (1)
  • skills/tx402ai/SKILL.md

