
fix(core): Improve Vercel AI SDK instrumentation attributes#19717

Open
RulaKhaled wants to merge 7 commits into develop from vercelai-issues

Conversation


@RulaKhaled RulaKhaled commented Mar 9, 2026

This PR introduces new attributes and fixes for the Vercel AI SDK instrumentation.

Closes #19574

linear-code bot commented Mar 9, 2026

github-actions bot commented Mar 9, 2026

size-limit report 📦

⚠️ Warning: The base artifact is not the latest one because the latest workflow run has not finished yet, which may skew these results. Re-run all tests for up-to-date numbers.

Path Size % Change Change
@sentry/browser 25.64 kB +0.05% +12 B 🔺
@sentry/browser - with treeshaking flags 24.14 kB +0.03% +7 B 🔺
@sentry/browser (incl. Tracing) 42.44 kB +0.02% +6 B 🔺
@sentry/browser (incl. Tracing, Profiling) 47.1 kB +0.02% +8 B 🔺
@sentry/browser (incl. Tracing, Replay) 81.26 kB +0.02% +9 B 🔺
@sentry/browser (incl. Tracing, Replay) - with treeshaking flags 70.88 kB +0.02% +8 B 🔺
@sentry/browser (incl. Tracing, Replay with Canvas) 85.95 kB +0.02% +9 B 🔺
@sentry/browser (incl. Tracing, Replay, Feedback) 98.21 kB +0.01% +7 B 🔺
@sentry/browser (incl. Feedback) 42.44 kB +0.02% +7 B 🔺
@sentry/browser (incl. sendFeedback) 30.31 kB +0.04% +11 B 🔺
@sentry/browser (incl. FeedbackAsync) 35.36 kB +0.04% +11 B 🔺
@sentry/browser (incl. Metrics) 26.8 kB +0.04% +9 B 🔺
@sentry/browser (incl. Logs) 26.95 kB +0.03% +8 B 🔺
@sentry/browser (incl. Metrics & Logs) 27.62 kB +0.04% +9 B 🔺
@sentry/react 27.39 kB +0.04% +9 B 🔺
@sentry/react (incl. Tracing) 44.78 kB +0.02% +8 B 🔺
@sentry/vue 30.09 kB +0.04% +10 B 🔺
@sentry/vue (incl. Tracing) 44.31 kB +0.03% +9 B 🔺
@sentry/svelte 25.66 kB +0.04% +9 B 🔺
CDN Bundle 28.18 kB +0.04% +9 B 🔺
CDN Bundle (incl. Tracing) 43.27 kB +0.03% +11 B 🔺
CDN Bundle (incl. Logs, Metrics) 29.02 kB +0.04% +9 B 🔺
CDN Bundle (incl. Tracing, Logs, Metrics) 44.11 kB +0.03% +10 B 🔺
CDN Bundle (incl. Replay, Logs, Metrics) 68.1 kB +0.02% +8 B 🔺
CDN Bundle (incl. Tracing, Replay) 80.15 kB +0.02% +12 B 🔺
CDN Bundle (incl. Tracing, Replay, Logs, Metrics) 81.01 kB +0.02% +10 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback) 85.66 kB +0.02% +9 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) 86.54 kB +0.02% +10 B 🔺
CDN Bundle - uncompressed 82.38 kB +0.04% +26 B 🔺
CDN Bundle (incl. Tracing) - uncompressed 128.09 kB +0.03% +26 B 🔺
CDN Bundle (incl. Logs, Metrics) - uncompressed 85.21 kB +0.04% +26 B 🔺
CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed 130.93 kB +0.02% +26 B 🔺
CDN Bundle (incl. Replay, Logs, Metrics) - uncompressed 208.88 kB +0.02% +26 B 🔺
CDN Bundle (incl. Tracing, Replay) - uncompressed 244.98 kB +0.02% +26 B 🔺
CDN Bundle (incl. Tracing, Replay, Logs, Metrics) - uncompressed 247.8 kB +0.02% +26 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed 257.89 kB +0.02% +26 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) - uncompressed 260.7 kB +0.01% +26 B 🔺
@sentry/nextjs (client) 47.19 kB +0.02% +9 B 🔺
@sentry/sveltekit (client) 42.9 kB +0.02% +8 B 🔺
@sentry/node-core 52.27 kB +0.07% +34 B 🔺
@sentry/node 175.14 kB +0.25% +431 B 🔺
@sentry/node - without tracing 97.44 kB +0.06% +53 B 🔺
@sentry/aws-serverless 113.24 kB +0.05% +48 B 🔺


github-actions bot commented Mar 9, 2026

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
⚠️ Warning: The base artifact is not the latest one because the latest workflow run has not finished yet, which may skew these results. Re-run all tests for up-to-date numbers.

Scenario Requests/s % of Baseline Prev. Requests/s Change %
GET Baseline 10,701 - 9,336 +15%
GET With Sentry 1,940 18% 1,774 +9%
GET With Sentry (error only) 7,439 70% 6,276 +19%
POST Baseline 1,279 - 1,263 +1%
POST With Sentry 602 47% 619 -3%
POST With Sentry (error only) 1,125 88% 1,108 +2%
MYSQL Baseline 3,460 - 3,323 +4%
MYSQL With Sentry 453 13% 509 -11%
MYSQL With Sentry (error only) 2,930 85% 2,723 +8%


@RulaKhaled RulaKhaled changed the title from "fix(core): Resolve" to "fix(core): Add output messages, tool description attributes, and fix media type stripping" on Mar 10, 2026
@RulaKhaled RulaKhaled changed the title from "fix(core): Add output messages, tool description attributes, and fix media type stripping" to "fix(core): Improve Vercel AI SDK instrumentation attributes" on Mar 10, 2026
@RulaKhaled RulaKhaled marked this pull request as ready for review March 10, 2026 11:21
Comment on lines +135 to +144
function hasVercelImageData(part: NonNullable<unknown>): part is { type: 'image'; image: string; mimeType: string } {
return (
'type' in part &&
part.type === 'image' &&
'image' in part &&
typeof part.image === 'string' &&
'mimeType' in part &&
typeof part.mimeType === 'string'
);
}
Bug: The hasVercelImageData function incorrectly identifies HTTP/HTTPS URLs as media to be stripped, causing telemetry data loss when the URL is replaced with a placeholder.
Severity: MEDIUM

Suggested Fix

Update the hasVercelImageData function to verify that the image string is a data URL (e.g., by checking if it startsWith('data:')) or a base64 string, rather than just checking if its type is string. This will prevent it from incorrectly redacting standard HTTP/HTTPS URLs.
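A rough sketch of that suggestion, reusing the guard's shape from the snippet above; the bare-base64 regex is an illustrative heuristic, not the merged fix:

```typescript
// Hedged sketch: only classify inline binary payloads (data URLs or bare
// base64 strings) as strippable media; http(s) URL references stay intact.
function hasVercelImageData(part: NonNullable<unknown>): part is { type: 'image'; image: string; mimeType: string } {
  return (
    'type' in part &&
    part.type === 'image' &&
    'image' in part &&
    typeof part.image === 'string' &&
    // Data URL, or a bare base64 string (heuristic); plain URLs fail both checks.
    (part.image.startsWith('data:') || /^[A-Za-z0-9+/]+={0,2}$/.test(part.image)) &&
    'mimeType' in part &&
    typeof part.mimeType === 'string'
  );
}

// A data URL is flagged for stripping; an https URL is preserved.
console.log(hasVercelImageData({ type: 'image', image: 'data:image/png;base64,iVBORw0KGgo=', mimeType: 'image/png' })); // true
console.log(hasVercelImageData({ type: 'image', image: 'https://example.com/cat.png', mimeType: 'image/png' })); // false
```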

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: packages/core/src/tracing/ai/mediaStripping.ts#L135-L144

Potential issue: The function `hasVercelImageData` checks if the `image` property is a
string, but does not differentiate between a URL and a base64-encoded data string. This
causes the media stripping logic in `stripInlineMediaFromSingleMessage` to incorrectly
redact HTTP/HTTPS URLs by replacing them with `[Blob substitute]`. This behavior leads
to the loss of valuable telemetry data, as the URL reference to the image is removed,
preventing users from seeing which images were part of the AI operation. The intended
behavior is to only strip binary data like base64 strings or data URLs, while preserving
standard URLs.


Comment on lines +316 to +319
// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete attributes[AI_RESPONSE_TEXT_ATTRIBUTE];
// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete attributes[AI_RESPONSE_TOOL_CALLS_ATTRIBUTE];

Bug: Source attributes AI_RESPONSE_TEXT_ATTRIBUTE and AI_RESPONSE_TOOL_CALLS_ATTRIBUTE are deleted even if buildOutputMessages fails to process them, leading to data loss in telemetry.
Severity: MEDIUM

Suggested Fix

The deletion of AI_RESPONSE_TEXT_ATTRIBUTE and AI_RESPONSE_TOOL_CALLS_ATTRIBUTE should be conditional. Only delete these attributes if the buildOutputMessages function successfully generates and sets the GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE.
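A hedged sketch of that conditional shape; the constant values and the `buildOutputMessages` signature are inferred for illustration, not copied from the PR:

```typescript
// Illustrative attribute keys (assumed, not taken from the SDK source).
const AI_RESPONSE_TEXT_ATTRIBUTE = 'ai.response.text';
const AI_RESPONSE_TOOL_CALLS_ATTRIBUTE = 'ai.response.toolCalls';
const GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE = 'gen_ai.output.messages';

// Sketch: assumes buildOutputMessages returns the serialized messages, or
// undefined when it cannot process its inputs (signature inferred).
function migrateOutputAttributes(
  attributes: Record<string, unknown>,
  buildOutputMessages: (text: unknown, toolCalls: unknown) => string | undefined,
): void {
  const outputMessages = buildOutputMessages(
    attributes[AI_RESPONSE_TEXT_ATTRIBUTE],
    attributes[AI_RESPONSE_TOOL_CALLS_ATTRIBUTE],
  );
  // Only drop the source attributes once the new attribute is actually set,
  // so a failed conversion cannot silently lose telemetry.
  if (outputMessages !== undefined) {
    attributes[GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE] = outputMessages;
    // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
    delete attributes[AI_RESPONSE_TEXT_ATTRIBUTE];
    // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
    delete attributes[AI_RESPONSE_TOOL_CALLS_ATTRIBUTE];
  }
}
```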

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: packages/core/src/tracing/vercel-ai/index.ts#L316-L319

Potential issue: In `processEndedVercelAiSpan`, the attributes
`AI_RESPONSE_TEXT_ATTRIBUTE` and `AI_RESPONSE_TOOL_CALLS_ATTRIBUTE` are unconditionally
deleted after `buildOutputMessages` is called. However, `buildOutputMessages` may not
generate any output if it receives an empty string for `responseText` and invalid JSON
for `responseToolCalls`. In this scenario, the source attributes are deleted without
being captured in the `gen_ai.output.messages` attribute, resulting in silent and
permanent loss of telemetry data.


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: V6 tests missing new output messages attribute assertions
    • Added explicit GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE assertions (and import) across the v6 span expectations so gen_ai.output.messages is now validated for text and tool-call outputs.


Or push these changes by commenting:

@cursor push 8e0d6cceb7
Preview (8e0d6cceb7)
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
@@ -4,6 +4,7 @@
 import {
   GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
   GEN_AI_OPERATION_NAME_ATTRIBUTE,
+  GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE,
   GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
   GEN_AI_REQUEST_MODEL_ATTRIBUTE,
   GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
@@ -97,6 +98,8 @@
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -129,6 +132,8 @@
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.timestamp': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -231,6 +236,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"Where is the first span?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the first span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"First span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -257,6 +264,8 @@
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]:
             '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"First span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.model': 'mock-model-id',
@@ -289,6 +298,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"Where is the second span?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -324,6 +335,8 @@
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.timestamp': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -346,6 +359,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"What is the weather in San Francisco?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in San Francisco?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"tool_call","id":"call-1","name":"getWeather","arguments":"{\\"location\\":\\"San Francisco\\"}"}],"finish_reason":"tool-calls"}]',
           'vercel.ai.response.finishReason': 'tool-calls',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -371,6 +386,8 @@
           'vercel.ai.pipeline.name': 'generateText.doGenerate',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"tool_call","id":"call-1","name":"getWeather","arguments":"{\\"location\\":\\"San Francisco\\"}"}],"finish_reason":"tool-calls"}]',
           'vercel.ai.prompt.toolChoice': expect.any(String),
           [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
           'vercel.ai.response.finishReason': 'tool-calls',

'vercel.ai.prompt': '[{"role":"user","content":"Where is the second span?"}]',
'vercel.ai.request.headers.user-agent': expect.any(String),
'vercel.ai.response.finishReason': 'stop',
[GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: expect.any(String),

V6 tests missing new output messages attribute assertions

Medium Severity

The v6 tests remove GEN_AI_RESPONSE_TEXT_ATTRIBUTE and GEN_AI_RESPONSE_TOOL_CALLS_ATTRIBUTE assertions but never add corresponding GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE assertions (it's not even imported). The v4 and v5 tests properly add the new attribute assertions. Since v6 tests use expect.objectContaining throughout, the new gen_ai.output.messages behavior is completely unverified for v6, even though the v6 mock model does emit ai.response.text. This violates the rule to check that tests actually test newly added behavior.

Additional Locations (1)


Triggered by project rule: PR Review Guidelines for Cursor Bot


Development

Successfully merging this pull request may close these issues.

Fix Vercel AI Node.js tests
