[APPSEC-61649] Add Sensitive Data Scanning to AI Guard docs#35122
Open
Conversation
Contributor
Add Sensitive Data Protection as a third capability in the AI Guard overview and document the new sensitive data scanning feature in the onboarding guide, including how to enable it, what it scans, and the sds_findings field in the API response. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
a09e8c7 to 49119a0
cswatt approved these changes (Mar 9, 2026)
emmlejeail reviewed (Mar 10, 2026)
{{< /site-region >}}
- AI Guard helps secure your AI apps and agents in real time against prompt injection, jailbreaking, tool misuse, and sensitive data exfiltration attacks. This page describes how to set it up so you can keep your data secure against these AI-based threats.
+ AI Guard helps secure your AI apps and agents in real time against prompt injection, jailbreaking, tool misuse, sensitive data exfiltration attacks, and exposure of sensitive data such as PII and secrets. This page describes how to set it up so you can keep your data secure against these AI-based threats.
Contributor
Right now I think it's a little bit of a stretch to say we protect against exposure of PII since we don't offer redaction capabilities. Let's reframe around detection for now?
Once we add the redaction for DASH we will be able to claim the protection aspect.
emmlejeail
approved these changes
Mar 10, 2026
smola
reviewed
Mar 10, 2026
+ AI Guard can detect personally identifiable information (PII) such as email addresses, phone numbers, and SSNs, as well as secrets such as API keys and tokens, in LLM conversations. To enable sensitive data scanning, go to **AI Guard** > **Settings** > [**Sensitive Data Scanning**][22] for your services.
+ When enabled, AI Guard scans the last message in each evaluation call, including user prompts, assistant responses, tool call arguments, and tool call results. Findings are returned in the evaluation response as an `sds_findings` array, where each finding includes the rule name, category (`pii` or `secrets`), matched text, and its location within the message. Findings also appear on APM traces for visibility. Sensitive data scanning is detection-only — findings do not independently trigger blocking.
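To make the response shape concrete, here is a minimal Python sketch of consuming such a response. Only the `sds_findings` array and its documented contents (rule name, category `pii`/`secrets`, matched text, location) come from the documentation text above; the exact key names (`rule`, `match`, `start`, `end`) and the sample values are illustrative assumptions, not the actual API schema.

```python
# Hypothetical shape of an AI Guard evaluation response that includes
# sensitive-data findings. Key names other than `sds_findings` are
# illustrative assumptions, not the real API schema.
response = {
    "action": "ALLOW",
    "reason": "No policy violation detected",
    "sds_findings": [
        {"rule": "email_address", "category": "pii",
         "match": "alice@example.com", "start": 18, "end": 35},
        {"rule": "aws_access_key", "category": "secrets",
         "match": "AKIA................", "start": 80, "end": 100},
    ],
}

def findings_by_category(resp, category):
    """Return findings of one category ('pii' or 'secrets')."""
    return [f for f in resp.get("sds_findings", []) if f["category"] == category]

secrets = findings_by_category(response, "secrets")
print([f["rule"] for f in secrets])  # prints ['aws_access_key']
```

Note that because scanning is detection-only, `action` here is independent of the findings: a response can be `ALLOW` while still carrying `sds_findings`.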
Member
Suggested change:
- When enabled, AI Guard scans the last message in each evaluation call, including user prompts, assistant responses, tool call arguments, and tool call results. Findings are returned in the evaluation response as an `sds_findings` array, where each finding includes the rule name, category (`pii` or `secrets`), matched text, and its location within the message. Findings also appear on APM traces for visibility. Sensitive data scanning is detection-only — findings do not independently trigger blocking.
+ When enabled, AI Guard scans the last message in each evaluation call, including user prompts, assistant responses, tool call arguments, and tool call results. Findings are returned in the evaluation response as an `sds` array, where each finding includes the rule name, category (`pii` or `secrets`), matched text, and its location within the message. Findings also appear on APM traces for visibility. Sensitive data scanning is detection-only — findings do not independently trigger blocking.
Member
In the SDK, the field name is sds, see https://github.com/DataDog/system-tests/pull/6445/changes
But I'd probably wait on that until we release at least one language with access to SDS results from the SDK, and then we can also add API usage examples here.
What does this PR do? What is the motivation?
Fixes APPSEC-61649
Updates the AI Guard public documentation to include Sensitive Data Scanning (SDS) as a third protection capability alongside Prompt Protection and Tool Protection. SDS detects PII (email addresses, phone numbers, SSNs) and secrets (API keys, tokens) in LLM conversations.
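As a toy illustration of the pattern-based detection categories described here, a minimal Python sketch follows. These regexes are purely illustrative assumptions for this example; they are not Datadog's actual SDS rules, which are far more sophisticated.

```python
import re

# Toy sensitive-data detector mimicking the two categories the PR
# describes (PII vs. secrets). Illustrative patterns only.
RULES = {
    "email_address":  ("pii",     re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    "us_ssn":         ("pii",     re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    "aws_access_key": ("secrets", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
}

def scan(text):
    """Return findings as (rule, category, match, start, end) tuples."""
    findings = []
    for rule, (category, pattern) in RULES.items():
        for m in pattern.finditer(text):
            findings.append((rule, category, m.group(), m.start(), m.end()))
    return findings

msg = "Contact bob@example.com, key AKIAABCDEFGHIJKLMNOP"
for finding in scan(msg):
    print(finding)
```

Each finding carries the rule name, category, matched text, and offsets, mirroring the fields the onboarding doc says appear in `sds_findings`.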
Changes
- `_index.md` (Overview)
- `onboarding.md` (Setup guide): covers enabling the feature, the `sds_findings` response field, and detection-only behavior

Merge instructions
Merge readiness:
Additional notes
Follows the pattern of PRs #34867 and #34888. The generic API response example is intentionally kept simple (just `action` and `reason`), consistent with how it already omits other response fields like `tags`, `tag_probs`, and `is_blocking_enabled`. The `sds_findings` field is described in the new SDS subsection instead.

🤖 Generated with Claude Code