
[APPSEC-61649] Add Sensitive Data Scanning to AI Guard docs #35122

Open

rzxdeng wants to merge 1 commit into master from rdeng/APPSEC-61649-ai-guard-sds-docs

Conversation


@rzxdeng rzxdeng commented Mar 9, 2026

What does this PR do? What is the motivation?

Fixes APPSEC-61649

Updates the AI Guard public documentation to include Sensitive Data Scanning (SDS) as a third protection capability alongside Prompt Protection and Tool Protection. SDS detects PII (email addresses, phone numbers, SSNs) and secrets (API keys, tokens) in LLM conversations.

Changes

_index.md (Overview)

  • Added Sensitive Data Protection as a third capability
  • Added sentence about PII and secrets detection

onboarding.md (Setup guide)

  • Updated intro to mention sensitive data
  • Added "Sensitive data scanning" subsection under "Configure AI Guard policies" (step 6), describing what it scans, how to enable it, the sds_findings response field, and detection-only behavior
  • Added link reference [22] for the SDS settings page

Merge instructions

Merge readiness:

  • Ready for merge

Additional notes

Follows the pattern of PRs #34867 and #34888. The generic API response example is intentionally kept simple (just action and reason), consistent with how it already omits other response fields like tags, tag_probs, and is_blocking_enabled. The sds_findings field is described in the new SDS subsection instead.

🤖 Generated with Claude Code

@rzxdeng rzxdeng requested a review from a team as a code owner March 9, 2026 19:39

github-actions bot commented Mar 9, 2026

Add Sensitive Data Protection as a third capability in the AI Guard
overview and document the new sensitive data scanning feature in the
onboarding guide, including how to enable it, what it scans, and
the sds_findings field in the API response.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@rzxdeng rzxdeng force-pushed the rdeng/APPSEC-61649-ai-guard-sds-docs branch from a09e8c7 to 49119a0 on March 9, 2026 at 20:07
{{< /site-region >}}

AI Guard helps secure your AI apps and agents in real time against prompt injection, jailbreaking, tool misuse, and sensitive data exfiltration attacks. This page describes how to set it up so you can keep your data secure against these AI-based threats.
AI Guard helps secure your AI apps and agents in real time against prompt injection, jailbreaking, tool misuse, sensitive data exfiltration attacks, and exposure of sensitive data such as PII and secrets. This page describes how to set it up so you can keep your data secure against these AI-based threats.

Right now I think it's a little bit of a stretch to say we protect against exposure of PII since we don't offer redaction capabilities. Let's reframe around detection for now?
Once we add the redaction for DASH we will be able to claim the protection aspect.


AI Guard can detect personally identifiable information (PII) such as email addresses, phone numbers, and SSNs, as well as secrets such as API keys and tokens, in LLM conversations. To enable sensitive data scanning, go to **AI Guard** > **Settings** > [**Sensitive Data Scanning**][22] for your services.

When enabled, AI Guard scans the last message in each evaluation call, including user prompts, assistant responses, tool call arguments, and tool call results. Findings are returned in the evaluation response as an `sds_findings` array, where each finding includes the rule name, category (`pii` or `secrets`), matched text, and its location within the message. Findings also appear on APM traces for visibility. Sensitive data scanning is detection-only — findings do not independently trigger blocking.
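As a hypothetical sketch of the response shape described above: the example below assumes field names (`rule`, `match`, `start`, `end`) and values based only on what this PR names (an `sds_findings` array whose entries carry a rule name, a category of `pii` or `secrets`, the matched text, and its location), not on the published API reference.

```python
# Hypothetical AI Guard evaluation response including the sds_findings
# array described above. The exact field names inside each finding are
# assumptions for illustration, not the documented schema.
response = {
    "action": "ALLOW",
    "reason": "No threats detected",
    "sds_findings": [
        {
            "rule": "email_address",      # rule name (assumed identifier)
            "category": "pii",            # "pii" or "secrets"
            "match": "jane@example.com",  # matched text
            "start": 24,                  # location within the message
            "end": 40,                    # (assumed offset fields)
        },
    ],
}

# Findings are detection-only: they never change the action, so handle
# them separately rather than treating them as a block signal.
for finding in response.get("sds_findings", []):
    print(f"[{finding['category']}] {finding['rule']}: {finding['match']}")
# prints: [pii] email_address: jane@example.com
```
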

Suggested change
When enabled, AI Guard scans the last message in each evaluation call, including user prompts, assistant responses, tool call arguments, and tool call results. Findings are returned in the evaluation response as an `sds_findings` array, where each finding includes the rule name, category (`pii` or `secrets`), matched text, and its location within the message. Findings also appear on APM traces for visibility. Sensitive data scanning is detection-only — findings do not independently trigger blocking.
When enabled, AI Guard scans the last message in each evaluation call, including user prompts, assistant responses, tool call arguments, and tool call results. Findings are returned in the evaluation response as an `sds` array, where each finding includes the rule name, category (`pii` or `secrets`), matched text, and its location within the message. Findings also appear on APM traces for visibility. Sensitive data scanning is detection-only — findings do not independently trigger blocking.


In the SDK, the field name is sds, see https://github.com/DataDog/system-tests/pull/6445/changes
But I'd probably wait for that until we release at least one language with access to SDS results from the SDK, and then we can also add API usage examples here.

