feat(evaluators): add ATR threat detection evaluator #170
Conversation
Add contrib evaluator using ATR (Agent Threat Rules) community rules:

- 20 regex rules covering OWASP Agentic Top 10
- Configurable severity threshold and category filtering
- Pure regex, no API keys, <5ms evaluation time
- Auto-discovered via entry points

Source: https://agentthreatrule.org (MIT)
- _match_rules now returns all matching rules, not just first match
- Add super().__init__(config) call
- _coerce_to_string scans all priority dict fields, not just first
- Add multi-match test and dict field scanning test
- Backward-compatible: single-match fields still in metadata
lan17 left a comment
Thanks for contributing this. This is a great start, and I really like the idea of having a local/no-key ATR evaluator available as a contrib package.
I left a few comments below. The main things I think we need to tighten before merging are preserving ATR's field/context semantics, avoiding secret leakage through metadata, and putting a harder bound around regex evaluation latency.
```python
for rule in self._compiled_rules:
    for pattern_entry in rule["patterns"]:
        regex: re.Pattern[str] = pattern_entry["regex"]
        match = regex.search(text)
```
I think this loses an important part of the ATR model. The upstream rules are field/context scoped, but here every regex is run over one flattened text blob. For example, ATR-2026-00061 is intended for MCP tool args/tool responses, but this implementation will match something benign like "Please rotate the database credentials tomorrow." because the word `credentials` appears anywhere in the selected data. We should preserve the ATR field/scan-target semantics, or at least make the broad free-text behavior explicit and opt-in.
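A minimal sketch of the difference (the field names and regex here are illustrative, not the actual ATR-2026-00061 pattern):

```python
import re

# Illustrative rule regex: a credential-themed pattern intended only for
# tool args / tool responses under the ATR field model.
CREDENTIALS_RE = re.compile(r"credentials", re.IGNORECASE)

event = {
    "user_input": "Please rotate the database credentials tomorrow.",
    "tool_args": '{"query": "SELECT name FROM users"}',
}

# Flattened scan (current behaviour): the benign user message triggers a hit.
flattened = " ".join(event.values())
assert CREDENTIALS_RE.search(flattened) is not None

# Field-scoped scan (upstream ATR semantics): only the rule's target field
# is checked, so the benign message cannot trigger the rule.
assert CREDENTIALS_RE.search(event["tool_args"]) is None
```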
```python
"title": rule["title"],
"severity": rule["severity"],
"category": rule["category"],
"matched_text": match.group()[:200],
```
This should not include the literal matched secret in result metadata. For credential rules, matched_text can be the AWS key/token/etc. that we just detected, and the SDK observability path copies result metadata into events by default. I'd redact, hash, or omit the matched text by default and only expose a safe snippet/offset if we really need debugging detail.
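One possible shape for a safe finding, assuming we swap the raw text for a hash plus offsets (the `safe_finding` helper and its field names are hypothetical, not the SDK's API):

```python
import hashlib

def safe_finding(match_text: str, start: int, end: int) -> dict:
    """Illustrative sketch: expose a short hash and offsets instead of the
    raw matched secret, so metadata copied into observability events is safe."""
    digest = hashlib.sha256(match_text.encode()).hexdigest()[:16]
    return {
        "matched_sha256": digest,   # stable fingerprint for debugging/dedup
        "match_start": start,       # offsets let a trusted tool re-locate it
        "match_end": end,
        "match_length": end - start,
    }

finding = safe_finding("AKIAIOSFODNN7EXAMPLE", 42, 62)
assert "AKIA" not in str(finding)  # the raw key never reaches metadata
```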
```python
for rule in self._compiled_rules:
    for pattern_entry in rule["patterns"]:
        regex: re.Pattern[str] = pattern_entry["regex"]
        match = regex.search(text)
```
This can also block the event loop for longer than the evaluator timeout. evaluate() is async, but the work here is CPU-bound synchronous regex scanning, so asyncio.wait_for in the engine cannot interrupt it until this function yields. On my machine, benign no-match input took about 66ms for 10KB, 664ms for 100KB, and 6.6s for 1MB. We need a hard input-length cap and/or a different execution strategy before this is safe on latency-sensitive paths.
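One mitigation sketch under those constraints: cap input length, then push the synchronous scan onto a worker thread so `asyncio.wait_for` can at least time out the await without freezing the loop (`MAX_SCAN_BYTES` and `bounded_search` are illustrative names, not the SDK's API):

```python
import asyncio
import re

MAX_SCAN_BYTES = 64 * 1024  # illustrative hard cap on scanned input

async def bounded_search(pattern: re.Pattern, text: str, timeout_s: float = 0.05):
    """Cap input length, then run the CPU-bound regex in a worker thread.
    Note: the timeout abandons the thread rather than killing it, so the
    length cap is still the real safety bound."""
    clipped = text[:MAX_SCAN_BYTES]
    try:
        return await asyncio.wait_for(
            asyncio.to_thread(pattern.search, clipped), timeout=timeout_s
        )
    except asyncio.TimeoutError:
        return None

result = asyncio.run(
    bounded_search(re.compile(r"AKIA[0-9A-Z]{16}"), "key AKIAIOSFODNN7EXAMPLE")
)
assert result is not None
```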
```python
metadata={
    "findings": all_findings,
    "count": len(all_findings),
    "max_severity": all_findings[0]["severity"] if all_findings else None,
```
max_severity is currently just the first finding in rules-file order, not the maximum severity. I hit a case where a high-severity rule appeared before a critical credential rule, and the metadata reported max_severity='high'. This should compute the max using the severity ordering, and the legacy single-match fields probably should point at the highest-severity finding too.
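A minimal sketch of a severity-aware max (the ordering table is an assumption about the rule schema):

```python
# Assumed severity ordering, lowest to highest.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

findings = [
    {"rule": "ATR-X", "severity": "high"},      # appears first in file order
    {"rule": "ATR-Y", "severity": "critical"},  # but this one is the true max
]

# Pick the highest-severity finding; reuse it for the legacy single-match
# fields so they point at the most important hit, not the first one.
top = max(findings, key=lambda f: SEVERITY_ORDER[f["severity"]], default=None)
max_severity = top["severity"] if top else None
assert max_severity == "critical"
```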
```make
	@echo " make lint-fix  - run ruff check --fix"
	@echo " make typecheck - run mypy"
	@echo " make build     - build package"
```
Since this is adding a new supported contrib package, I think we also need to wire it into the repo-level checks and release path. Right now the root test-extras target only runs Galileo, and the release/build scripts only publish the Galileo contrib evaluator. If ATR should be maintained here, it should have root atr-* targets, be included in contrib CI coverage, and be added to build/release/version metadata.
Stepping back, I think the package/entry-point/test skeleton is a good start, but I'd rework the core evaluator before iterating on smaller fixes. The main architectural issue is that this currently behaves like a flattened regex scanner, while ATR is really an event/field-aware rule format. I'd preserve ATR fields, scan targets, and condition logic in typed rule models, adapt Agent Control selector output into an explicit ATR event, and only run each condition against its intended field. That should also make it easier to keep metadata safe, cap runtime, and position this as complementary to yelp.detect_secrets. So my recommendation is: keep the contrib evaluator idea, but redesign the evaluator core around ATR semantics instead of patching the current implementation.
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Addresses @lan17's 2026-04-26 architectural review on PR agentcontrol#170. The previous v0.1 evaluator flattened every input into a single string and scanned every rule's regex against it. @lan17's critique was that ATR is an event/field-aware rule format, not a flat regex scanner, and that the v0.1 design also leaked the raw matched text (including any secret that triggered the rule) through `metadata.matched_text`.

## What v0.2 changes

* `models.py` (new) — typed `ATREvent`, `ATRRule`, `ATRCondition`, `RuleMatch` dataclasses. `ATR_FIELDS` is the canonical field vocabulary (`content`, `user_input`, `agent_output`, `tool_name`, `tool_args`, `tool_description`, `tool_response`, `agent_message`, `skill_manifest`). `ATR_CATEGORY_DEFAULT_FIELD` maps each ATR category to its default field for the legacy flat-pattern auto-upgrade path.
* `ATREvent.from_agent_control_data()` adapts an Agent Control selector input into a typed event with explicit field semantics. Bare strings land in `content`. Dicts map known field names directly. Aliases (`input`/`output`/`text`/`message`) map to canonical fields. Unknown keys are JSON-serialised into `content` so broad content-targeted rules still have something to scan defensively.
* `evaluator.py` (rewritten) — runs each rule condition only against the condition's intended field via `event.get_field(condition.field)`. Conditions with no field value short-circuit without compiling or searching. Rule-level `condition: any|all` is honoured.
* `redact.py` (new) — Python port of the upstream ATR `redactMatchedValue()` helper (`agent-threat-rules@2.1.2` `src/redact.ts`). Recognises AWS access keys, GitHub PATs / OAuth tokens, Slack tokens, OpenAI / Anthropic API keys, Bearer credentials, JWTs, and PEM private keys. Every match is run through this before reaching `EvaluatorResult.metadata` — the v0.1 `matched_text` field is intentionally REMOVED and replaced with `redacted_excerpt`.
* `_wall_clock_search` (new in `evaluator.py`) — bounds per-condition regex evaluation time with `signal.SIGALRM` (default 50 ms) so a pathological pattern cannot block the evaluator pipeline. Configurable via `ATRConfig.condition_budget_ms`. Falls back to a soft wall-clock check on Windows / worker threads.
* `evaluator._normalise_rule` accepts both the modern field-aware `conditions: [{field, operator, value, ...}]` shape AND the legacy flat `patterns: [{pattern, description}]` shape. Legacy patterns are auto-upgraded to conditions targeting the category default field. Existing `rules.json` keeps working without regeneration.
* `pyproject.toml` — version bump to `0.2.0` (minor: API change on the metadata field, but no plugin discovery shape change).

## Why this addresses each of @lan17's concrete asks

| @lan17's review point | Addressed by |
|---|---|
| "preserve ATR fields, scan targets, and condition logic in typed rule models" | `models.py` `ATRRule` + `ATRCondition` + `ATR_FIELDS` |
| "adapt Agent Control selector output into an explicit ATR event" | `ATREvent.from_agent_control_data` |
| "only run each condition against its intended field" | `_evaluate_rule` dispatches via `event.get_field(condition.field)` |
| "avoid metadata leakage" / "keep metadata safe" | `redact.py`; `metadata.matched_text` removed |
| "put a harder bound around regex evaluation latency" | `_wall_clock_search` + `condition_budget_ms` |
| "position complementary to yelp.detect_secrets" | The redacted-excerpt output structure makes the evaluator deliberately non-overlapping with secret scanners; rules only match attack patterns, the redactor only renders the match safely |

## Tests

`tests/test_evaluator.py` rewritten on 2026-05-11 to match the new architecture:

* Field-aware dispatch tests (field isolation, alias mapping, unknown key fallback).
* Redacted-excerpt assertions on every match path — verifies no raw AWS key, GitHub PAT, or matched substring ever appears in `EvaluatorResult.metadata`.
* Condition budget config validation.
* Backwards-compat tests retained for severity / category / block_on_match / on_error policies, with input shapes updated to dicts where field semantics matter.

```
$ pytest evaluators/contrib/atr/tests/test_evaluator.py -v
============================== 31 passed in 0.21s ==============================
```
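For reviewers skimming the diff, the legacy-shape auto-upgrade in `_normalise_rule` could look roughly like this sketch (the category-to-field mapping values and exact dict shapes are illustrative, not the shipped code):

```python
# Illustrative stand-in for models.ATR_CATEGORY_DEFAULT_FIELD.
CATEGORY_DEFAULT_FIELD = {
    "credential-exfiltration": "tool_response",
    "prompt-injection": "user_input",
}

def normalise_rule(rule: dict) -> dict:
    """Upgrade a legacy flat-pattern rule to the field-aware shape."""
    if "conditions" in rule:
        return rule  # modern field-aware shape: keep as-is
    field = CATEGORY_DEFAULT_FIELD.get(rule.get("category", ""), "content")
    rule = dict(rule)
    # Each legacy pattern becomes a condition scoped to the category's
    # default field, so old rules.json files keep working.
    rule["conditions"] = [
        {"field": field, "operator": "regex", "value": p["pattern"]}
        for p in rule.pop("patterns", [])
    ]
    rule.setdefault("condition", "any")
    return rule

legacy = {"id": "ATR-DEMO", "category": "prompt-injection",
          "patterns": [{"pattern": "ignore previous instructions"}]}
upgraded = normalise_rule(legacy)
assert upgraded["conditions"][0]["field"] == "user_input"
```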
@lan17 — apologies for the slow turnaround on your 2026-04-26 review. Pushed a full rewrite in
A few notes on choices I made and would value your input on:
**Tests**

The test file is rewritten with explicit field-aware inputs (dicts with named fields) so every test exercises field isolation. The credential-leak regression test asserts that no raw AWS key, GitHub PAT, or matched substring ever appears in `EvaluatorResult.metadata`.

**Version bump**
**Upstream pointers**

For reviewers who want to compare the rewrite against the upstream ATR semantics:
ATR v2.1.2 shipped today with the same redaction helper as part of the standard library, so downstream consumers that import it directly get the upstream redaction behaviour.

Ready for re-review on this branch when you have time.
Summary
Add a contrib evaluator using ATR (Agent Threat Rules) for regex-based AI agent threat detection. No API keys required.
Resolves #169
Evaluator details
- Evaluator `atr.threat_rules`, registered via the `agent_control.evaluators` entry point
- `ATRConfig` with `min_severity`, `categories`, `block_on_match`, `on_error`

Config options
Key design decisions
- `findings` array + backward-compatible single-match fields.
- `_coerce_to_string`: scans all priority dict fields (`content`, `input`, `output`, `text`, `message`), not just the first.
- `on_error` policy — fail-open returns `error` field, fail-closed returns `matched=True`.

Files
Test plan
`make test` / `make lint` / `make typecheck`