feat: add documentation for new ai risk hub #2624
---
description: The organization's AI Risk Hub dashboard provides an overview of the AI issues detected in the repositories that your organization's AI Policy standard is applied to, together with your organization's risk level based on your AI practices.
---

# AI Risk Hub

The **AI Risk Hub** gives you visibility into the AI usage, dependencies, and risks across your organization's repositories. It brings together AI policy compliance, risk assessment, and a detailed inventory of AI resources found in your codebase.

It also provides an overview of all the AI issues detected in the repositories that your organization's AI Policy standard is applied to, as well as your organization's risk level based on your AI practices. Here, you can navigate through the issues detected in your repositories and filter them by severity and category. You can also filter the issues by selecting specific repositories or using [the segments that you have set up](segments.md).

!!! important
    This dashboard is a Business tier feature, generally available until May 18.
To access the AI Risk Hub, select an organization from the top navigation bar and click **AI Risk** in the left navigation sidebar.

Inside this hub, you can find the following pages to help you monitor the AI risk of your organization:

- [Overview](#overview)
- [AI Inventory](#ai-inventory)

---

## Overview

The **Overview** tab is the main dashboard for monitoring AI risk across your organization. It includes:

- [AI Policy Compliance](#ai-policy-compliance)
- [Risk Level](#risk-level)
- [AI Risk Checklist](#ai-risk-checklist)
- [Repositories with most AI issues](#repositories-with-most-ai-issues)
- [AI Inventory summary](#ai-inventory-summary)

### AI Policy Compliance

This section shows whether your organization has an AI Policy enabled and how your repositories are performing against it.

The AI Policy is a curated set of rules designed to detect AI-related risks in your code. When enabled, Codacy applies AI-specific patterns to your repositories and enforces them on pull request checks. You can enable the policy directly from this section.

Once enabled, the section displays a breakdown of AI issues by **severity** and **category**.

If you already have the AI Policy enabled, an **Edit** button lets you manage which repositories have the policy applied.

The AI Policy covers four categories of AI-specific risks:

#### Unapproved model calls

Detects usage of disallowed or non-compliant AI models in your codebase, giving you visibility into potential compliance violations.

#### AI Safety

Flags missing or incorrect safety practices when using AI-generated or AI-integrated code.

#### Hardcoded secrets

Detects hardcoded API keys, credentials, and secrets related to AI services.

#### Vulnerabilities (insecure dependencies / SCA)

Identifies vulnerable AI-related dependencies and packages through software composition analysis.
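To make the first category concrete, a minimal allow-list check might look like the sketch below. This is an illustration only, not Codacy's implementation: the allow list and model names are hypothetical.

```python
# Hypothetical sketch of an "unapproved model calls" check.
# The allow list and model names are illustrative, not Codacy's configuration.
APPROVED_MODELS = {"gpt-4o", "claude-sonnet-4-5"}

def check_model_call(model_name: str) -> str:
    """Flag model identifiers that are not on the organization's allow list."""
    if model_name not in APPROVED_MODELS:
        return f"unapproved model call: {model_name}"
    return "ok"

print(check_model_call("gpt-4o"))           # -> ok
print(check_model_call("legacy-model-v1"))  # -> unapproved model call: legacy-model-v1
```

In practice such checks run as part of the AI Policy's pull request analysis, so a violation surfaces before the change is merged.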

---

### Risk Level

This panel shows your organization's overall **AI Risk Level**: **High**, **Medium**, or **Low**.

The risk level is calculated based on whether essential AI safeguards have been enabled in Codacy. These safeguards are listed in the [AI Risk Checklist](#ai-risk-checklist).

---

### AI Risk Checklist

The AI Risk Checklist outlines the source code controls that Codacy recommends enabling across your organization:

- **AI Policy enabled:** Enable the AI Policy inside the AI Risk Hub tab.
- **Coverage enabled:** Set up code coverage for your repositories.
- **Enforced gates:** Add quality gates to your repositories and apply gate policies across your organization.
- **Protected pull requests:** Protect pull requests by enforcing quality gates in your Git workflow.
- **Daily vulnerability scans:** Enable Proactive SCA to protect your repositories from dependency vulnerabilities.
- **Applications scanned:** Enable App scanning to scan web applications and APIs for security vulnerabilities.

The more controls you have enabled, the lower your organization's AI risk level.
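Conceptually, the mapping from enabled safeguards to a risk level can be sketched as below. The cutoffs are purely illustrative assumptions; the actual thresholds Codacy uses are not documented here.

```python
# The six checklist controls from the section above.
CHECKLIST = [
    "AI Policy enabled",
    "Coverage enabled",
    "Enforced gates",
    "Protected pull requests",
    "Daily vulnerability scans",
    "Applications scanned",
]

def risk_level(enabled: set) -> str:
    """Map the number of enabled safeguards to a risk level.

    The cutoffs (>= 5 is Low, >= 3 is Medium) are hypothetical, for
    illustration only.
    """
    count = sum(1 for control in CHECKLIST if control in enabled)
    if count >= 5:
        return "Low"
    if count >= 3:
        return "Medium"
    return "High"

print(risk_level({"AI Policy enabled", "Coverage enabled"}))  # -> High
```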

---

### Repositories with most AI issues

This panel shows your repositories ranked by number of open AI issues, in descending order.

You can filter the list by:

- **AI category** (unapproved model calls, AI safety, hardcoded secrets, vulnerabilities)
- **Severity** (critical, high, medium, low, info)
- **Checklist status**
- **Repository** or **segment**

Each entry shows how the repository's AI issue count has changed compared to the previous month.
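The ranking and month-over-month change described above amount to a sort plus a delta per repository. The sketch below uses made-up data and field names to show the idea:

```python
# Hypothetical data; repository names, counts, and field names are illustrative.
repos = [
    {"name": "backend", "open_ai_issues": 42, "last_month": 30},
    {"name": "frontend", "open_ai_issues": 17, "last_month": 25},
    {"name": "ml-service", "open_ai_issues": 58, "last_month": 58},
]

# Rank by open AI issues, descending, and show the change vs. the previous month.
for repo in sorted(repos, key=lambda r: r["open_ai_issues"], reverse=True):
    delta = repo["open_ai_issues"] - repo["last_month"]
    print(f"{repo['name']}: {repo['open_ai_issues']} ({delta:+d} vs last month)")
```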

---

### AI Inventory summary

This section shows a high-level view of the AI resources discovered across your repositories, broken down by provider. For each provider, you can see the number of resources and repositories involved, as well as a breakdown by resource type.

The section surfaces the top AI providers detected in your organization. You can click through to the full [AI Inventory](#ai-inventory) for a detailed view.

---

## AI Inventory

The **AI Inventory** tab gives you a detailed, searchable view of all AI resources discovered across your organization's repositories. Resources are detected through static analysis and represent actual AI usage found in the code, not just configuration.

### Resource types

Codacy detects four types of AI resources:

| Type | Pattern ID | Description |
|------|------------|-------------|
| Model usage | `ai_model_usage` | Direct calls to AI model APIs |
| Dependency | `ai_dependency` | AI SDKs and packages included as dependencies |
| API key | `ai_key` | AI service API keys and credentials found in code |
| Endpoint / env variable | `ai_env_endpoint` | Environment variables and endpoint references for AI services |
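As a rough sketch of how these four types differ in code, a scanner could look for characteristic markers like the ones below. These regexes are deliberately simplistic illustrations, not Codacy's actual detection patterns, which are far richer.

```python
import re

# Illustrative markers for each resource type (NOT Codacy's real patterns).
RESOURCE_PATTERNS = {
    # Dependency: an AI SDK imported as a package.
    "ai_dependency": re.compile(r"^\s*import\s+(openai|anthropic)\b", re.M),
    # API key: a credential literal that looks like an AI service key.
    "ai_key": re.compile(r"['\"]sk-[A-Za-z0-9_-]+['\"]"),
    # Endpoint / env variable: an env var referencing an AI service.
    "ai_env_endpoint": re.compile(
        r"os\.environ(?:\.get)?\(\s*['\"][A-Z_]*(?:OPENAI|ANTHROPIC)[A-Z_]*['\"]"
    ),
    # Model usage: a direct call that names a model.
    "ai_model_usage": re.compile(r"model\s*=\s*['\"][\w.-]+['\"]"),
}

def classify(source: str) -> set:
    """Return the set of AI resource types found in a source snippet."""
    return {kind for kind, pattern in RESOURCE_PATTERNS.items() if pattern.search(source)}

snippet = '''
import openai
import os
key = "sk-EXAMPLE-key"
endpoint = os.environ.get("OPENAI_BASE_URL")
reply = client.chat.completions.create(model="gpt-4o")
'''
print(sorted(classify(snippet)))
```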

### Supported providers

Codacy detects resources from the following AI providers:

- OpenAI
- Anthropic
- Microsoft
- Amazon
- Mistral
- Cohere
- Groq
- Together AI
- Replicate
- DeepSeek
- Pinecone
- Community models

### How it works

The inventory is built from static analysis of your repositories' source code. For each AI resource found, Codacy records:

- Which **provider** the resource belongs to (e.g. OpenAI, Anthropic)
- What **type** of resource it is (model usage, dependency, API key, endpoint)
- The **marker** that identifies it (e.g. model name, package name)
- How many **repositories** contain it
- How many total **references** to it exist
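The fields listed above can be pictured as one record per inventory entry, roughly like this sketch. The field names are illustrative; Codacy's actual schema is not documented here.

```python
from dataclasses import dataclass

@dataclass
class AIResource:
    """One inventory entry, mirroring the fields described above (names are hypothetical)."""
    provider: str        # e.g. "OpenAI", "Anthropic"
    resource_type: str   # "ai_model_usage", "ai_dependency", "ai_key", or "ai_env_endpoint"
    marker: str          # identifying marker, e.g. a model name or package name
    repository_count: int
    reference_count: int

entry = AIResource(
    provider="OpenAI",
    resource_type="ai_model_usage",
    marker="gpt-4o",
    repository_count=3,
    reference_count=12,
)
print(entry.marker)  # -> gpt-4o
```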

### Navigating the inventory

Resources are listed as expandable entries. You can drill into each one to see:

1. **Repositories:** which repositories contain the resource, with file counts and reference counts per repository
2. **Files:** within each repository, the specific files where the resource appears
3. **Lines:** within each file, the exact lines where the resource is referenced, with direct links to the file in your Git provider

### Filtering

You can filter the inventory using the sidebar on the left:

- **Providers:** filter by one or more AI vendors
- **Resource types:** filter by resource type (model usage, dependency, API key, endpoint)
- **Repositories:** filter by specific repository names
- **Segments:** filter by repository segments if segmentation is enabled for your organization

You can reset all filters at once using the **Reset filters** button.