feat: Add strict grounding mode to prevent AI from fabricating answers when context is insufficient #4
Conversation
Co-authored-by: BukeLy <19304666+BukeLy@users.noreply.github.com>
BukeLy left a comment
Why are there only customized prompts for naive mode and global mode?
Pull request overview
This PR introduces a strict grounding mode feature to prevent LLM hallucination when the knowledge base lacks sufficient information to answer queries. The implementation adds enhanced prompt templates with explicit grounding rules that require the AI to acknowledge when context is insufficient rather than fabricating answers using general knowledge.
Key Changes:
- Added two enhanced prompt templates (`ENHANCED_RAG_RESPONSE` and `ENHANCED_NAIVE_RAG_RESPONSE`) with strict grounding instructions prominently positioned at the top
- Implemented configurable strict grounding mode via environment variable (`LIGHTRAG_STRICT_GROUNDING`) or per-tenant configuration (`strict_grounding`)
- Maintained proper configuration priority: tenant custom prompts > environment custom prompts > strict grounding enhanced prompts > LightRAG defaults
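The priority chain above can be sketched roughly as follows. This is an illustrative sketch, not the actual `prompt_manager.py` code; the `resolve_prompt` function and the `LIGHTRAG_PROMPT_*` environment-variable naming are assumptions.

```python
import os

# Illustrative sketch of the configuration priority described above:
# tenant custom prompt > environment custom prompt > strict-grounding
# enhanced prompt > LightRAG default. Names other than
# LIGHTRAG_STRICT_GROUNDING are assumptions, not the real code.

def resolve_prompt(key, tenant_config, default_prompt, enhanced_prompt):
    """Return the prompt to use for `key`, honoring the priority chain."""
    # 1. A tenant-level custom prompt wins outright.
    if tenant_config and tenant_config.get(key):
        return tenant_config[key]
    # 2. An environment-level custom prompt comes next (assumed naming).
    env_prompt = os.environ.get(f"LIGHTRAG_PROMPT_{key.upper()}")
    if env_prompt:
        return env_prompt
    # 3. If strict grounding is enabled, use the enhanced template.
    strict = os.environ.get("LIGHTRAG_STRICT_GROUNDING", "")
    if strict.lower() in ("true", "1", "yes", "on"):
        return enhanced_prompt
    # 4. Otherwise fall back to LightRAG's default prompt.
    return default_prompt
```

The key property is that strict grounding only changes behavior when nothing higher in the chain has already supplied a custom prompt.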
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `src/prompt_manager.py` | Adds enhanced prompt templates with strict grounding rules and implements the configuration logic to enable/disable strict mode globally or per-tenant |
| `src/tenant_config.py` | Updates the configuration model documentation example to include the new `strict_grounding`, `rag_response`, and `naive_rag_response` options |
| `env.example` | Documents the new `LIGHTRAG_STRICT_GROUNDING` environment variable and custom prompt override options |
| `docs/strict_grounding_mode.md` | Comprehensive documentation explaining the problem, solution, usage, configuration priority, and effects, with examples in Chinese |
> 抱歉,根据当前知识库中的内容,我无法找到与您问题直接相关的信息。请尝试:
> - 重新表述您的问题
> - 提供更多上下文信息
> - 确认相关文档是否已上传到知识库

(Translation: "Sorry, based on the content currently in the knowledge base, I could not find information directly relevant to your question. Please try: rephrasing your question; providing more context; confirming that the relevant documents have been uploaded to the knowledge base.")
Copilot AI, Dec 1, 2025
The "unable to answer" message is hardcoded in Chinese. This conflicts with the instruction on line 155 that states "The response MUST be in the same language as the user query."
Consider either:
- Making this message a template variable that can be localized, or
- Adding a note in the prompt that this specific fallback message is an exception to the language matching rule and should be translated to match the query language
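The first option (a localizable template variable) could look roughly like this. `FALLBACK_MESSAGES`, `build_prompt`, and the `{fallback_message}` slot are all illustrative assumptions, not the PR's actual implementation.

```python
# Hypothetical sketch: the "unable to answer" fallback becomes a localizable
# template variable instead of a hardcoded Chinese string. All names here
# are illustrative assumptions, not code from the PR.

FALLBACK_MESSAGES = {
    "zh": "抱歉,根据当前知识库中的内容,我无法找到与您问题直接相关的信息。",
    "en": "Sorry, I could not find information directly relevant to your "
          "question in the current knowledge base.",
}

def build_prompt(template: str, query_lang: str) -> str:
    """Fill the {fallback_message} slot with a message matching the query language."""
    # Unknown languages fall back to English rather than failing.
    message = FALLBACK_MESSAGES.get(query_lang, FALLBACK_MESSAGES["en"])
    return template.format(fallback_message=message)
```

This keeps the language-matching rule intact: the fallback text is selected per query instead of being baked into the prompt template.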
src/prompt_manager.py
Outdated
```python
        tenant_key="strict_grounding",
        tenant_config=tenant_config
    )
    return use_strict is not None and use_strict.lower() in ("true", "1", "yes", "on")
```
Copilot AI, Dec 1, 2025
The string comparison with `.lower()` could raise an `AttributeError` if `use_strict` is not a string. The `_get_prompt_value` function returns `str | None`, but if someone passes a boolean value through tenant configuration (e.g., `{"strict_grounding": true}` instead of `{"strict_grounding": "true"}`), this will fail.

Consider adding a `str()` coercion:

```diff
- return use_strict is not None and use_strict.lower() in ("true", "1", "yes", "on")
+ return use_strict is not None and str(use_strict).lower() in ("true", "1", "yes", "on")
```
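The `str()` coercion works because `str(True).lower()` is `"true"`, so booleans arriving through JSON tenant config behave the same as the string `"true"` from an environment variable. A small standalone sketch (the `parse_bool_flag` name is illustrative, not from the PR):

```python
def parse_bool_flag(value) -> bool:
    """Interpret a truthy flag that may arrive as str, bool, int, or None.

    str(True).lower() == "true", so coercing with str() handles booleans
    passed through JSON tenant config as well as strings from environment
    variables. The function name is illustrative, not from the PR.
    """
    return value is not None and str(value).lower() in ("true", "1", "yes", "on")
```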
> - Strictly adhere to the provided context from the **Context**; DO NOT invent, assume, or infer any information not explicitly stated.
> - If the answer cannot be found in the **Context**, state that you do not have enough information to answer. Do not attempt to guess.
Copilot AI, Dec 1, 2025
[nitpick] The grounding instructions are duplicated between the "Critical Grounding Rules" section (lines 31-55) and the "Content & Grounding" section (lines 68-70). While repetition can reinforce important instructions, this creates maintenance overhead and potential inconsistencies.
Consider:
- Keeping only the more detailed "Critical Grounding Rules" section and removing the redundant "Content & Grounding" section, or
- Adding a reference in "Content & Grounding" that points back to the critical rules rather than repeating them
This same issue exists in ENHANCED_NAIVE_RAG_RESPONSE (lines 113-137 vs 150-152).
Suggested change:

```diff
- - Strictly adhere to the provided context from the **Context**; DO NOT invent, assume, or infer any information not explicitly stated.
- - If the answer cannot be found in the **Context**, state that you do not have enough information to answer. Do not attempt to guess.
+ - See the **Critical Grounding Rules (MUST FOLLOW)** section above for all requirements regarding use of context and handling insufficient information.
```
env.example
Outdated
```shell
# When enabled, the AI will explicitly refuse to answer when the knowledge
# base lacks sufficient information, instead of fabricating an answer
# Allowed values: true, false (default false; leaves LightRAG's original behavior unchanged)
LIGHTRAG_STRICT_GROUNDING=true
```
Copilot AI, Dec 1, 2025
[nitpick] Documentation inconsistency: the documentation states the default behavior is false when strict grounding is not set (line 263: "default false; LightRAG's original behavior is unchanged"), yet line 264 shows the example as `LIGHTRAG_STRICT_GROUNDING=true`.

Looking at the code in prompt_manager.py, when `use_strict_grounding` is False the code doesn't set any prompt at all (lines 285-287, 298-300), so whatever LightRAG's default prompt is will be used. This is correct behavior, but env.example should make it clearer that:

- When not set or set to false: uses LightRAG's original prompts (no change)
- When set to true: uses enhanced prompts with strict grounding

Consider updating line 264 to show the disabled state as the example:

```shell
# LIGHTRAG_STRICT_GROUNDING=true  # Uncomment to enable
```

Or add a comment clarifying the default:

```shell
# Default: commented out or set to false (uses LightRAG's native behavior)
LIGHTRAG_STRICT_GROUNDING=true
```

Suggested change:

```diff
- LIGHTRAG_STRICT_GROUNDING=true
+ # Default: commented out or set to false (uses LightRAG's native behavior)
+ # LIGHTRAG_STRICT_GROUNDING=true  # Uncomment to enable strict grounding
```
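The default-off semantics described here can be sketched as follows; `apply_strict_grounding` and the `prompts` dict are illustrative stand-ins for the real injection logic in prompt_manager.py.

```python
import os

# Sketch of the default-off semantics: when LIGHTRAG_STRICT_GROUNDING is
# unset or false-y, no prompt is injected and LightRAG's original prompt
# survives unchanged. The function and dict are illustrative stand-ins.

def apply_strict_grounding(prompts: dict, enhanced: str) -> dict:
    flag = os.environ.get("LIGHTRAG_STRICT_GROUNDING")
    if flag is not None and flag.lower() in ("true", "1", "yes", "on"):
        # Enabled: override the response prompt with the enhanced template.
        return dict(prompts, rag_response=enhanced)
    # Disabled or unset: return the prompts untouched (native behavior).
    return prompts
```

Commenting the variable out and setting it to `false` are therefore equivalent, which is why the suggested env.example shows the variable commented out by default.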
Co-authored-by: BukeLy <19304666+BukeLy@users.noreply.github.com>
The LightRAG library defines only two response prompts: `rag_response` and `naive_rag_response`.

This is a design decision of LightRAG, not an omission. A "prompt to query mode mapping" table has been added to the documentation to explain this (commit 4cc8df8).
BukeLy left a comment
A custom prompts feature was implemented before, in the tenant configuration. Does this duplicate existing functionality?
Co-authored-by: BukeLy <19304666+BukeLy@users.noreply.github.com>
There is no functional duplication. The original … This update extends the feature, adding support for the query response prompts ( … ). In short:

The two are complementary, not duplicated.
BukeLy left a comment
LightRAG's original behavior is already to generate a "No context xxxx" response when there is no clear context, so why would switching to a different LLM cause this feature to fail?
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated no new comments.
- `rag_response` and `naive_rag_response` prompts
- Custom support for `rag_response` and `naive_rag_response`
- (`- [n]` rather than `* [n]`)
- `_is_strict_grounding_enabled` helper function to reduce code duplication
- (`true` and `True`)

Original prompt