| Version | Supported |
|---|---|
| 0.9.x | ✅ |
| < 0.9 | ❌ |
We take security vulnerabilities seriously. If you discover a security issue, please report it responsibly.
- Do NOT create a public GitHub issue for security vulnerabilities
- Use the "Report a security vulnerability" issue template on GitHub (this creates a private security advisory visible only to maintainers)
- Include as much detail as possible:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Any suggested fixes
- Acknowledgment: We will acknowledge receipt within 48 hours
- Initial Assessment: Within 7 days, we will provide an initial assessment
- Updates: We will keep you informed of our progress
- Resolution: We aim to resolve critical issues within 30 days
- Credit: We will credit you in the security advisory (unless you prefer anonymity)
This security policy covers:
- The
nxuskit-enginelibrary crate (nxuskit-core C ABI layer) - The
nxuskitRust wrapper crate - The
nxuskit-goGo SDK - The
nxuskit-pyPython SDK - The
nxuskit-clibinary crate - The licensing and authentication infrastructure
- Official examples and documentation
- Third-party dependencies (report to their maintainers)
- LLM provider APIs (report to respective providers)
- Issues in user code that uses this library
nxusKit handles API keys for various LLM providers. Best practices:
- Never commit API keys to version control
- Use environment variables for API keys
- The library does not log or persist API keys
- API keys are only sent to their respective provider endpoints
- All API calls use HTTPS
- Certificate validation is enforced by default
- No sensitive data is logged at default log levels
- Dependencies are regularly audited using
cargo audit - We minimize dependencies to reduce attack surface
- All dependencies are from crates.io with verified publishers where possible
- Environment Variables: Store API keys in environment variables, not in code
- Minimal Permissions: Use API keys with minimal required permissions
- Key Rotation: Rotate API keys regularly
- Logging: Be careful not to log request/response content containing sensitive data
- Input Validation: Validate and sanitize user inputs before sending to LLMs
LLMs are susceptible to prompt injection attacks. This library does not provide protection against prompt injection - that is the responsibility of the application developer. Consider:
- Validating and sanitizing user inputs
- Using system prompts to establish boundaries
- Not blindly executing LLM outputs
Content sent to LLM providers is processed according to each provider's terms of service and privacy policy. Ensure compliance with your data handling requirements.
We thank the security researchers who have helped improve the security of this project:
- (No reports yet)