diff --git a/.cursor/rules/README.md b/.cursor/rules/README.md new file mode 100644 index 00000000..b15b7b1a --- /dev/null +++ b/.cursor/rules/README.md @@ -0,0 +1,115 @@ +# Cursor Rules - Optimized Structure + +This directory contains optimized Cursor rules that follow the latest specification and best practices, including an **auto-improvement system** that automatically detects patterns and suggests rule enhancements. + +## Rule Structure + +### Core Rules +- **[cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc)** - Main specification and formatting guidelines +- **[self_improve.mdc](mdc:.cursor/rules/self_improve.mdc)** - Continuous improvement and pattern recognition +- **[auto_improvement.mdc](mdc:.cursor/rules/auto_improvement.mdc)** - **NEW**: Automatic rule improvement system + +### Domain-Specific Rules +- **[authentication_best_practices.mdc](mdc:.cursor/rules/authentication/authentication_best_practices.mdc)** - Security and authentication guidelines +- **[hotchocolate_best_practices.mdc](mdc:.cursor/rules/hotchocolate/hotchocolate_best_practices.mdc)** - GraphQL and HotChocolate best practices + +### Workflow Rules +- **[development_workflow.mdc](mdc:.cursor/rules/development_workflow.mdc)** - Development process and project organization + +### Automation Tools +- **[pattern_detector.ps1](mdc:.cursor/rules/pattern_detector.ps1)** - **NEW**: PowerShell script for automatic pattern detection + +## 🚀 Auto-Improvement System + +### What It Does +The auto-improvement system automatically detects when the same patterns appear multiple times in your codebase and suggests rule improvements: + +- **Pattern Detection**: Identifies recurring code patterns across 3+ files +- **Rule Suggestions**: Automatically generates new rule suggestions +- **Quality Metrics**: Assesses rule completeness and effectiveness +- **Cross-Referencing**: Maintains rule relationships and dependencies + +### How It Works +1. 
**Automatic Detection**: Monitors code for recurring patterns +2. **Threshold Triggers**: Suggests improvements when patterns appear 3+ times +3. **Rule Generation**: Creates comprehensive rule suggestions +4. **Quality Assessment**: Evaluates rule effectiveness and completeness + +### Pattern Categories Detected +- **Architectural Patterns**: Service registration, DI, middleware +- **Code Quality**: Error handling, logging, validation +- **Performance**: Database queries, caching, async patterns +- **Security**: Authentication, authorization, input validation + +### Usage +```powershell +# Run pattern detection (PowerShell) +.\pattern_detector.ps1 -ProjectRoot . -PatternThreshold 3 + +# Customize detection +.\pattern_detector.ps1 -PatternThreshold 5 -OutputFile "custom_analysis.json" +``` + +## Rule Format + +All rules follow this standardized format: + +```markdown +--- +description: Clear, one-line description of what the rule enforces +globs: path/to/files/*.ext, other/path/**/* +alwaysApply: boolean +--- + +# Rule Title + +## Section + +- **Key Point in Bold** + - Sub-points with details + - Examples and explanations +``` + +## Usage + +1. **Always Apply Rules**: These rules are automatically applied to all relevant files +2. **Domain Rules**: Apply to specific file types or directories +3. **Cross-References**: Rules reference each other for consistency +4. 
**Auto-Improvement**: Rules automatically improve based on detected patterns + +## Optimization Benefits + +- **Eliminated Duplication**: Consolidated similar rules into comprehensive files +- **Standardized Format**: All rules follow the same structure and metadata +- **Proper Cross-Referencing**: Rules link to each other for maintainability +- **Focused Scope**: Each rule file has a clear, specific purpose +- **Latest Specification**: Follows current Cursor rules best practices +- **🚀 Auto-Improvement**: Rules automatically evolve based on codebase patterns + +## Maintenance + +- **Automatic**: The auto-improvement system detects and suggests updates +- **Manual**: Update rules when new patterns emerge +- **Examples**: Add examples from actual codebase +- **Cross-Reference**: Rules automatically maintain references +- **Quality**: Follow the self-improvement guidelines in [self_improve.mdc](mdc:.cursor/rules/self_improve.mdc) + +## Auto-Improvement Workflow + +1. **Pattern Detection**: System monitors code for recurring patterns +2. **Threshold Trigger**: When pattern appears 3+ times, system suggests improvement +3. **Rule Suggestion**: Generates comprehensive rule or update suggestion +4. **Quality Assessment**: Evaluates rule completeness and effectiveness +5. **Implementation**: Apply approved rule improvements +6. **Continuous Monitoring**: System continues to detect new patterns + +## Backup + +A complete backup of all rules is available in `cursor-rules-backup-YYYYMMDD-HHMMSS.zip` in the project root. + +## Next Steps + +1. **Run Pattern Detection**: Execute `.\pattern_detector.ps1` to analyze your codebase +2. **Review Suggestions**: Check generated rule suggestions in `pattern_analysis.json` +3. **Implement Improvements**: Apply high-priority rule enhancements +4. 
**Monitor Quality**: Use the auto-improvement system for continuous enhancement diff --git a/.cursor/rules/authentication/authentication_best_practices.mdc b/.cursor/rules/authentication/authentication_best_practices.mdc new file mode 100644 index 00000000..cf94d668 --- /dev/null +++ b/.cursor/rules/authentication/authentication_best_practices.mdc @@ -0,0 +1,63 @@ +--- +alwaysApply: false +description: "Security guidelines for authentication and session management" +globs: ["**/*.cs", "**/Controllers/**/*", "**/Services/**/*"] +--- + +# Authentication Best Practices + +## Session Security +- **Always use HTTP-only cookies for session/auth tokens** + - Prevents XSS attacks by making tokens inaccessible to JavaScript + - Example (ASP.NET Core): + ```csharp + services.ConfigureApplicationCookie(options => + { + options.Cookie.HttpOnly = true; + options.Cookie.SecurePolicy = CookieSecurePolicy.Always; + options.Cookie.SameSite = SameSiteMode.Strict; + }); + ``` + +## Token Management +- **Use secure, short-lived tokens (JWT or MSAL best practices)** + - Set appropriate expiration and validate tokens on every request + - Example (JWT): + ```csharp + var token = new JwtSecurityToken( + expires: DateTime.UtcNow.AddMinutes(30), + ... 
// other claims + ); + ``` + +## OAuth Integration +- **Integrate social logins via secure OAuth/OpenID Connect flows** + - Use official providers and never expose secrets in client-side code + - Example (ASP.NET Core): + ```csharp + services.AddAuthentication().AddGoogle(options => + { + options.ClientId = Configuration["Authentication:Google:ClientId"]; + options.ClientSecret = Configuration["Authentication:Google:ClientSecret"]; + }); + ``` + +## Security Principles +- **Never expose secrets or tokens in client-side code** + - All sensitive operations must be server-side only + +## MSAL Integration +- **Use MSAL or equivalent for secure token acquisition and storage** + - Example (MSAL): + ```csharp + var app = PublicClientApplicationBuilder.Create(ClientId) + .WithAuthority(AzureCloudInstance.AzurePublic, TenantId) + .WithRedirectUri("http://localhost") + .Build(); + var result = await app.AcquireTokenInteractive(scopes).ExecuteAsync(); + ``` + +## References +- Context7/MSAL.NET documentation +- Microsoft authentication documentation +- Project AuthController/AuthMutations implementation diff --git a/.cursor/rules/auto_improvement.mdc b/.cursor/rules/auto_improvement.mdc new file mode 100644 index 00000000..8cc8831b --- /dev/null +++ b/.cursor/rules/auto_improvement.mdc @@ -0,0 +1,268 @@ +--- +alwaysApply: true +description: "Continuously improve development patterns by detecting repetitive code and suggesting standardized approaches" +globs: ["**/*.cs", "**/*.ts", "**/*.js", "**/*.tsx", "**/*.jsx"] +--- + +# 🔄 INTELLIGENT PATTERN DETECTION & RULE EVOLUTION + +## 🎯 CORE MISSION +Automatically detect repetitive patterns in code and evolve development rules to maintain consistency, reduce technical debt, and improve code quality. 
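To make the core mission concrete, here is a minimal, hypothetical sketch of threshold-based duplicate-block detection — this is *not* the actual Cursor mechanism; the function names, window size, and threshold are assumptions for illustration only:

```typescript
// Minimal sketch (illustrative only): flag blocks of consecutive lines that
// recur at least `threshold` times across a set of files. A real detector
// would normalize identifiers and compare at the AST level, not raw text.

function normalizeLine(line: string): string {
  return line.trim().replace(/\s+/g, " ");
}

function detectRepeatedBlocks(
  files: Record<string, string>, // file name -> file contents
  windowSize: number = 3,        // lines per candidate block
  threshold: number = 3          // occurrences needed to flag a pattern
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const source of Object.values(files)) {
    const lines = source
      .split("\n")
      .map(normalizeLine)
      .filter((l) => l.length > 0);
    // Slide a window over the file and count each normalized block.
    for (let i = 0; i + windowSize <= lines.length; i++) {
      const key = lines.slice(i, i + windowSize).join("\n");
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
  // Keep only blocks that crossed the suggestion threshold.
  for (const [key, n] of counts) {
    if (n < threshold) {
      counts.delete(key);
    }
  }
  return counts;
}
```

A block surviving this filter would correspond to a "suggest abstraction" trigger; tuning `threshold` here mirrors tuning `-PatternThreshold` on `pattern_detector.ps1`.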
---

## 🔍 PATTERN DETECTION SYSTEM

### **Immediate Detection Triggers**
| Pattern Type | Threshold | Action Required |
|-------------|-----------|-----------------|
| **Identical Code Blocks** | 2+ occurrences | Suggest abstraction |
| **Similar Error Handling** | 3+ variations | Propose standard pattern |
| **Configuration Patterns** | 2+ services | Create shared configuration |
| **API Response Patterns** | 3+ controllers | Standardize response format |
| **Database Query Patterns** | 3+ repositories | Suggest base repository |

### **Real-Time Detection Categories**

#### **🏗️ Architectural Patterns**
```csharp
// DETECT: When this appears 2+ times across different files
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(connectionString));
builder.Services.AddScoped<IUserRepository, UserRepository>();
```
**AUTO-SUGGEST**: Create `ServiceRegistrationExtensions.cs`

#### **⚠️ Error Handling Patterns**
```csharp
// DETECT: When similar try-catch blocks appear 3+ times
try
{
    var result = await _service.DoSomethingAsync();
    return Ok(result);
}
catch (Exception ex)
{
    _logger.LogError(ex, "Error in {Method}", nameof(DoSomething));
    return StatusCode(500, "Internal server error");
}
```
**AUTO-SUGGEST**: Create standardized error handling middleware

#### **🔧 Configuration Patterns**
```csharp
// DETECT: When similar configurations appear in 2+ services
services.AddMassTransit(x =>
{
    x.UsingRabbitMq((context, cfg) => {
        cfg.Host("localhost", "/", h => {
            h.Username("guest");
            h.Password("guest");
        });
    });
});
```
**AUTO-SUGGEST**: Create shared configuration extension

---

## 🚀 AUTO-IMPROVEMENT WORKFLOW

### **Step 1: Real-Time Pattern Analysis**
```
🔍 SCANNING...
├── Analyzing current file changes
├── Comparing with existing codebase patterns
├── Identifying similarity thresholds
└── Flagging improvement opportunities
```

### **Step 2: Intelligent Suggestion Generation**
When patterns are detected, automatically generate:

#### **📋 Immediate Actionable Suggestions**
- **Extract Method**: "I notice this validation logic appears in 3 files. Should I extract it to a shared utility?"
- **Create Extension**: "This service registration pattern repeats. Should I create a `ServiceCollectionExtensions`?"
- **Standardize Pattern**: "Error handling varies across controllers. Should I create a standard approach?"

#### **📝 Rule Updates**
- **Add New Rule**: When new patterns emerge
- **Update Existing Rule**: When patterns evolve
- **Deprecate Rule**: When patterns become obsolete

### **Step 3: Implementation Assistance**
```csharp
// BEFORE: Detected pattern in 3+ files
public async Task<IActionResult> GetUsers()
{
    try { /* logic */ }
    catch (Exception ex) { /* handling */ }
}

// AFTER: Suggested improvement
[HttpGet]
[StandardErrorHandling] // Custom attribute
public async Task<IActionResult> GetUsers()
{
    var result = await _service.GetUsersAsync();
    return Ok(result);
}
```

---

## 🎛️ PATTERN CLASSIFICATION SYSTEM

### **🟢 TIER 1: CRITICAL PATTERNS (Immediate Action)**
- **Security vulnerabilities** (SQL injection, XSS patterns)
- **Performance anti-patterns** (N+1 queries, missing async)
- **Memory leaks** (undisposed resources)

### **🟡 TIER 2: CONSISTENCY PATTERNS (Suggest Standardization)**
- **Naming conventions** variations
- **Error handling** inconsistencies
- **Logging format** differences
- **API response** structure variations

### **🔵 TIER 3: OPTIMIZATION PATTERNS (Recommend Improvements)**
- **Code duplication** opportunities
- **Abstraction** possibilities
- **Performance** enhancements

---

## 🧠 INTELLIGENT RULE EVOLUTION

### **Dynamic Rule Generation**
```markdown
## AUTO-GENERATED: API Response Standardization
**Detected**: 5 different response patterns across controllers
**Suggested**: Standardized ApiResponse wrapper

### Implementation:
```csharp
public class ApiResponse<T>
{
    public bool Success { get; set; }
    public T Data { get; set; }
    public string Message { get; set; }
    public List<string> Errors { get; set; }
}
```

**Next Steps**:
1. Create `ApiResponse<T>` class
2. Update controllers to use standard response
3. Create extension methods for common responses
```

### **Rule Consolidation Intelligence**
**BEFORE**: 5 separate rules for API patterns
**AFTER**: 1 comprehensive "API Development Standards" rule

### **Cross-Project Learning**
- **Pattern Export**: Save successful patterns for reuse
- **Best Practice Evolution**: Continuously refine based on outcomes
- **Anti-Pattern Detection**: Learn from problematic patterns

---

## 🔧 PRACTICAL IMPLEMENTATION

### **For Active Development**
```csharp
// 1. AS YOU TYPE: Real-time pattern detection
// 2. ON SAVE: Pattern analysis and suggestions
// 3. ON COMMIT: Comprehensive pattern review
```

### **Specific Triggers**
- **File Save**: Analyze current file for patterns
- **Multi-file Edit**: Cross-file pattern detection
- **Code Review**: Suggest standardizations
- **Refactoring**: Recommend pattern improvements

### **User Interaction**
```
🤖 CURSOR: "I notice you're implementing user validation.
I see similar logic in UserController.cs and AdminController.cs.

Would you like me to:
1. ✅ Extract to shared ValidationService
2. ✅ Create UserValidationAttribute
3. ✅ Update existing files to use new pattern
4.
❌ Skip this suggestion" +``` + +--- + +## 📊 SUCCESS METRICS & FEEDBACK + +### **Automatic Quality Assessment** +- **Code Duplication Reduction**: Track % decrease +- **Consistency Score**: Measure pattern adherence +- **Bug Reduction**: Monitor issues related to inconsistency +- **Development Velocity**: Measure time savings + +### **Pattern Effectiveness Tracking** +``` +📈 PATTERN SUCCESS METRICS: +├── Times Pattern Suggested: 15 +├── Times Pattern Adopted: 12 (80%) +├── Bugs Prevented: 3 +├── Development Time Saved: 4.5 hours +└── Developer Satisfaction: 4.2/5 +``` + +### **Continuous Learning Loop** +1. **Detect** patterns in code +2. **Suggest** improvements +3. **Track** adoption rates +4. **Measure** impact +5. **Refine** detection algorithms + +--- + +## 🎯 ACTIONABLE OUTCOMES + +### **Immediate Benefits** +- ✅ **Reduce Copy-Paste**: Detect and prevent code duplication +- ✅ **Enforce Standards**: Automatically suggest consistent patterns +- ✅ **Prevent Technical Debt**: Catch anti-patterns early +- ✅ **Accelerate Development**: Reuse proven patterns + +### **Long-term Evolution** +- 🚀 **Self-Improving Codebase**: Patterns get better over time +- 🚀 **Team Learning**: Share successful patterns across team +- 🚀 **Quality Consistency**: Maintain standards automatically +- 🚀 **Knowledge Preservation**: Capture and reuse best practices + +--- + +## 🔄 INTEGRATION WITH EXISTING WORKFLOW + +### **Respects Current Rules** +- ✅ Never conflicts with established patterns +- ✅ Builds upon existing rule structure +- ✅ Maintains backward compatibility +- ✅ Enhances rather than replaces + +### **Complements Development Flow** +``` +💻 DEVELOPER WORKFLOW: +1. Write code → 2. Pattern detected → 3. Suggestion offered → 4. Choose to apply → 5. Pattern improved + ↓ + Codebase becomes more consistent and maintainable +``` + +--- + +## 🎖️ GOLDEN PRINCIPLES + +> **"DETECT EARLY, SUGGEST GENTLY, IMPROVE CONTINUOUSLY"** + +1. 
**Never Force**: Always suggest, never automatically change +2. **Learn Constantly**: Adapt patterns based on team preferences +3. **Stay Relevant**: Deprecate outdated patterns automatically +4. **Measure Impact**: Track effectiveness of suggestions +5. **Respect Context**: Understand when patterns don't apply \ No newline at end of file diff --git a/.cursor/rules/consolidated_workflow.mdc b/.cursor/rules/consolidated_workflow.mdc new file mode 100644 index 00000000..35551cd8 --- /dev/null +++ b/.cursor/rules/consolidated_workflow.mdc @@ -0,0 +1,127 @@ +--- +alwaysApply: true +description: "Comprehensive development workflow guidelines combining task management, code quality, and project organization" +globs: ["**/*"] +--- + +# Development Workflow & Task Management Guidelines + +## Task Management Integration + +### Task Structure & Status Management +- **Always include both title and description** for tasks and subtasks +- Use consistent status values: `pending`, `in-progress`, `done`, `deferred`, `cancelled` +- Mark subtasks as completed before marking parent tasks as done +- Update task statuses regularly during development +- Establish clear task dependencies using IDs and validate before starting work + +### Task Master Development Workflow +- Begin coding sessions with `get_tasks` / `task-master list` to see current tasks and status +- Determine next task using `next_task` / `task-master next` +- Analyze complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` +- View specific task details using `get_task` / `task-master show ` +- Break down complex tasks using `expand_task` / `task-master expand --id= --force --research` +- Mark completed tasks with `set_task_status` / `task-master set-status --id= --status=done` +- Update tasks when implementation differs from plan using `update_task` or `update` + +## Command Execution & Tool Usage + +### MCP Server Command Execution +- **ALWAYS execute build/test/npm/etc via the MCP 
server mcp-server-commands tool** +- Wrap commands with `tools/exec.ps1` for reliable execution +- Pass each argument as its own `-Args` token +- Avoid pipes; rely on RUN_COMPLETE markers and TRX/results files for completion detection +- For test runs, prefer wrapping scripts with: `pwsh -NoProfile -File tools/exec.ps1 -Exe pwsh -Args -NoProfile,-File,artifacts/test/run-tests.ps1,-Configuration,Debug,-Verbosity,minimal,-ResultsDir,artifacts/test,-LogFileName,test-results.trx,-TimeoutSec,900` +- The wrapper writes unique RUN_COMPLETE.marker and JSON summary files +- Auto-detects TRX path from --logger/--results-directory parameters + +### Command Execution Safety +- Use `run_terminal_cmd` for all shell/CLI actions (including curl commands) +- Define serial vs parallel modes: Serial commands wait for output, parallel commands use background flags +- Include safety & hygiene: idempotent starts, timeouts/retries, capture relevant output +- Maintain name/PID registry for command tracking + +## Project Organization + +### File Structure & Naming +- Organize code by domain/feature rather than technical layers +- Use consistent naming conventions across the project +- Maintain clear separation between API, Services, and Data layers +- Follow existing patterns and conventions in the codebase + +### Configuration Management +- Store sensitive data in environment variables +- Use `appsettings.json` for non-sensitive configuration +- Override configuration via Docker environment variables in production +- Validate environment configuration on startup + +## Code Quality Standards + +### Core Principles +- Follow SOLID principles and DRY (Don't Repeat Yourself) +- Never expose or leak internal data models outside the service layer +- Use structured logging with proper loggers throughout the codebase +- Implement comprehensive error handling and validation +- Follow security best practices and never commit secrets + +### Testing Strategy +- Write tests for all business logic +- 
Use integration tests to verify service interactions +- Maintain test coverage above 80% +- Run tests before committing code +- Verify tasks according to test strategies before marking complete + +### Documentation +- Keep README files up to date +- Document API endpoints with Swagger/OpenAPI +- Maintain inline code documentation for complex logic +- Update documentation when APIs change + +## Service Architecture + +### Docker & Containerization +- Use Docker Compose for local development +- Ensure all services can start independently +- Configure health checks for service monitoring +- Use named volumes for persistent data +- Set appropriate environment variables for each service + +### Service Communication +- Use MassTransit with RabbitMQ for inter-service communication +- Implement proper retry policies and circuit breakers +- Monitor service health and performance +- Use structured logging for debugging + +### Database Management +- Use Entity Framework Core with PostgreSQL +- Implement proper migrations for schema changes +- Use connection pooling for performance +- Monitor database performance and connections + +## Continuous Improvement Process + +### Code Review & Quality +- Review all code changes before merging +- Use automated tools for code quality checks +- Address feedback promptly and thoroughly +- Learn from code review comments to improve future code + +### Performance & Monitoring +- Monitor application performance metrics +- Use OpenTelemetry for distributed tracing +- Implement proper health checks +- Monitor resource usage and optimize accordingly + +### Pattern Recognition & Rule Updates +- Compare new code with existing rules and patterns +- Identify repeated implementations that should be standardized +- Look for common error patterns that could be prevented +- Monitor emerging best practices in the codebase +- Create new rules when technology/pattern is used in 3+ files +- Update existing rules when better examples exist or edge cases 
discovered + +## References +- Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for rule structure +- Follow [authentication_best_practices.mdc](mdc:.cursor/rules/authentication/authentication_best_practices.mdc) for security +- Follow [hotchocolate_best_practices.mdc](mdc:.cursor/rules/hotchocolate/hotchocolate_best_practices.mdc) for GraphQL +- Follow [taskmaster.mdc](mdc:.cursor/rules/taskmaster.mdc) for Task Master usage \ No newline at end of file diff --git a/.cursor/rules/critical-command-execution-rule.mdc b/.cursor/rules/critical-command-execution-rule.mdc new file mode 100644 index 00000000..6596a5f3 --- /dev/null +++ b/.cursor/rules/critical-command-execution-rule.mdc @@ -0,0 +1,164 @@ +--- +alwaysApply: true +description: "Critical safety protocol for development command execution" +globs: ["**/*"] +--- + +# 🛡️ CRITICAL DEVELOPMENT SAFETY PROTOCOL + +## 🚨 MANDATORY PRE-ACTION VERIFICATION + +### **STEP 1: UNDERSTAND BEFORE ACTING** +```bash +# ALWAYS run these commands FIRST: +dotnet build --verbosity minimal # Check build status +docker-compose ps # Check container status +git status # Check current changes +git log --oneline -5 # See recent commits +``` + +**🔍 INVESTIGATION REQUIREMENTS:** +- **NEVER** assume the problem - verify it exists +- **NEVER** assume the solution - understand the root cause +- **ALWAYS** ask clarifying questions if the request is ambiguous +- **ALWAYS** explain your understanding back to the user before proceeding + +--- + +## 🚫 ABSOLUTE PROHIBITIONS + +### **FRAMEWORK & VERSION CHANGES** +- ❌ **NEVER** change `TargetFramework` without explicit user approval +- ❌ **NEVER** downgrade .NET versions (e.g., net9.0 → net8.0) +- ❌ **NEVER** change major package versions without user consent +- ❌ **NEVER** modify Docker base images without approval + +### **ARCHITECTURAL CHANGES** +- ❌ **NEVER** restructure projects without explicit request +- ❌ **NEVER** change dependency injection patterns arbitrarily +- ❌ **NEVER** 
modify `Directory.Packages.props` without understanding impact +- ❌ **NEVER** remove packages without confirming their purpose + +### **DATA & CONFIGURATION** +- ❌ **NEVER** modify database schemas without explicit approval +- ❌ **NEVER** change environment configurations in production files +- ❌ **NEVER** delete configuration files or sections without confirmation + +--- + +## ✅ MANDATORY PROTOCOLS + +### **BEFORE ANY CHANGE:** +1. **State the problem clearly**: "I see you're experiencing X. Let me verify this is happening." +2. **Propose your solution**: "I plan to fix this by doing Y. This will affect files Z." +3. **Wait for approval**: "Does this approach sound correct to you?" +4. **Execute minimally**: Make the smallest possible change first + +### **CHANGE EXECUTION ORDER:** +1. **Verify current state** (run diagnostic commands) +2. **Create backup point** (`git add . && git commit -m "Before changes"`) +3. **Make minimal change** (one logical unit at a time) +4. **Test immediately** (`dotnet build` or relevant verification) +5. **Commit working state** before next change + +### **ERROR RECOVERY PROTOCOL:** +```bash +# If something breaks: +git status # See what changed +git diff # Review changes +git checkout . # Revert if needed +git reset --hard HEAD~1 # Go back to last commit if needed +``` + +--- + +## 🎯 COMMUNICATION REQUIREMENTS + +### **ALWAYS COMMUNICATE:** +- **What you found**: "The build is currently failing because..." +- **What you propose**: "I recommend we fix this by..." +- **Why this approach**: "This is the safest because..." +- **What could go wrong**: "The main risk is..." + +### **ALWAYS ASK WHEN:** +- The user's request affects multiple files +- You're unsure about the intended outcome +- Multiple solutions are possible +- The change could impact other functionality + +### **SPECIFIC PHRASES TO USE:** +- ✅ "Let me first check the current state..." +- ✅ "I'd like to verify this is the issue by..." 
+- ✅ "Before making changes, can you confirm..." +- ✅ "This change will affect X, Y, Z. Should I proceed?" + +--- + +## 🔧 SAFE DEVELOPMENT PATTERNS + +### **INCREMENTAL CHANGES:** +1. Fix one compile error at a time +2. Test after each logical change +3. Commit working states frequently +4. Never batch unrelated changes + +### **DEBUGGING APPROACH:** +1. **Reproduce the issue** first +2. **Isolate the cause** before fixing +3. **Fix root cause**, not symptoms +4. **Verify fix works** before moving on + +### **PACKAGE MANAGEMENT:** +- Use `dotnet list package --outdated` before updating +- Update packages one at a time +- Test after each package update +- Understand why a package is being used before removing + +--- + +## 🚦 DECISION TREE + +**When user reports an issue:** +``` +1. Can I verify this issue exists? + └─ No → Ask for more details/reproduction steps + └─ Yes → Continue + +2. Do I understand the root cause? + └─ No → Investigate further, ask questions + └─ Yes → Continue + +3. Does my fix require major changes? + └─ Yes → Get explicit approval first + └─ No → Proceed with minimal fix + +4. Will this affect other functionality? + └─ Maybe → Run full test suite first + └─ No → Make targeted change +``` + +--- + +## 🎖️ SUCCESS METRICS + +**I've followed this rule correctly when:** +- ✅ User explicitly approved any major changes +- ✅ I understood the problem before proposing solutions +- ✅ Changes were minimal and targeted +- ✅ Each change was tested immediately +- ✅ I could explain WHY I made each change + +**⚠️ WARNING SIGNS I'm violating this rule:** +- Making assumptions about what needs fixing +- Changing multiple unrelated things at once +- Not explaining my reasoning +- Proceeding without user confirmation +- Downgrading versions "to fix compatibility" + +--- + +## 🏆 GOLDEN RULE + +> **"FIRST, DO NO HARM. SECOND, UNDERSTAND BEFORE ACTING. 
THIRD, MINIMAL CHANGES WITH MAXIMUM VERIFICATION."** + +**When in doubt, the answer is ALWAYS to ask the user for clarification.** \ No newline at end of file diff --git a/.cursor/rules/cursor_rules.mdc b/.cursor/rules/cursor_rules.mdc new file mode 100644 index 00000000..64884016 --- /dev/null +++ b/.cursor/rules/cursor_rules.mdc @@ -0,0 +1,69 @@ +--- +alwaysApply: false +description: "Specification for creating and maintaining Cursor rules with proper frontmatter" +globs: ["**/*.mdc"] +--- + +# Cursor Rules Specification + +## Required Rule Structure +All Cursor rules must follow this standardized format: + +```markdown +--- +alwaysApply: boolean +description: "Clear, one-line description of what the rule enforces" +globs: ["path/to/files/*.ext", "other/path/**/*"] +--- + +# Rule Title + +- **Main Points in Bold** + - Sub-points with details + - Examples and explanations +``` + +## Frontmatter Configuration + +### alwaysApply Values +- `true`: Rule applies to all files matching globs (use sparingly) +- `false`: Rule applies only when relevant context is present + +### description Guidelines +- Keep under 100 characters +- Use clear, actionable language +- Describe what the rule enforces, not why + +### globs Patterns +- Use specific patterns over broad ones +- Avoid `**/*` unless absolutely necessary +- Examples: + - `["**/*.cs", "**/*.ts"]` for code files + - `["**/*.mdc"]` for rule files + - `["**/test/**/*"]` for test files + +## File Organization +- **Core Rules**: Fundamental patterns and structures +- **Domain Rules**: Technology-specific rules (e.g., authentication/, hotchocolate/) +- **Workflow Rules**: Development process and best practices + +## Quality Standards +- Rules must be actionable and specific +- Examples should come from actual code +- Use ✅ DO and ❌ DON'T patterns +- Reference existing code when possible +- Avoid duplicate content across rules + +## Code Examples Format +```typescript +// ✅ DO: Show good examples +const goodExample = true; + 
+// ❌ DON'T: Show anti-patterns +const badExample = false; +``` + +## Cross-Referencing +- Use `[filename](mdc:path/to/file)` for rule references +- Link related rules together +- Document migration paths for deprecated patterns \ No newline at end of file diff --git a/.cursor/rules/dev_workflow.mdc b/.cursor/rules/dev_workflow.mdc new file mode 100644 index 00000000..feddd1e8 --- /dev/null +++ b/.cursor/rules/dev_workflow.mdc @@ -0,0 +1,219 @@ +--- +alwaysApply: false +description: "DEPRECATED: Use consolidated_workflow.mdc instead" +globs: ["**/*"] +--- +# Task Master Development Workflow + +This guide outlines the typical process for using Task Master to manage software development projects. + +## Primary Interaction: MCP Server vs. CLI + +Task Master offers two primary ways to interact: + +1. **MCP Server (Recommended for Integrated Tools)**: + - For AI agents and integrated development environments (like Cursor), interacting via the **MCP server is the preferred method**. + - The MCP server exposes Task Master functionality through a set of tools (e.g., `get_tasks`, `add_subtask`). + - This method offers better performance, structured data exchange, and richer error handling compared to CLI parsing. + - Refer to [`mcp.mdc`](mdc:.cursor/rules/mcp.mdc) for details on the MCP architecture and available tools. + - A comprehensive list and description of MCP tools and their corresponding CLI commands can be found in [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc). + - **Restart the MCP server** if core logic in `scripts/modules` or MCP tool/direct function definitions change. + +2. **`task-master` CLI (For Users & Fallback)**: + - The global `task-master` command provides a user-friendly interface for direct terminal interaction. + - It can also serve as a fallback if the MCP server is inaccessible or a specific function isn't exposed via MCP. + - Install globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`. 
+ - The CLI commands often mirror the MCP tools (e.g., `task-master list` corresponds to `get_tasks`). + - Refer to [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc) for a detailed command reference. + +## Standard Development Workflow Process + +- Start new projects by running `initialize_project` tool / `task-master init` or `parse_prd` / `task-master parse-prd --input=''` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to generate initial tasks.json +- Begin coding sessions with `get_tasks` / `task-master list` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to see current tasks, status, and IDs +- Determine the next task to work on using `next_task` / `task-master next` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Analyze task complexity with `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before breaking down tasks +- Review complexity report using `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Select tasks based on dependencies (all marked 'done'), priority level, and ID order +- Clarify tasks by checking task files in tasks/ directory or asking for user input +- View specific task details using `get_task` / `task-master show ` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to understand implementation requirements +- Break down complex tasks using `expand_task` / `task-master expand --id= --force --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) with appropriate flags like `--force` (to replace existing subtasks) and `--research`. 
+- Clear existing subtasks if needed using `clear_subtasks` / `task-master clear-subtasks --id=<id>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) before regenerating +- Implement code following task details, dependencies, and project standards +- Verify tasks according to test strategies before marking as complete (See [`tests.mdc`](mdc:.cursor/rules/tests.mdc)) +- Mark completed tasks with `set_task_status` / `task-master set-status --id=<id> --status=done` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Update dependent tasks when implementation differs from original plan using `update` / `task-master update --from=<id> --prompt="..."` or `update_task` / `task-master update-task --id=<id> --prompt="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) +- Add new tasks discovered during implementation using `add_task` / `task-master add-task --prompt="..." --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Add new subtasks as needed using `add_subtask` / `task-master add-subtask --parent=<id> --title="..."` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)). +- Append notes or details to subtasks using `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='Add implementation notes here...\nMore details...'` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)).
+- Generate task files with `generate` / `task-master generate` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) after updating tasks.json +- Maintain valid dependency structure with `add_dependency`/`remove_dependency` tools or `task-master add-dependency`/`remove-dependency` commands, `validate_dependencies` / `task-master validate-dependencies`, and `fix_dependencies` / `task-master fix-dependencies` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) when needed +- Respect dependency chains and task priorities when selecting work +- Report progress regularly using `get_tasks` / `task-master list` + +## Task Complexity Analysis + +- Run `analyze_project_complexity` / `task-master analyze-complexity --research` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for comprehensive analysis +- Review complexity report via `complexity_report` / `task-master complexity-report` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) for a formatted, readable version. +- Focus on tasks with highest complexity scores (8-10) for detailed breakdown +- Use analysis results to determine appropriate subtask allocation +- Note that reports are automatically used by the `expand_task` tool/command + +## Task Breakdown Process + +- Use `expand_task` / `task-master expand --id=<id>`. It automatically uses the complexity report if found, otherwise generates a default number of subtasks. +- Use `--num=<number>` to specify an explicit number of subtasks, overriding defaults or complexity report recommendations. +- Add `--research` flag to leverage Perplexity AI for research-backed expansion. +- Add `--force` flag to clear existing subtasks before generating new ones (default is to append). +- Use `--prompt="<context>"` to provide additional context when needed. +- Review and adjust generated subtasks as necessary. +- Use `expand_all` tool or `task-master expand --all` to expand multiple pending tasks at once, respecting flags like `--force` and `--research`.
+- If subtasks need complete replacement (regardless of the `--force` flag on `expand`), clear them first with `clear_subtasks` / `task-master clear-subtasks --id=<id>`. + +## Implementation Drift Handling + +- When implementation differs significantly from planned approach +- When future tasks need modification due to current implementation choices +- When new dependencies or requirements emerge +- Use `update` / `task-master update --from=<id> --prompt='\nUpdate context...' --research` to update multiple future tasks. +- Use `update_task` / `task-master update-task --id=<id> --prompt='\nUpdate context...' --research` to update a single specific task. + +## Task Status Management + +- Use 'pending' for tasks ready to be worked on +- Use 'done' for completed and verified tasks +- Use 'deferred' for postponed tasks +- Add custom status values as needed for project-specific workflows + +## Task Structure Fields + +- **id**: Unique identifier for the task (Example: `1`, `1.1`) +- **title**: Brief, descriptive title (Example: `"Initialize Repo"`) +- **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`) +- **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`) +- **dependencies**: IDs of prerequisite tasks (Example: `[1, 2.1]`) + - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) + - This helps quickly identify which prerequisite tasks are blocking work +- **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`) +- **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`) +- **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`) +- **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`) +- Refer to task structure details (previously linked to
`tasks.mdc`). + +## Configuration Management (Updated) + +Taskmaster configuration is managed through two main mechanisms: + +1. **`.taskmasterconfig` File (Primary):** + * Located in the project root directory. + * Stores most configuration settings: AI model selections (main, research, fallback), parameters (max tokens, temperature), logging level, default subtasks/priority, project name, etc. + * **Managed via `task-master models --setup` command.** Do not edit manually unless you know what you are doing. + * **View/Set specific models via `task-master models` command or `models` MCP tool.** + * Created automatically when you run `task-master models --setup` for the first time. + +2. **Environment Variables (`.env` / `mcp.json`):** + * Used **only** for sensitive API keys and specific endpoint URLs. + * Place API keys (one per provider) in a `.env` file in the project root for CLI usage. + * For MCP/Cursor integration, configure these keys in the `env` section of `.cursor/mcp.json`. + * Available keys/variables: See `assets/env.example` or the Configuration section in the command reference (previously linked to `taskmaster.mdc`). + +**Important:** Non-API key settings (like model selections, `MAX_TOKENS`, `LOG_LEVEL`) are **no longer configured via environment variables**. Use the `task-master models` command (or `--setup` for interactive configuration) or the `models` MCP tool. +**If AI commands FAIL in MCP** verify that the API key for the selected provider is present in the `env` section of `.cursor/mcp.json`. +**If AI commands FAIL in CLI** verify that the API key for the selected provider is present in the `.env` file in the root of the project. + +## Determining the Next Task + +- Run `next_task` / `task-master next` to show the next task to work on. 
+- The command identifies tasks with all dependencies satisfied +- Tasks are prioritized by priority level, dependency count, and ID +- The command shows comprehensive task information including: + - Basic task details and description + - Implementation details + - Subtasks (if they exist) + - Contextual suggested actions +- Recommended before starting any new development work +- Respects your project's dependency structure +- Ensures tasks are completed in the appropriate sequence +- Provides ready-to-use commands for common task actions + +## Viewing Specific Task Details + +- Run `get_task` / `task-master show <id>` to view a specific task. +- Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1) +- Displays comprehensive information similar to the next command, but for a specific task +- For parent tasks, shows all subtasks and their current status +- For subtasks, shows parent task information and relationship +- Provides contextual suggested actions appropriate for the specific task +- Useful for examining task details before implementation or checking status + +## Managing Task Dependencies + +- Use `add_dependency` / `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency. +- Use `remove_dependency` / `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency. +- The system prevents circular dependencies and duplicate dependency entries +- Dependencies are checked for existence before being added or removed +- Task files are automatically regenerated after dependency changes +- Dependencies are visualized with status indicators in task listings and files + +## Iterative Subtask Implementation + +Once a task has been broken down into subtasks using `expand_task` or similar methods, follow this iterative process for implementation: + +1.
**Understand the Goal (Preparation):** + * Use `get_task` / `task-master show <subtaskId>` (see [`taskmaster.mdc`](mdc:.cursor/rules/taskmaster.mdc)) to thoroughly understand the specific goals and requirements of the subtask. + +2. **Initial Exploration & Planning (Iteration 1):** + * This is the first attempt at creating a concrete implementation plan. + * Explore the codebase to identify the precise files, functions, and even specific lines of code that will need modification. + * Determine the intended code changes (diffs) and their locations. + * Gather *all* relevant details from this exploration phase. + +3. **Log the Plan:** + * Run `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='<detailed plan>'`. + * Provide the *complete and detailed* findings from the exploration phase in the prompt. Include file paths, line numbers, proposed diffs, reasoning, and any potential challenges identified. Do not omit details. The goal is to create a rich, timestamped log within the subtask's `details`. + +4. **Verify the Plan:** + * Run `get_task` / `task-master show <subtaskId>` again to confirm that the detailed implementation plan has been successfully appended to the subtask's details. + +5. **Begin Implementation:** + * Set the subtask status using `set_task_status` / `task-master set-status --id=<subtaskId> --status=in-progress`. + * Start coding based on the logged plan. + +6. **Refine and Log Progress (Iteration 2+):** + * As implementation progresses, you will encounter challenges, discover nuances, or confirm successful approaches. + * **Before appending new information**: Briefly review the *existing* details logged in the subtask (using `get_task` or recalling from context) to ensure the update adds fresh insights and avoids redundancy. + * **Regularly** use `update_subtask` / `task-master update-subtask --id=<subtaskId> --prompt='\n- What worked...\n- What didn't work...'` to append new findings. + * **Crucially, log:** + * What worked ("fundamental truths" discovered).
+ * What didn't work and why (to avoid repeating mistakes). + * Specific code snippets or configurations that were successful. + * Decisions made, especially if confirmed with user input. + * Any deviations from the initial plan and the reasoning. + * The objective is to continuously enrich the subtask's details, creating a log of the implementation journey that helps the AI (and human developers) learn, adapt, and avoid repeating errors. + +7. **Review & Update Rules (Post-Implementation):** + * Once the implementation for the subtask is functionally complete, review all code changes and the relevant chat history. + * Identify any new or modified code patterns, conventions, or best practices established during the implementation. + * Create new or update existing rules following internal guidelines (previously linked to `cursor_rules.mdc` and `self_improve.mdc`). + +8. **Mark Task Complete:** + * After verifying the implementation and updating any necessary rules, mark the subtask as completed: `set_task_status` / `task-master set-status --id=<subtaskId> --status=done`. + +9. **Commit Changes (If using Git):** + * Stage the relevant code changes and any updated/new rule files (`git add .`). + * Craft a comprehensive Git commit message summarizing the work done for the subtask, including both code implementation and any rule adjustments. + * Execute the commit command directly in the terminal (e.g., `git commit -m 'feat(module): Implement feature X for subtask <subtaskId>\n\n- Details about changes...\n- Updated rule Y for pattern Z'`). + * Consider if a Changeset is needed according to internal versioning guidelines (previously linked to `changeset.mdc`). If so, run `npm run changeset`, stage the generated file, and amend the commit or create a new one. + +10. **Proceed to Next Subtask:** + * Identify the next subtask (e.g., using `next_task` / `task-master next`).
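One practical note on step 9: `\n` inside a single-quoted `-m '...'` argument reaches Git literally rather than as a newline. A portable way to produce a multi-paragraph commit message is to pass one `-m` per paragraph. A minimal self-contained sketch (the repository, file name, and subtask ID are hypothetical):

```shell
# Create a throwaway repo so the example is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "change" > feature.txt
git add feature.txt

# Each -m becomes its own paragraph; no escape sequences needed.
git commit -q -m "feat(module): Implement feature X for subtask 1.2" \
              -m "- Details about changes
- Updated rule Y for pattern Z"

subject=$(git log -1 --pretty=%s)
echo "$subject"
```

`git log -1 --pretty=%b` then shows the second `-m` as the message body.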
+ +## Code Analysis & Refactoring Techniques + +- **Top-Level Function Search**: + - Useful for understanding module structure or planning refactors. + - Use grep/ripgrep to find exported functions/constants: + `rg "export (async function|function|const) \w+"` or similar patterns. + - Can help compare functions between files during migrations or identify potential naming conflicts. + +--- +*This workflow provides a general guideline. Adapt it based on your specific project needs and team practices.* \ No newline at end of file diff --git a/.cursor/rules/development_workflow.mdc b/.cursor/rules/development_workflow.mdc new file mode 100644 index 00000000..dc560868 --- /dev/null +++ b/.cursor/rules/development_workflow.mdc @@ -0,0 +1,11 @@ +--- +alwaysApply: false +description: "DEPRECATED: Use consolidated_workflow.mdc instead" +globs: ["**/*"] +--- + +# DEPRECATED: Development Workflow Guidelines + +**⚠️ This rule has been consolidated into [consolidated_workflow.mdc](mdc:.cursor/rules/consolidated_workflow.mdc)** + +This file is kept for reference but should not be used. All workflow guidelines have been moved to the consolidated workflow rule for better organization and reduced duplication. 
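The top-level function search above can also be reproduced with plain `grep -E` when ripgrep is unavailable. A small self-contained sketch (the sample file contents are hypothetical):

```shell
# Build a scratch file with a mix of exported and internal symbols.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
export async function fetchTasks() {}
export function parseTask() {}
export const TASK_FILE = "tasks.json";
const internalHelper = () => {};
EOF

# Same pattern as the rg example; only exported top-level symbols match.
count=$(grep -cE "export (async function|function|const) \w+" "$tmpfile")
echo "$count exported top-level symbols"   # prints "3 exported top-level symbols"
rm -f "$tmpfile"
```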
diff --git a/.cursor/rules/hotchocolate/cost_analysis.mdc b/.cursor/rules/hotchocolate/cost_analysis.mdc new file mode 100644 index 00000000..bfb05aeb --- /dev/null +++ b/.cursor/rules/hotchocolate/cost_analysis.mdc @@ -0,0 +1,28 @@ +--- +alwaysApply: false +description: "HotChocolate GraphQL cost analysis configuration guidelines" +globs: ["**/*GraphQL*", "**/Program.cs", "**/Startup.cs", "**/*.cs"] +--- + +# HotChocolate Cost Analysis + +- **Configure cost analysis to prevent expensive queries** + - Protects your API from abuse and DoS attacks + - Example: + ```csharp + builder.Services + .AddGraphQLServer() + .ModifyCostOptions(options => + { + options.MaxFieldCost = 1000; + options.MaxTypeCost = 1000; + options.EnforceCostLimits = true; + options.ApplyCostDefaults = true; + options.DefaultResolverCost = 10.0; + }); + ``` + - Source: Context7/Hot Chocolate docs diff --git a/.cursor/rules/hotchocolate/hotchocolate_best_practices.mdc b/.cursor/rules/hotchocolate/hotchocolate_best_practices.mdc new file mode 100644 index 00000000..7d04b3b1 --- /dev/null +++ b/.cursor/rules/hotchocolate/hotchocolate_best_practices.mdc @@ -0,0 +1,150 @@ +--- +alwaysApply: false +description: "GraphQL schema and resolver best practices for HotChocolate" +globs: ["**/*GraphQL*", "**/Schema/**/*", "**/Resolvers/**/*", "**/*.cs"] +--- + +# HotChocolate GraphQL Best Practices + +## Schema Design + +### Schema Naming +- **Store schema names as constants** + - Prevents typos and ensures consistency across the codebase + - Example: + ```csharp + public static class WellKnownSchemaNames + { + public const string Accounts = "accounts"; + public const string Inventory = "inventory"; + public const string UserManagement = "user-management"; + public const string Analytics = "analytics"; + } + ``` + +### Naming Conventions +- Use kebab-case for schema names (e.g., "user-management") +- Keep names descriptive and domain-specific +- Avoid abbreviations
unless they are widely understood +- Use consistent pluralization rules + +## Input/Output Types +- **Use dedicated input/output types for mutations** + - Separate input validation from business logic + - Example: + ```csharp + public record CreateUserInput(string Email, string Name, string Role); + public record CreateUserPayload(User User, bool Success, string Message); + ``` + +## Pagination, Filtering, and Sorting +- **Implement consistent pagination patterns** + - Use cursor-based pagination for large datasets + - Example: + ```csharp + public record PaginationInput(int First, string? After); + public record PaginationPayload<T>(IEnumerable<T> Items, string? NextCursor, bool HasNextPage); + ``` + +## Database Context Registration +- **Register DbContext with proper lifetime** + - Use scoped lifetime for GraphQL operations + - Example: + ```csharp + services.AddDbContext<AppDbContext>(options => + options.UseNpgsql(connectionString), + ServiceLifetime.Scoped); + ``` + +## Global State Attributes +- **Use global state for shared context** + - Pass user context, tenant info, or request metadata + - Example: + ```csharp + [GlobalState("User")] User currentUser + [GlobalState("Tenant")] string tenantId + ``` + +## Middleware Configuration +- **Order middleware correctly** + - Authentication → Authorization → Business Logic + - Example: + ```csharp + services.AddGraphQLServer() + .AddAuthorization() + .AddFiltering() + .AddSorting() + .AddProjections(); + ``` + +## Data Loader Optimization +- **Use DataLoaders for N+1 query prevention** + - Batch database queries efficiently + - Example: + ```csharp + public class UserDataLoader : BatchDataLoader<Guid, User> + { + private readonly AppDbContext _context; + + public UserDataLoader(AppDbContext context, IBatchScheduler batchScheduler) + : base(batchScheduler) + { + _context = context; + } + + protected override async Task<IReadOnlyDictionary<Guid, User>> LoadBatchAsync( + IReadOnlyList<Guid> keys, CancellationToken cancellationToken) + { + var users = await _context.Users + .Where(u => keys.Contains(u.Id)) + .ToListAsync(cancellationToken); + return users.ToDictionary(u => u.Id); + } + } + ``` + +## Cost Analysis +- **Implement query cost analysis** + - Prevent
expensive queries and abuse + - Example (matches the options documented in [cost_analysis.mdc](mdc:.cursor/rules/hotchocolate/cost_analysis.mdc)): + ```csharp + services.AddGraphQLServer() + .ModifyCostOptions(options => + { + options.MaxFieldCost = 1000; + options.EnforceCostLimits = true; + }); + ``` + +## Performance Optimization +- **Use projections to limit data transfer** + - Only fetch required fields + - Example: + ```csharp + public class UserType : ObjectType<User> + { + protected override void Configure(IObjectTypeDescriptor<User> descriptor) + { + descriptor.Field(u => u.Id).Type<NonNullType<IdType>>(); + descriptor.Field(u => u.Email).Type<NonNullType<StringType>>(); + descriptor.Field(u => u.Name).Type<StringType>(); + } + } + ``` + +## Error Handling +- **Implement consistent error handling** + - Use custom error factories for domain-specific errors + - Example: + ```csharp + public static class UserErrors + { + public static GraphQLException NotFound(string userId) => + new(ErrorBuilder.New() + .SetMessage($"User with ID {userId} not found") + .SetCode("USER_NOT_FOUND") + .SetExtension("userId", userId) + .Build()); + } + ``` + +## References +- Source: Context7/Hot Chocolate documentation +- Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for rule structure +- Follow [self_improve.mdc](mdc:.cursor/rules/self_improve.mdc) for continuous improvement diff --git a/.cursor/rules/hotchocolate/middleware_ordering.mdc b/.cursor/rules/hotchocolate/middleware_ordering.mdc new file mode 100644 index 00000000..5ffae030 --- /dev/null +++ b/.cursor/rules/hotchocolate/middleware_ordering.mdc @@ -0,0 +1,27 @@ +--- +alwaysApply: false +description: "HotChocolate GraphQL middleware ordering best practices" +globs: ["**/*GraphQL*", "**/Resolvers/**/*", "**/*.cs"] +--- + +# HotChocolate Middleware Ordering + +- **Order middleware as: UsePaging → UseProjection → UseFiltering → UseSorting** + - Ensures correct data processing and optimal performance + - Example: + ```csharp + [UsePaging] + [UseProjection] + [UseFiltering] + [UseSorting] + public IQueryable<User> GetUsers([ScopedService] AppDbContext db) => db.Users; +
``` + - Descriptor API: + ```csharp + descriptor.UsePaging().UseProjection().UseFiltering().UseSorting(); + ``` + - Source: Context7/Hot Chocolate docs diff --git a/.cursor/rules/pattern_detector.ps1 b/.cursor/rules/pattern_detector.ps1 new file mode 100644 index 00000000..1eec5e0d --- /dev/null +++ b/.cursor/rules/pattern_detector.ps1 @@ -0,0 +1,252 @@ +# Pattern Detector for Cursor Rules Auto-Improvement +# This script analyzes the codebase for recurring patterns and suggests rule improvements + +param( + [string]$ProjectRoot = ".", + [int]$PatternThreshold = 3, + [string]$OutputFile = "pattern_analysis.json" +) + +# Initialize pattern storage +$patterns = @{} +$filePatterns = @{} + +# Common pattern types to detect +$patternTypes = @{ + "MassTransit" = @( + "AddMassTransit", + "UsingRabbitMq", + "ConfigureEndpoints" + ) + "HealthChecks" = @( + "AddHealthChecks", + "AddDbContextCheck", + "MapHealthChecks" + ) + "Authentication" = @( + "AddAuthentication", + "AddAuthorization", + "UseAuthentication" + ) + "DependencyInjection" = @( + "AddScoped", + "AddSingleton", + "AddTransient", + "services\.Add" + ) + "ErrorHandling" = @( + "try\s*\{", + "catch\s*\(Exception", + "LogError", + "StatusCode\(500" + ) + "Logging" = @( + "_logger\.Log", + "LogInformation", + "LogWarning", + "LogError" + ) + "Database" = @( + "UseNpgsql", + "UseSqlServer", + "AddDbContext", + "DbContext" + ) + "GraphQL" = @( + "AddGraphQLServer", + "AddQueryType", + "AddMutationType", + "AddType" + ) +} + +# Function to detect patterns in a file +function Detect-PatternsInFile { + param([string]$FilePath) + + if (-not (Test-Path $FilePath)) { return } + + try { + $content = Get-Content $FilePath -Raw -ErrorAction SilentlyContinue + if (-not $content) { return } + + $filePatterns[$FilePath] = @{} + + foreach ($patternType in $patternTypes.Keys) { + # Use a distinct name so the script-level $patterns hashtable is not shadowed + $patternList = $patternTypes[$patternType] + $filePatterns[$FilePath][$patternType] = @{} + + foreach ($pattern in $patternList)
{ + # Avoid assigning to the automatic $Matches variable + $patternMatches = [regex]::Matches($content, $pattern, [System.Text.RegularExpressions.RegexOptions]::IgnoreCase) + if ($patternMatches.Count -gt 0) { + $filePatterns[$FilePath][$patternType][$pattern] = $patternMatches.Count + + # Store global pattern count in the script-scoped hashtable + if (-not $script:patterns.ContainsKey($patternType)) { + $script:patterns[$patternType] = @{} + } + if (-not $script:patterns[$patternType].ContainsKey($pattern)) { + $script:patterns[$patternType][$pattern] = @{} + } + if (-not $script:patterns[$patternType][$pattern].ContainsKey($FilePath)) { + $script:patterns[$patternType][$pattern][$FilePath] = 0 + } + $script:patterns[$patternType][$pattern][$FilePath] += $patternMatches.Count + } + } + } + } + catch { + Write-Warning "Error processing file: $FilePath - $($_.Exception.Message)" + } +} + +# Function to analyze patterns and suggest improvements +function Analyze-Patterns { + $suggestions = @() + + foreach ($patternType in $patterns.Keys) { + foreach ($pattern in $patterns[$patternType].Keys) { + $fileCount = $patterns[$patternType][$pattern].Keys.Count + $totalOccurrences = ($patterns[$patternType][$pattern].Values | Measure-Object -Sum).Sum + + if ($fileCount -ge $PatternThreshold) { + $suggestion = @{ + Type = "Pattern Detected" + PatternType = $patternType + Pattern = $pattern + FileCount = $fileCount + TotalOccurrences = $totalOccurrences + Files = $patterns[$patternType][$pattern].Keys + Recommendation = "Consider creating/updating rule for $patternType patterns" + Priority = if ($fileCount -ge 5) { "High" } elseif ($fileCount -ge 3) { "Medium" } else { "Low" } + } + $suggestions += $suggestion + } + } + } + + return $suggestions +} + +# Function to generate rule suggestions +function Generate-RuleSuggestions { + param([array]$Suggestions) + + $ruleSuggestions = @() + + foreach ($suggestion in $Suggestions) { + $ruleSuggestion = @{ + RuleName = "$($suggestion.PatternType)_best_practices.mdc" + Description = "Best practices for $($suggestion.PatternType) patterns" + Trigger = "Pattern detected in $($suggestion.FileCount)
files" + Examples = $suggestion.Files | Select-Object -First 3 + Content = @" +--- +description: Best practices for $($suggestion.PatternType) patterns based on detected usage across $($suggestion.FileCount) files. +globs: **/*.cs, **/*.ps1, **/*.yml, **/*.yaml, **/*.json +alwaysApply: false +--- + +# $($suggestion.PatternType) Best Practices + +## Pattern Detection +This rule was automatically generated based on the detection of `$($suggestion.Pattern)` pattern in $($suggestion.FileCount) files: +$($suggestion.Files -join ", ") + +## Standardized Implementation +- **Use consistent $($suggestion.PatternType) patterns across all services** +- **Follow established conventions from existing implementations** +- **Ensure proper error handling and validation** + +## Examples +Based on detected usage in: +$($suggestion.Files | ForEach-Object { "- $_" }) + +## References +- Follow [cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for rule structure +- Follow [auto_improvement.mdc](mdc:.cursor/rules/auto_improvement.mdc) for pattern detection +"@ + } + $ruleSuggestions += $ruleSuggestion + } + + return $ruleSuggestions +} + +# Main execution +Write-Host "Pattern Detector for Cursor Rules Auto-Improvement" -ForegroundColor Green +Write-Host "=====================================================" -ForegroundColor Green +Write-Host "" + +# Get all relevant files +$fileExtensions = @("*.cs", "*.ps1", "*.yml", "*.yaml", "*.json", "*.md") +$files = @() + +foreach ($ext in $fileExtensions) { + $files += Get-ChildItem -Path $ProjectRoot -Recurse -Filter $ext -ErrorAction SilentlyContinue | + Where-Object { $_.FullName -notlike "*\bin\*" -and $_.FullName -notlike "*\obj\*" -and $_.FullName -notlike "*\node_modules\*" } +} + +Write-Host "Found $($files.Count) files to analyze..." 
-ForegroundColor Yellow + +# Process each file +$processedCount = 0 +foreach ($file in $files) { + $processedCount++ + if ($processedCount % 50 -eq 0) { + Write-Progress -Activity "Analyzing files" -Status "Processed $processedCount of $($files.Count)" -PercentComplete (($processedCount / $files.Count) * 100) + } + + Detect-PatternsInFile -FilePath $file.FullName +} + +Write-Progress -Activity "Analyzing files" -Completed + +# Analyze patterns +Write-Host "Analyzing patterns..." -ForegroundColor Yellow +$suggestions = Analyze-Patterns + +# Generate rule suggestions +Write-Host "Generating rule suggestions..." -ForegroundColor Yellow +$ruleSuggestions = Generate-RuleSuggestions -Suggestions $suggestions + +# Create output +$output = @{ + AnalysisDate = Get-Date -Format "yyyy-MM-dd HH:mm:ss" + ProjectRoot = (Resolve-Path $ProjectRoot).Path + PatternThreshold = $PatternThreshold + TotalFilesAnalyzed = $files.Count + PatternsDetected = $patterns + Suggestions = $suggestions + RuleSuggestions = $ruleSuggestions +} + +# Save to file +$output | ConvertTo-Json -Depth 10 | Out-File -FilePath $OutputFile -Encoding UTF8 + +# Display summary +Write-Host "" +Write-Host "Analysis Complete!" 
-ForegroundColor Green +Write-Host "===================" -ForegroundColor Green +Write-Host "Files analyzed: $($files.Count)" -ForegroundColor White +Write-Host "Patterns detected: $($suggestions.Count)" -ForegroundColor White +Write-Host "Rule suggestions: $($ruleSuggestions.Count)" -ForegroundColor White +Write-Host "Results saved to: $OutputFile" -ForegroundColor White + +# Display high-priority suggestions +$highPrioritySuggestions = $suggestions | Where-Object { $_.Priority -eq "High" } +if ($highPrioritySuggestions.Count -gt 0) { + Write-Host "" + Write-Host "High Priority Suggestions:" -ForegroundColor Red + foreach ($suggestion in $highPrioritySuggestions) { + Write-Host " • $($suggestion.PatternType): $($suggestion.Pattern) (found in $($suggestion.FileCount) files)" -ForegroundColor Red + } +} + +Write-Host "" +Write-Host "Next Steps:" -ForegroundColor Cyan +Write-Host "1. Review the generated suggestions in $OutputFile" -ForegroundColor White +Write-Host "2. Implement high-priority rule improvements" -ForegroundColor White +Write-Host "3. Run this script regularly to maintain rule quality" -ForegroundColor White +Write-Host "4. 
Use the auto-improvement system in [auto_improvement.mdc](mdc:.cursor/rules/auto_improvement.mdc)" -ForegroundColor White diff --git a/.cursor/rules/self_improve.mdc b/.cursor/rules/self_improve.mdc new file mode 100644 index 00000000..626a92d3 --- /dev/null +++ b/.cursor/rules/self_improve.mdc @@ -0,0 +1,52 @@ +--- +alwaysApply: false +description: "Guidelines for recognizing patterns and self-improving development rules" +globs: ["**/*.mdc"] +--- + +# Rule Self-Improvement Guidelines + +## Pattern Recognition Triggers +- New code patterns not covered by existing rules (3+ occurrences) +- Repeated similar implementations across files +- Common error patterns that could be prevented +- New libraries or tools being used consistently +- Emerging best practices in the codebase + +## Analysis Process +- Compare new code with existing rules +- Identify patterns that should be standardized +- Look for references to external documentation +- Check for consistent error handling patterns +- Monitor test patterns and coverage + +## Rule Creation Criteria +**Add New Rules When:** +- A new technology/pattern is used in 3+ files +- Common bugs could be prevented by a rule +- Code reviews repeatedly mention the same feedback +- New security or performance patterns emerge + +**Modify Existing Rules When:** +- Better examples exist in the codebase +- Additional edge cases are discovered +- Related rules have been updated +- Implementation details have changed + +## Quality Assurance +- Rules must be actionable and specific +- Examples should come from actual code +- References must be up to date +- Patterns should be consistently enforced + +## Rule Maintenance +- Monitor code review comments +- Track common development questions +- Update rules after major refactors +- Add links to relevant documentation +- Cross-reference related rules +- Mark outdated patterns as deprecated +- Document migration paths for old patterns + +## Reference +Follow 
[cursor_rules.mdc](mdc:.cursor/rules/cursor_rules.mdc) for proper rule formatting and structure. diff --git a/.cursor/rules/taskmaster.mdc b/.cursor/rules/taskmaster.mdc new file mode 100644 index 00000000..37ad2e29 --- /dev/null +++ b/.cursor/rules/taskmaster.mdc @@ -0,0 +1,382 @@ +--- +alwaysApply: true +description: "Comprehensive reference for Taskmaster MCP tools and CLI commands" +globs: ["**/*"] +--- +# Taskmaster Tool & Command Reference + +This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Cursor, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback. + +**Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback. + +**Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`. + +--- + +## Initialization & Setup + +### 1. Initialize Project (`init`) + +* **MCP Tool:** `initialize_project` +* **CLI Command:** `task-master init [options]` +* **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.` +* **Key CLI Options:** + * `--name <name>`: `Set the name for your project in Taskmaster's configuration.` + * `--description <text>`: `Provide a brief description for your project.` + * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.` + * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.` +* **Usage:** Run this once at the beginning of a new project.
+* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.` +* **Key MCP Parameters/Options:** + * `projectName`: `Set the name for your project.` (CLI: `--name <name>`) + * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`) + * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`) + * `authorName`: `Author name.` (CLI: `--author <author_name>`) + * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`) + * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`) + * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`) +* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server. +* **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then. The next step after initializing should be to create a PRD using the example PRD in `scripts/example_prd.txt`. + +### 2. Parse PRD (`parse_prd`) + +* **MCP Tool:** `parse_prd` +* **CLI Command:** `task-master parse-prd [file] [options]` +* **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.` +* **Key Parameters/Options:** + * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`) + * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file.
Defaults to 'tasks/tasks.json'.` (CLI: `-o, --output <file>`) + * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`) + * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`) +* **Usage:** Useful for bootstrapping a project from an existing requirements document. +* **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `scripts/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`. + +--- + +## AI Model Configuration + +### 2. Manage Models (`models`) +* **MCP Tool:** `models` +* **CLI Command:** `task-master models [options]` +* **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback).
Allows setting custom model IDs for Ollama and OpenRouter.` +* **Key MCP Parameters/Options:** + * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`) + * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`) + * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`) + * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`) + * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`) + * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically) + * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically) +* **Key CLI Options:** + * `--set-main <model_id>`: `Set the primary model.` + * `--set-research <model_id>`: `Set the research model.` + * `--set-fallback <model_id>`: `Set the fallback model.` + * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).` + * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). Validates against OpenRouter API.` + * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.` +* **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`. +* **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`.
+* **Notes:** Configuration is stored in `.taskmasterconfig` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live. +* **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them. +* **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80. +* **Warning:** DO NOT MANUALLY EDIT THE .taskmasterconfig FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback. + +--- + +## Task Listing & Viewing + +### 3. Get Tasks (`get_tasks`) + +* **MCP Tool:** `get_tasks` +* **CLI Command:** `task-master list [options]` +* **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.` +* **Key Parameters/Options:** + * `status`: `Show only Taskmaster tasks matching this status, e.g., 'pending' or 'done'.` (CLI: `-s, --status <status>`) + * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Get an overview of the project status, often used at the start of a work session. + +### 4. Get Next Task (`next_task`) + +* **MCP Tool:** `next_task` +* **CLI Command:** `task-master next [options]` +* **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file.
Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Identify what to work on next according to the plan. + +### 5. Get Task Details (`get_task`) + +* **MCP Tool:** `get_task` +* **CLI Command:** `task-master show [id] [options]` +* **Description:** `Display detailed information for a specific Taskmaster task or subtask by its ID.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to view.` (CLI: `[id]` positional or `-i, --id <id>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Understand the full details, implementation notes, and test strategy for a specific task before starting work. + +--- + +## Task Creation & Modification + +### 6. Add Task (`add_task`) + +* **MCP Tool:** `add_task` +* **CLI Command:** `task-master add-task [options]` +* **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.` +* **Key Parameters/Options:** + * `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`) + * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`) + * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`) + * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Quickly add newly identified tasks during development. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 7.
Add Subtask (`add_subtask`) + +* **MCP Tool:** `add_subtask` +* **CLI Command:** `task-master add-subtask [options]` +* **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.` +* **Key Parameters/Options:** + * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`) + * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`) + * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`) + * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`) + * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`) + * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`) + * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after adding the subtask.` (CLI: `--skip-generate`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Break down tasks manually or reorganize existing tasks. + +### 8. Update Tasks (`update`) + +* **MCP Tool:** `update` +* **CLI Command:** `task-master update [options]` +* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.` +* **Key Parameters/Options:** + * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`) + * `prompt`: `Required.
Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 9. Update Task (`update_task`) + +* **MCP Tool:** `update_task` +* **CLI Command:** `task-master update-task [options]` +* **Description:** `Modify a specific Taskmaster task or subtask by its ID, incorporating new information or changes.` +* **Key Parameters/Options:** + * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', or subtask, e.g., '15.2', you want to update.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Refine a specific task based on new understanding or feedback. Example CLI: `task-master update-task --id='15' --prompt='Clarification: Use PostgreSQL instead of MySQL.\nUpdate schema details...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. 
Please inform users to hang tight while the operation is in progress. + +### 10. Update Subtask (`update_subtask`) + +* **MCP Tool:** `update_subtask` +* **CLI Command:** `task-master update-subtask [options]` +* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.` +* **Key Parameters/Options:** + * `id`: `Required. The specific ID of the Taskmaster subtask, e.g., '15.2', you want to add information to.` (CLI: `-i, --id <id>`) + * `prompt`: `Required. Provide the information or notes Taskmaster should append to the subtask's details. Ensure this adds *new* information not already present.` (CLI: `-p, --prompt <text>`) + * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Add implementation notes, code snippets, or clarifications to a subtask during development. Before calling, review the subtask's current details to append only fresh insights, helping to build a detailed log of the implementation journey and avoid redundancy. Example CLI: `task-master update-subtask --id='15.2' --prompt='Discovered that the API requires header X.\nImplementation needs adjustment...'` +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 11. Set Task Status (`set_task_status`) + +* **MCP Tool:** `set_task_status` +* **CLI Command:** `task-master set-status [options]` +* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.` +* **Key Parameters/Options:** + * `id`: `Required. 
The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`) + * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Mark progress as tasks move through the development cycle. + +### 12. Remove Task (`remove_task`) + +* **MCP Tool:** `remove_task` +* **CLI Command:** `task-master remove-task [options]` +* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`) + * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project. +* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks. + +--- + +## Task Structure & Breakdown + +### 13. Expand Task (`expand_task`) + +* **MCP Tool:** `expand_task` +* **CLI Command:** `task-master expand [options]` +* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.` +* **Key Parameters/Options:** + * `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`) + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. 
Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`) + * `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`) + * `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`) + * `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 14. Expand All Tasks (`expand_all`) + +* **MCP Tool:** `expand_all` +* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag) +* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.` +* **Key Parameters/Options:** + * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`) + * `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`) + * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`) + * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`) + * `file`: `Path to your Taskmaster 'tasks.json' file. 
Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 15. Clear Subtasks (`clear_subtasks`) + +* **MCP Tool:** `clear_subtasks` +* **CLI Command:** `task-master clear-subtasks [options]` +* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.` +* **Key Parameters/Options:** + * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`) + * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement. + +### 16. Remove Subtask (`remove_subtask`) + +* **MCP Tool:** `remove_subtask` +* **CLI Command:** `task-master remove-subtask [options]` +* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.` +* **Key Parameters/Options:** + * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`) + * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`) + * `skipGenerate`: `Prevent Taskmaster from automatically regenerating markdown task files after removing the subtask.` (CLI: `--skip-generate`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.
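The breakdown tools in this section combine into an iterate-and-refine loop. A minimal CLI sketch, assuming the `task-master` CLI is installed and that tasks `15` and `15.2` exist (both IDs are illustrative, as is the prompt text):

```shell
# Break task 15 into roughly 5 subtasks, using the research role
task-master expand --id=15 --num=5 --research

# If the breakdown needs replacing, clear it and expand again with extra context
task-master clear-subtasks --id=15
task-master expand --id=15 --prompt="Split along API vs. persistence boundaries"

# Promote subtask 15.2 to a standalone top-level task instead of deleting it
task-master remove-subtask --id=15.2 --convert
```

Using `expand --force` collapses the clear-then-expand pair into a single step when you want a clean regeneration.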
+ +--- + +## Dependency Management + +### 17. Add Dependency (`add_dependency`) + +* **MCP Tool:** `add_dependency` +* **CLI Command:** `task-master add-dependency [options]` +* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`) +* **Usage:** Establish the correct order of execution between tasks. + +### 18. Remove Dependency (`remove_dependency`) + +* **MCP Tool:** `remove_dependency` +* **CLI Command:** `task-master remove-dependency [options]` +* **Description:** `Remove a dependency relationship between two Taskmaster tasks.` +* **Key Parameters/Options:** + * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`) + * `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Update task relationships when the order of execution changes. + +### 19. Validate Dependencies (`validate_dependencies`) + +* **MCP Tool:** `validate_dependencies` +* **CLI Command:** `task-master validate-dependencies [options]` +* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Audit the integrity of your task dependencies. + +### 20. 
Fix Dependencies (`fix_dependencies`) + +* **MCP Tool:** `fix_dependencies` +* **CLI Command:** `task-master fix-dependencies [options]` +* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.` +* **Key Parameters/Options:** + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Clean up dependency errors automatically. + +--- + +## Analysis & Reporting + +### 21. Analyze Project Complexity (`analyze_project_complexity`) + +* **MCP Tool:** `analyze_project_complexity` +* **CLI Command:** `task-master analyze-complexity [options]` +* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.` +* **Key Parameters/Options:** + * `output`: `Where to save the complexity analysis report (default: 'scripts/task-complexity-report.json').` (CLI: `-o, --output <file>`) + * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`) + * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Used before breaking down tasks to identify which ones need the most attention. +* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. + +### 22. 
View Complexity Report (`complexity_report`) + +* **MCP Tool:** `complexity_report` +* **CLI Command:** `task-master complexity-report [options]` +* **Description:** `Display the task complexity analysis report in a readable format.` +* **Key Parameters/Options:** + * `file`: `Path to the complexity report (default: 'scripts/task-complexity-report.json').` (CLI: `-f, --file <file>`) +* **Usage:** Review and understand the complexity analysis results after running analyze-complexity. + +--- + +## File Management + +### 23. Generate Task Files (`generate`) + +* **MCP Tool:** `generate` +* **CLI Command:** `task-master generate [options]` +* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.` +* **Key Parameters/Options:** + * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`) + * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) +* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. + +--- + +## Environment Variables Configuration (Updated) + +Taskmaster primarily uses the **`.taskmasterconfig`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`. 
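Putting the configuration workflow above into practice, a hedged CLI sketch (the model IDs shown are placeholders, not recommendations; run `task-master models` to see the real list):

```shell
# Inspect current role assignments and internally supported models
task-master models

# Guided interactive setup; writes .taskmasterconfig for you
task-master models --setup

# Or assign roles directly via flags (IDs here are illustrative)
task-master models --set-main=example-main-model
task-master models --set-research=example-research-model
task-master models --set-fallback=example-fallback-model

# Custom Ollama model (not validated against a live API)
task-master models --set-main=my-local-model --ollama
```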
+ +Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL: + +* **API Keys (Required for corresponding provider):** + * `ANTHROPIC_API_KEY` + * `PERPLEXITY_API_KEY` + * `OPENAI_API_KEY` + * `GOOGLE_API_KEY` + * `MISTRAL_API_KEY` + * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too) + * `OPENROUTER_API_KEY` + * `XAI_API_KEY` + * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too) +* **Endpoints (Optional/Provider Specific inside .taskmasterconfig):** + * `AZURE_OPENAI_ENDPOINT` + * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`) + +**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.cursor/mcp.json`** file (for MCP/Cursor integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmasterconfig` via the `task-master models` command or the `models` MCP tool. + +--- + +For details on how these commands fit into the development process, see the [Development Workflow Guide](mdc:.cursor/rules/dev_workflow.mdc).
diff --git a/src/Linq2GraphQL.Generator/Templates/Interface/InterfaceTemplate.cs b/src/Linq2GraphQL.Generator/Templates/Interface/InterfaceTemplate.cs index 7ab9a1d2..cf630aba 100644 --- a/src/Linq2GraphQL.Generator/Templates/Interface/InterfaceTemplate.cs +++ b/src/Linq2GraphQL.Generator/Templates/Interface/InterfaceTemplate.cs @@ -307,7 +307,7 @@ public virtual string TransformText() #line default #line hidden - this.Write("\")]\r\n "); + this.Write("\")]\r\n public "); #line 89 "C:\Data\Linq2GraphQL.Client-1\src\Linq2GraphQL.Generator\Templates\Interface\InterfaceTemplate.tt" this.Write(this.ToStringHelper.ToStringWithCulture(coreType.CSharpTypeDefinition)); @@ -329,8 +329,8 @@ public virtual string TransformText() #line default #line hidden this.Write(" /// <summary>\r\n /// GraphQL __typename field for runtime type resolution\r\n" + - " /// </summary>\r\n [GraphQLMember(\"__typename\")]\r\n string __TypeName { g" + - "et; set; }\r\n}"); + " /// </summary>\r\n [GraphQLMember(\"__typename\")]\r\n public string __TypeN" + + "ame { get; set; }\r\n}"); return this.GenerationEnvironment.ToString(); } }