This repository contains the source code for the CSLA .NET MCP (Model Context Protocol) Server. This server is designed to support generative AI (LLM) models as they are used to create .NET C# apps with the CSLA .NET framework.
The CSLA MCP Server provides AI coding assistants with access to official CSLA .NET code examples, patterns, and best practices. It implements the Model Context Protocol (MCP) to serve as a knowledge base for CSLA development.
- Code Examples: Comprehensive collection of CSLA .NET code examples organized by concept and complexity
- Semantic Search: Find relevant examples using natural language queries powered by Azure OpenAI embeddings
- Concept Browsing: Browse available CSLA concepts and categories
- Aspire Integration: Built with .NET Aspire for modern cloud-native development
- HTTP API: RESTful API endpoints for easy integration
This project is published as a container image that you can host yourself.
Docker Hub is used to host the CSLA .NET MCP server container image.
You can run this container image in any container host that supports x64 Linux containers, including Docker Desktop, Azure App Service, Azure Container Apps (ACA), Kubernetes, and others.
To run it on your local Docker instance with keyword-only search:
docker run --rm -p 8080:8080 `
--name csla-mcp-server rockylhotka/csla-mcp-server:latest
ℹ️ The container image includes pre-generated embeddings and CSLA code examples.
ℹ️ The container exposes port 8080 by default. You can map it to a different port on your host (e.g., `-p 9000:8080`).
To enable semantic search with vector embeddings, provide Azure OpenAI credentials:
docker run --rm -p 8080:8080 `
-e AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/" `
-e AZURE_OPENAI_API_KEY="your-api-key-here" `
--name csla-mcp-server rockylhotka/csla-mcp-server:latest
⚠️ Azure OpenAI credentials are required to generate embeddings for user search queries at runtime.
If you want to use your own code examples or updated embeddings, you can override the embedded data with volume mounts:
docker run --rm -p 8080:8080 `
-e AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/" `
-e AZURE_OPENAI_API_KEY="your-api-key-here" `
-v ${PWD}/embeddings.json:/app/embeddings.json:ro `
-v ${PWD}/csla-examples:/app/csla-examples:ro `
--name csla-mcp-server rockylhotka/csla-mcp-server:latest
Once the server is running, you can connect to it from MCP-compatible tools like VS Code with GitHub Copilot.
- Install the GitHub Copilot Chat extension in VS Code (if not already installed)
- Open VS Code Settings (File > Preferences > Settings or `Ctrl+,`)
- Search for "MCP" in the settings search bar
- Find "Chat > MCP: Servers" and click "Edit in settings.json"
- Add your MCP server configuration to the `github.copilot.chat.mcp.servers` object:

  {
    "github.copilot.chat.mcp.servers": {
      "csla-mcp": {
        "type": "http",
        "url": "http://localhost:8080/mcp"
      }
    }
  }

  Note: If you mapped the Docker container to a different port (e.g., `-p 9000:8080`), use that port in the URL: `http://localhost:9000/mcp`
- Restart VS Code to apply the changes
- Verify the connection: Open GitHub Copilot Chat and you should now be able to use the CSLA MCP tools in your conversations. The server provides two tools:
  - `Search` - Search CSLA code examples and documentation
  - `Fetch` - Retrieve specific code examples by filename
You can test the MCP server is working by asking GitHub Copilot questions about CSLA, such as:
- "Show me how to create an editable root business object in CSLA"
- "How do I implement a read-only property in CSLA?"
- "What's an example of using the data portal in CSLA?"
Copilot will use the MCP server tools to search for and retrieve relevant CSLA code examples.
- Connection failed: Ensure the Docker container is running (`docker ps`) and accessible at the configured URL
- No results: Check the server logs (`docker logs csla-mcp-server`) for errors
- Semantic search not working: Verify Azure OpenAI environment variables are set correctly
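You can also exercise the MCP endpoint directly to confirm the server responds. This is a minimal sketch, assuming the server accepts standard MCP JSON-RPC 2.0 requests over plain HTTP at the /mcp path shown above (some MCP HTTP transports additionally require an initialize handshake before tools/list):

# Ask the server to list its tools; expect a JSON response naming Search and Fetch
curl -s -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

A JSON response listing the Search and Fetch tools confirms the container is reachable.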
The server uses Azure OpenAI for vector embeddings to provide semantic search capabilities. You must configure the following environment variables:
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI service endpoint (e.g., `https://your-resource.openai.azure.com/`)
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key
- `AZURE_OPENAI_EMBEDDING_MODEL`: The embedding model deployment name to use (default: `text-embedding-3-large`)
- `AZURE_OPENAI_API_VERSION`: The API version to use (default: `2024-02-01`)
Before running the server, you must deploy an embedding model in your Azure OpenAI resource. The deployment name must exactly match the AZURE_OPENAI_EMBEDDING_MODEL environment variable.
Quick Setup: See azure-openai-setup-guide.md for step-by-step instructions.
To deploy a model:
- Go to Azure OpenAI Studio
- Navigate to "Deployments"
- Create a new deployment with the model `text-embedding-3-large`
- Ensure the deployment name matches your environment variable
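If you prefer scripting the deployment instead of using the portal, the Azure CLI can create it. A sketch, assuming placeholder names (my-openai-resource, my-resource-group) that you would replace with your own; verify the exact options with az cognitiveservices account deployment create --help:

# Create an embedding deployment whose name matches AZURE_OPENAI_EMBEDDING_MODEL
az cognitiveservices account deployment create \
  --name my-openai-resource \
  --resource-group my-resource-group \
  --deployment-name text-embedding-3-large \
  --model-name text-embedding-3-large \
  --model-format OpenAI \
  --model-version "1" \
  --sku-name Standard \
  --sku-capacity 1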
Fallback Mode: If Azure OpenAI isn't configured, the server will run in keyword-only search mode.
PowerShell (Windows):
$env:AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com/"
$env:AZURE_OPENAI_API_KEY = "your-api-key-here"
$env:AZURE_OPENAI_EMBEDDING_MODEL = "text-embedding-3-large" # Must match deployment name
$env:AZURE_OPENAI_API_VERSION = "2024-02-01" # Optional, API version
Bash (Linux/macOS):
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-api-key-here"
export AZURE_OPENAI_EMBEDDING_MODEL="text-embedding-3-large" # Must match deployment name
export AZURE_OPENAI_API_VERSION="2024-02-01" # Optional, API version
For more detailed configuration information, see azure-openai-config.md.
The server uses pre-generated vector embeddings for semantic search functionality. This significantly reduces startup time and eliminates Azure OpenAI API costs for embedding generation.
- Embedding Generation (before running the server):
  - Run the `csla-embeddings-generator` CLI tool to generate embeddings for all code samples
  - This creates an `embeddings.json` file containing pre-computed vector embeddings
- Server Startup:
  - The server loads the pre-generated embeddings from `embeddings.json` at startup
  - No embedding generation occurs during server initialization
- Runtime (user queries):
  - Azure OpenAI credentials are still required to generate embeddings for user search queries
  - The server compares user query embeddings against the pre-loaded code sample embeddings
Before running the server, you must generate embeddings for your code samples:
# Generate embeddings for the default csla-examples directory
dotnet run --project csla-embeddings-generator
# Or specify custom paths
dotnet run --project csla-embeddings-generator -- --examples-path ./csla-examples --output ./embeddings.json
This will create an embeddings.json file in the current directory (or the specified output path).
See csla-embeddings-generator/README.md for more details.
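Once generated, a quick sanity check confirms the output parses as JSON and is non-empty. This assumes jq is installed and makes no assumption about the file's internal schema:

# Confirm embeddings.json is valid JSON and count its top-level entries
jq 'length' ./embeddings.json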
When running the server locally for development, you must use the run command:
# From the repository root
dotnet run --project csla-mcp-server -- run
# With custom code samples path
dotnet run --project csla-mcp-server -- run --folder ./my-custom-examples
The server needs to know where to find the CSLA code samples. There are three ways to configure this (priority from highest to lowest):
- Command-line flag `--folder` or `-f`
- Environment variable `CSLA_CODE_SAMPLES_PATH`
- Default path: `../csla-examples` (relative to the executable)
Using command-line flag:
dotnet run --project csla-mcp-server -- run --folder ./csla-examples
Using environment variable (PowerShell):
$env:CSLA_CODE_SAMPLES_PATH = "S:\src\rdl\csla-mcp\csla-examples"
dotnet run --project csla-mcp-server -- run
Using environment variable (Bash):
export CSLA_CODE_SAMPLES_PATH="/path/to/csla-examples"
dotnet run --project csla-mcp-server -- run
Using default path:
# When running from the repository root, the default ../csla-examples works automatically
dotnet run --project csla-mcp-server -- run
The server loads pre-generated embeddings from embeddings.json in the application's base directory. The container image already includes this file; in a containerized deployment you can override it with a volume mount (see the Docker examples above). When running locally with dotnet run, place the file in the same directory as the server executable or in the project directory.
- Faster Startup: Server starts immediately without waiting for embedding generation
- Reduced Costs: Code sample embeddings are only generated once, not on every server restart
- Offline Development: Server can start without Azure OpenAI (though semantic search requires it for user queries)
- Consistent Results: Same embeddings used across all server instances
The server currently exposes two MCP tools implemented in the `CslaCodeTool` class:
- `Search` — search code samples and markdown snippets for keyword matches and return scored results.
- `Fetch` — return the raw content of a named code sample or markdown file.
Both tools operate over the repository folder that contains the example files. By default, this is ../csla-examples relative to the server executable, but this can be configured using:
- The `--folder` or `-f` command-line option
- The `CSLA_CODE_SAMPLES_PATH` environment variable
- When running from the repository root, the default resolves to `csla-examples/`
Description: Extracts significant words from the provided input text and searches .cs and .md files under the examples folder for occurrences of those words. Returns a JSON array of consolidated search results that merge semantic (vector-based) and word-based (keyword) search scores.
Parameters:
- `message` (string, required): Natural language text or keywords to search for. Short words (3 characters or fewer) are ignored by the tool. The tool also searches for 2-word combinations from adjacent words to find phrase matches (e.g., "create operation" and "operation method" from "create operation method").
- `version` (integer, optional): CSLA version number to filter results (e.g., `9` or `10`). If not provided, defaults to the highest version available by scanning version subdirectories in the examples folder (e.g., `v9/`, `v10/`). Files in the root directory (common to all versions) are included regardless of the specified version.
Output: JSON array of objects with the shape:
- `FileName` (string): relative file path from the examples folder (e.g., `v10/ReadOnlyProperty.md` or `CommonFile.cs`)
- `Score` (double): normalized combined score (0.0 to 1.0) from semantic and word searches
- `VectorScore` (double, nullable): semantic similarity score from Azure OpenAI embeddings (null if semantic search unavailable)
- `WordScore` (double, nullable): normalized keyword match score (null if no keyword matches found)
Example call (MCP tools/call):
{
"method": "tools/call",
"params": {
"name": "Search",
"arguments": {
"message": "data portal authorization business object",
"version": 10
}
}
}
Example call without version (uses highest available):
{
"method": "tools/call",
"params": {
"name": "Search",
"arguments": {
"message": "read-write property editable root"
}
}
}
Notes and behavior:
- The tool ignores short words (<= 3 characters) when building the search terms.
- The tool creates 2-word combinations from adjacent words in the search message to find phrase matches. Multi-word phrase matches receive higher scores (weight of 2) compared to single word matches (weight of 1).
- Word matching uses word boundaries to ensure exact matches. For example, searching for "property" will not match "ReadProperty" or "GetProperty".
- Matching is case-insensitive and counts multiple occurrences in a file.
- Results combine both semantic search (when Azure OpenAI is configured) and keyword search for more accurate results.
- Results are ordered by `Score` descending, then by filename.
- Version filtering: Files in version subdirectories (e.g., `v9/`, `v10/`) are filtered by the specified version. Files in the root directory are considered common to all versions and are always included.
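For testing outside an MCP client, the Search tool can also be invoked with a raw HTTP call. This is a sketch, assuming the server accepts a standard MCP JSON-RPC 2.0 envelope at the /mcp endpoint (some MCP HTTP transports also require an initialize handshake first):

# Call the Search tool directly over HTTP
curl -s -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "Search",
      "arguments": { "message": "data portal authorization business object", "version": 10 }
    }
  }'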
Description: Returns the text contents of a specific file from the configured code samples folder by file name.
Parameters:
- `fileName` (string, required): The name or relative path of the file to fetch (for example, `ReadOnlyProperty.md`, `v10/EditableRoot.md`, or `MyBusinessClass.cs`). The tool resolves the file by combining the configured code samples path with the given file name. Path traversal attempts (e.g., `../`) are blocked for security.
Output: Raw file contents as a string. If the file is not found or the path is invalid, the tool returns a JSON error object with Error and Message fields.
Example call (MCP tools/call):
{
"method": "tools/call",
"params": {
"name": "Fetch",
"arguments": { "fileName": "v10/ReadOnlyProperty.md" }
}
}
Security note:
- The implementation validates file paths to prevent path traversal attacks. Only files within the configured code samples directory can be accessed. Relative paths like `../` or absolute paths are rejected.
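As with Search, Fetch can be exercised with a raw HTTP call; the same caveats about the MCP transport and initialize handshake apply. A sketch:

# Fetch a specific example file by relative path
curl -s -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": { "name": "Fetch", "arguments": { "fileName": "v10/ReadOnlyProperty.md" } }
  }'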
This MCP server is designed to be used by AI coding assistants to provide accurate, up-to-date CSLA .NET examples and guidance. When integrated:
- AI assistants can query for specific CSLA patterns
- The server returns official, tested code examples
- AI assistants can provide more accurate CSLA guidance to developers
- Fork the repository
- Create a feature branch
- Add your code examples following the established patterns
- Test your changes
- Submit a pull request
- Use clear, descriptive file names
- Include comprehensive examples that demonstrate the concept
- Add explanatory comments in code examples
- Create accompanying markdown documentation for complex patterns
- Follow CSLA best practices and conventions
This project is licensed under the MIT License - see the LICENSE file for details.
For questions about CSLA .NET, visit: