Add BenchmarkDotNet benchmarks and manual workflow trigger #1
Conversation
Pull request overview
Adds a BenchmarkDotNet-based benchmark suite to the repo and wires it into GitHub Actions so performance regressions can be tracked over time, with an additional manual trigger for ad-hoc runs.
Changes:
- Introduce a new `ExpressiveSharp.Benchmarks` project with benchmarks covering the generator, resolver/replacer, transformers, and EF Core query translation overhead.
- Add a dedicated `Benchmarks` GitHub Actions workflow (including `workflow_dispatch`) to run and publish benchmark results to `gh-pages` via `github-action-benchmark`.
- Document benchmark purpose and local execution commands in the testing docs and contributor guide.
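The trigger section of such a workflow might look roughly like this (an illustrative sketch only; the actual contents of `.github/workflows/benchmarks.yml` are not shown in this review view, and the branch name is an assumption):

```yaml
# Sketch of the trigger block; names and branches are assumptions.
name: Benchmarks
on:
  push:
    branches: [main]    # track performance regressions over time on main
  workflow_dispatch:    # allow ad-hoc manual runs from the Actions tab
```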
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| docs/testing-strategy.md | Documents the benchmark suite, CI regression tracking, and local run commands. |
| benchmarks/ExpressiveSharp.Benchmarks/TransformerBenchmarks.cs | Adds transformer + full pipeline benchmarks. |
| benchmarks/ExpressiveSharp.Benchmarks/Program.cs | BenchmarkDotNet entrypoint via BenchmarkSwitcher. |
| benchmarks/ExpressiveSharp.Benchmarks/PolyfillGeneratorBenchmarks.cs | Benchmarks polyfill interceptor generator cold vs incremental runs. |
| benchmarks/ExpressiveSharp.Benchmarks/Helpers/TestEntity.cs | Provides [Expressive] test model used by multiple benchmarks. |
| benchmarks/ExpressiveSharp.Benchmarks/Helpers/TestDbContext.cs | EF Core SQLite context used for query overhead benchmarks. |
| benchmarks/ExpressiveSharp.Benchmarks/Helpers/BenchmarkCompilationHelper.cs | Utilities to generate synthetic Roslyn compilations for generator benchmarks. |
| benchmarks/ExpressiveSharp.Benchmarks/GeneratorBenchmarks.cs | Benchmarks generator cold and incremental scenarios across sizes. |
| benchmarks/ExpressiveSharp.Benchmarks/ExpressiveSharp.Benchmarks.csproj | Defines the new benchmarks project and its dependencies. |
| benchmarks/ExpressiveSharp.Benchmarks/ExpressionResolverBenchmarks.cs | Benchmarks registry vs reflection lookup paths. |
| benchmarks/ExpressiveSharp.Benchmarks/ExpressionReplacerBenchmarks.cs | Benchmarks replacer behavior on representative expression shapes. |
| benchmarks/ExpressiveSharp.Benchmarks/EFCoreQueryOverheadBenchmarks.cs | Benchmarks EF Core ToQueryString() overhead with/without Expressives. |
| benchmarks/Directory.Build.props | Restricts benchmarks folder to net10.0 TFM. |
| ExpressiveSharp.slnx | Adds the benchmarks project to the solution structure. |
| Directory.Packages.props | Pins BenchmarkDotNet via central package management. |
| CLAUDE.md | Adds benchmark run instructions + lists benchmarks project. |
| .github/workflows/benchmarks.yml | Adds CI workflow to run and publish benchmark results, including manual trigger. |
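For context, a BenchmarkDotNet class in a suite like this typically pairs `[GlobalSetup]` state with `[Benchmark]` methods, and the entrypoint dispatches via `BenchmarkSwitcher` as described for `Program.cs`. The sketch below uses hypothetical names; the PR's actual benchmark bodies are not shown in this review:

```csharp
using System;
using System.Linq.Expressions;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical example class; the real ExpressionReplacerBenchmarks etc. differ.
[MemoryDiagnoser]
public class ExpressionShapeBenchmarks
{
    private Expression<Func<int, int>> _expr = null!;

    [GlobalSetup]
    public void Setup() => _expr = x => (x + 1) * 2;

    [Benchmark]
    public Func<int, int> CompileSimpleLambda() => _expr.Compile();
}

public static class Program
{
    // Runs whichever benchmark classes the --filter argument selects.
    public static void Main(string[] args) =>
        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```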
benchmarks/ExpressiveSharp.Benchmarks/ExpressiveSharp.Benchmarks.csproj (outdated, resolved)
.github/workflows/benchmarks.yml (outdated)
```yaml
- name: Find benchmark result
  id: find-result
  run: echo "file=$(find BenchmarkDotNet.Artifacts -name '*-report-full.json' | head -1)" >> $GITHUB_OUTPUT
```
The workflow runs all benchmark classes (--filter "*"), which will produce multiple *-report-full.json files (one per benchmark type). The find ... | head -1 step selects a single file arbitrarily, so github-action-benchmark will record/compare only a subset of benchmarks and the selected subset may change between runs. Consider running benchmarks in a matrix per class (one JSON per job) or explicitly aggregating/choosing a deterministic combined output before passing it to output-file-path.
Suggested change:

```yaml
run: |
  files=$(find BenchmarkDotNet.Artifacts -name '*-report-full.json' | sort)
  if [ -z "$files" ]; then
    echo "No benchmark result files found" >&2
    exit 1
  fi
  combined="BenchmarkDotNet.Artifacts/combined-report-full.json"
  jq -s 'add' $files > "$combined"
  echo "file=$combined" >> "$GITHUB_OUTPUT"
```
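For reference on what the suggested merge step does: `jq -s` (slurp) reads every input file into a single array, and `add` then concatenates array inputs. A minimal demonstration with made-up stand-in files (real `*-report-full.json` reports have a richer schema, so the file names and shapes here are illustrative only):

```shell
# Create two stand-in report files, each containing a JSON array.
printf '[{"name":"BenchA","mean":1.0}]' > reportA.json
printf '[{"name":"BenchB","mean":2.0}]' > reportB.json

# -s slurps both files into one outer array; 'add' concatenates the inner arrays.
jq -s 'add' reportA.json reportB.json > combined.json

jq 'length' combined.json   # prints 2: both entries survive the merge
```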
```yaml
permissions:
  contents: write
  pull-requests: write
```
This workflow grants contents: write and pull-requests: write for all events, including pull_request. For least privilege (and to reduce the blast radius if an action dependency is compromised), consider scoping write permissions to only the push/main case that actually needs to push to gh-pages, and keep PR runs read-only (or split into separate jobs with different permissions).
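One way to apply that advice is a read-only default with a job-level widening, sketched below with assumed job names and steps (the real workflow's job layout is not shown in this review):

```yaml
# Default every job to read-only; widen only where gh-pages publishing needs it.
permissions:
  contents: read

jobs:
  benchmark:
    runs-on: ubuntu-latest
    # Only runs on push to main, so PR runs never get write access.
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    permissions:
      contents: write   # needed to push benchmark results to gh-pages
    steps:
      - run: echo "run benchmarks and publish here"  # placeholder step
```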
- …g result file search
- …ks.csproj (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
Introduce BenchmarkDotNet benchmarks for performance evaluation and integrate them into the CI workflow. Enable manual triggering of the benchmark workflow using `workflow_dispatch`.