Add BenchmarkDotNet benchmarks and manual workflow trigger#1

Merged
koenbeuk merged 5 commits into main from feat/benchmarks
Mar 26, 2026

Conversation

@koenbeuk
Collaborator

Introduce BenchmarkDotNet benchmarks for performance evaluation and integrate them into the CI workflow. Enable manual triggering of the benchmark workflow using workflow_dispatch.
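For context, workflow_dispatch is GitHub Actions' standard event for manual runs; a minimal sketch of the trigger block such a workflow might use (the push branch filter is an assumption, not taken from this PR):

```yaml
# Hypothetical excerpt of .github/workflows/benchmarks.yml
on:
  push:
    branches: [main]
  workflow_dispatch:   # enables manual runs from the Actions tab or `gh workflow run`
```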

Copilot AI review requested due to automatic review settings March 26, 2026 00:22
Contributor

Copilot AI left a comment


Pull request overview

Adds a BenchmarkDotNet-based benchmark suite to the repo and wires it into GitHub Actions so performance regressions can be tracked over time, with an additional manual trigger for ad-hoc runs.

Changes:

  • Introduce a new ExpressiveSharp.Benchmarks project with benchmarks covering generator, resolver/replacer, transformers, and EF Core query translation overhead.
  • Add a dedicated Benchmarks GitHub Actions workflow (including workflow_dispatch) to run and publish benchmark results to gh-pages via github-action-benchmark.
  • Document benchmark purpose and local execution commands in the testing docs and contributor guide.
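Since the PR documents local execution commands, a local run under BenchmarkDotNet's standard BenchmarkSwitcher CLI would look roughly like this sketch (the project path is taken from this PR's file list; the filter value is illustrative):

```shell
# Release configuration is required: BenchmarkDotNet refuses to measure debug builds.
dotnet run -c Release --project benchmarks/ExpressiveSharp.Benchmarks -- --filter '*Resolver*'

# Or list the available benchmarks first:
dotnet run -c Release --project benchmarks/ExpressiveSharp.Benchmarks -- --list flat
```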

Reviewed changes

Copilot reviewed 17 out of 17 changed files in this pull request and generated 3 comments.

File summary:

  • docs/testing-strategy.md: Documents the benchmark suite, CI regression tracking, and local run commands.
  • benchmarks/ExpressiveSharp.Benchmarks/TransformerBenchmarks.cs: Adds transformer and full-pipeline benchmarks.
  • benchmarks/ExpressiveSharp.Benchmarks/Program.cs: BenchmarkDotNet entrypoint via BenchmarkSwitcher.
  • benchmarks/ExpressiveSharp.Benchmarks/PolyfillGeneratorBenchmarks.cs: Benchmarks the polyfill interceptor generator's cold vs. incremental runs.
  • benchmarks/ExpressiveSharp.Benchmarks/Helpers/TestEntity.cs: Provides the [Expressive] test model used by multiple benchmarks.
  • benchmarks/ExpressiveSharp.Benchmarks/Helpers/TestDbContext.cs: EF Core SQLite context used for the query-overhead benchmarks.
  • benchmarks/ExpressiveSharp.Benchmarks/Helpers/BenchmarkCompilationHelper.cs: Utilities that generate synthetic Roslyn compilations for generator benchmarks.
  • benchmarks/ExpressiveSharp.Benchmarks/GeneratorBenchmarks.cs: Benchmarks generator cold and incremental scenarios across compilation sizes.
  • benchmarks/ExpressiveSharp.Benchmarks/ExpressiveSharp.Benchmarks.csproj: Defines the new benchmarks project and its dependencies.
  • benchmarks/ExpressiveSharp.Benchmarks/ExpressionResolverBenchmarks.cs: Benchmarks registry vs. reflection lookup paths.
  • benchmarks/ExpressiveSharp.Benchmarks/ExpressionReplacerBenchmarks.cs: Benchmarks replacer behavior on representative expression shapes.
  • benchmarks/ExpressiveSharp.Benchmarks/EFCoreQueryOverheadBenchmarks.cs: Benchmarks EF Core ToQueryString() overhead with and without Expressives.
  • benchmarks/Directory.Build.props: Restricts the benchmarks folder to the net10.0 TFM.
  • ExpressiveSharp.slnx: Adds the benchmarks project to the solution structure.
  • Directory.Packages.props: Pins BenchmarkDotNet via central package management.
  • CLAUDE.md: Adds benchmark run instructions and lists the benchmarks project.
  • .github/workflows/benchmarks.yml: Adds the CI workflow that runs and publishes benchmark results, including the manual trigger.



- name: Find benchmark result
  id: find-result
  run: echo "file=$(find BenchmarkDotNet.Artifacts -name '*-report-full.json' | head -1)" >> $GITHUB_OUTPUT

Copilot AI Mar 26, 2026


The workflow runs all benchmark classes (--filter "*"), which will produce multiple *-report-full.json files (one per benchmark type). The find ... | head -1 step selects a single file arbitrarily, so github-action-benchmark will record/compare only a subset of benchmarks and the selected subset may change between runs. Consider running benchmarks in a matrix per class (one JSON per job) or explicitly aggregating/choosing a deterministic combined output before passing it to output-file-path.

Suggested change

    run: echo "file=$(find BenchmarkDotNet.Artifacts -name '*-report-full.json' | head -1)" >> $GITHUB_OUTPUT

    run: |
      files=$(find BenchmarkDotNet.Artifacts -name '*-report-full.json' | sort)
      if [ -z "$files" ]; then
        echo "No benchmark result files found" >&2
        exit 1
      fi
      combined="BenchmarkDotNet.Artifacts/combined-report-full.json"
      # Concatenate the Benchmarks arrays from every report; a plain `jq -s 'add'`
      # would let the last file's Benchmarks key overwrite all the others.
      jq -s '.[0] + {Benchmarks: (map(.Benchmarks) | add)}' $files > "$combined"
      echo "file=$combined" >> "$GITHUB_OUTPUT"

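The ordering hazard the comment describes is easy to reproduce; a self-contained shell sketch (the report names are made up):

```shell
# find's traversal order is filesystem-dependent, so piping straight into
# `head -1` selects an arbitrary report; `sort` makes the choice deterministic.
dir=$(mktemp -d)
touch "$dir/GeneratorBenchmarks-report-full.json" "$dir/TransformerBenchmarks-report-full.json"
first=$(find "$dir" -name '*-report-full.json' | sort | head -1)
basename "$first"   # prints GeneratorBenchmarks-report-full.json
```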
Comment on lines +10 to +12
permissions:
  contents: write
  pull-requests: write

Copilot AI Mar 26, 2026


This workflow grants contents: write and pull-requests: write for all events, including pull_request. For least privilege (and to reduce the blast radius if an action dependency is compromised), consider scoping write permissions to only the push/main case that actually needs to push to gh-pages, and keep PR runs read-only (or split into separate jobs with different permissions).
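A sketch of the least-privilege split the comment proposes (the job names and event condition are illustrative, not taken from this PR):

```yaml
# Hypothetical restructuring: default read-only, write only where gh-pages is pushed
permissions:
  contents: read

jobs:
  benchmark:
    runs-on: ubuntu-latest
    # runs on pull_request with the read-only default

  publish:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    permissions:
      contents: write   # needed by github-action-benchmark to push results to gh-pages
```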

koenbeuk merged commit a944078 into main on Mar 26, 2026
3 of 4 checks passed
koenbeuk deleted the feat/benchmarks branch on March 26, 2026 at 02:41