
Eliminate unnecessary vtable slot backpatching during non-final tier transitions#125359

Draft
davidwrighton wants to merge 9 commits into dotnet:main from davidwrighton:investigate_tiering_issue

Conversation

@davidwrighton
Member

Summary

This PR eliminates unnecessary vtable slot backpatching during non-final compilation tiers (Tier0, Tier0Instrumented), significantly reducing lock contention and entry point oscillations in the tiered compilation pipeline.

Problem

PR #124664 ("Eliminate forwarder stubs by reusing method precodes for call counting indirection") was merged and subsequently reverted by #125285 due to a 12-13% regression on the Orchard benchmark. Investigation with CLR logging revealed the root cause:

  • With forwarder stubs eliminated, vtable slots point directly to precodes
  • During non-final tier transitions, SetCodeEntryPoint eagerly backpatched vtable slots via DoBackpatch()
  • Each DoBackpatch() call acquires MethodDescBackpatchInfoTracker lock (RtlEnterCriticalSection)
  • This caused 335 TryBackpatchEntryPointSlots calls and 91 slot writes during test execution
  • Additionally, entry point oscillations occurred (281 backpatch attempts) as methods bounced between code versions

Solution

The fix keeps vtable slots pointing to the precode (temporary entry point) during non-final tiers, deferring backpatching until the final tier:

Non-final tiers (Tier0, Tier0Instrumented):

  • Redirect precode target to compiled code via SetTargetInterlocked() (atomic, no lock needed)
  • Do NOT modify GetMethodEntryPoint() (vtable slot stays at temporary entry point)
  • DoBackpatch() short-circuits when GetMethodEntryPoint() == GetTemporaryEntryPoint(), preventing slot recording
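The short-circuit can be sketched as a toy model (the names below are simplified stand-ins for the real MethodDesc API, not the actual CoreCLR code):

```cpp
#include <cassert>

// Toy model of the DoBackpatch() short-circuit described above.
// Field names are illustrative stand-ins for GetMethodEntryPoint()
// and GetTemporaryEntryPoint().
struct MethodModel {
    const void* methodEntryPoint;    // what GetMethodEntryPoint() returns
    const void* temporaryEntryPoint; // the method's precode
};

// During non-final tiers the entry point is deliberately left at the
// temporary entry point, so slot recording and patching are skipped.
inline bool shouldBackpatchSlots(const MethodModel& m) {
    return m.methodEntryPoint != m.temporaryEntryPoint;
}
```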

Final tier (Tier1):

  • Set vtable slot to final code via SetMethodEntryPoint()
  • Reset precode to prestub via ResetTargetInterlocked()
  • Subsequent calls through unpatched vtable slots flow through prestub → DoBackpatch(), which lazily discovers, records, and patches each slot
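The two publish paths above can be modeled with a small self-contained sketch (hypothetical names; the real runtime uses SetTargetInterlocked/ResetTargetInterlocked/SetMethodEntryPoint and takes the backpatch lock on the final-tier path):

```cpp
#include <atomic>
#include <cassert>

// Toy model of the two-tier publish scheme.
enum Code { Prestub, Tier0Code, Tier1Code };

struct Method {
    std::atomic<Code> precodeTarget{Prestub}; // where the precode jumps
    Code vtableSlot = Prestub;                // stays at the precode until final tier
};

// Non-final tier: redirect only the precode target, atomically.
// The vtable slot is left alone, so no backpatch lock is needed.
inline bool publishNonFinal(Method& m, Code newCode, bool onlyFromPrestub) {
    if (onlyFromPrestub) {
        Code expected = Prestub;
        return m.precodeTarget.compare_exchange_strong(expected, newCode);
    }
    m.precodeTarget.store(newCode);
    return true;
}

// Final tier: publish the code in the vtable slot (done under the
// MethodDescBackpatchInfoTracker lock in the real runtime) and reset
// the precode to the prestub so unpatched slots are lazily discovered
// and patched via DoBackpatch().
inline void publishFinal(Method& m, Code finalCode) {
    m.vtableSlot = finalCode;
    m.precodeTarget.store(Prestub);
}
```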

Results

Metric                        Before (forwarder stubs)   After optimization
TryBackpatchEntryPointSlots   335                        0
BackpatchSlots                335                        0
Slot writes                   91                         0
Entry point oscillations      281                        0

All vtable slot patching now happens lazily at final tier via DoBackpatch().

Changes

Core optimization (commits 3-4)

  • callcounting.cpp: 8 paths modified to use SetBackpatchableEntryPoint helper
  • codeversion.cpp: Initial publish path modified for backpatchable methods
  • tieredcompilation.cpp: Tiering delay activation path modified
  • prestub.cpp: Updated comments describing the new design invariant

Logging (commits 1, 5)

  • Added LOG calls in SetCodeEntryPoint, TryBackpatchEntryPointSlots, Backpatch_Locked (method.cpp, methoddescbackpatchinfo.cpp)
  • These use LF_TIEREDCOMPILATION at LL_INFO10000

Refactoring (commit 6)

  • Extracted MethodDesc::SetBackpatchableEntryPoint() helper consolidating 10 duplicated code paths into one method
  • Net reduction of ~82 lines

Thread Safety

  • Non-final tier paths: Use SetTargetInterlocked() which is atomic — no lock required
  • Final tier paths: Called under MethodDescBackpatchInfoTracker lock (asserted by SetMethodEntryPoint)
  • HasNativeCode(): Independent of GetMethodEntryPoint() — checks NativeCodeVersion, unaffected by keeping vtable slot at temporary entry point

Testing

  • Custom vtable backpatch test with abstract base class, derived types, factory method, and hot loops exercising the full TieredPGO pipeline
  • Verified with CLR logging that the full lifecycle is visible (non-final redirects + final tier patching)

davidwrighton and others added 8 commits March 9, 2026 14:22
Add a new Copilot skill that guides running any coreclr or libraries
test with the CoreCLR diagnostic logging subsystem enabled. The skill
documents all available log facilities (LF_*), extended facilities
(LF2_*), log levels, and output control environment variables, and
provides step-by-step instructions for capturing log files.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add logging to the vtable slot backpatching code path so that each
entry point update is visible in CLR logs when LF_TIEREDCOMPILATION
is enabled. The new LOG calls cover:

- MethodDesc::SetCodeEntryPoint: logs which path is taken
  (BackpatchSlots, Precode, StableEntryPoint)
- MethodDesc::TryBackpatchEntryPointSlots: logs entry point
  transitions and early-exit reasons
- MethodDescBackpatchInfoTracker::Backpatch_Locked: logs the method
  and count of slots backpatched
- EntryPointSlots::Backpatch_Locked: logs each individual slot write
  with slot type (Normal, Vtable, Executable, ExecutableRel32)

All logging uses LF_TIEREDCOMPILATION at LL_INFO10000, matching
existing tiered compilation log patterns.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
For backpatchable methods, avoid calling SetCodeEntryPoint (which
eagerly backpatches all recorded vtable slots) during non-final tier
transitions. Instead, use SetMethodEntryPoint to update the method
entry point and SetTargetInterlocked to redirect the precode to the
new code, keeping vtable slots pointing to the precode.

For final tier activation, reset the precode to prestub so that calls
through vtable slots trigger DoBackpatch() for lazy slot discovery,
recording, and patching to the final tier code.

This eliminates the PreStub lock contention that caused a 12-13%
regression in PR dotnet#124664, while preserving the forwarder stub
elimination benefits.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
During non-final tiers, keep GetMethodEntryPoint() at the temporary
entry point (precode) instead of setting it to native code. This causes
DoBackpatch() to short-circuit early, preventing vtable slot recording
and patching during non-final tier transitions. The precode target is
still redirected to native code or call counting stubs via
SetTargetInterlocked, so calls through the precode work correctly.

When the final tier is activated, SetMethodEntryPoint() is set to the
final code and the precode is reset to prestub. The next call through
a vtable slot triggers DoBackpatch(), which records and patches the slot
to point directly to the final tier code. Subsequent calls bypass the
precode entirely.

Key changes:
- PublishVersionedMethodCode: skip TrySetInitialCodeEntryPointForVersionableMethod
  for non-final tiers with call counting; just redirect the precode target
- CallCountingManager::SetCodeEntryPoint: remove SetMethodEntryPoint and
  BackpatchToResetEntryPointSlots calls for non-final tier backpatchable methods
- TrySetCodeEntryPointAndRecordMethodForCallCounting: remove SetMethodEntryPoint
- CompleteCallCounting: remove SetMethodEntryPoint for non-final tiers
- DoBackpatch comments: updated to describe the new invariant

Results (TryBackpatchEntryPointSlots / BackpatchSlots / slot writes):
- Baseline (forwarder stubs):    335 / 91 / -
- Previous optimization:         281 / 0  / 60
- After this change:             0   / 0  / 0

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The existing LOG calls in MethodDesc::SetCodeEntryPoint only cover
non-backpatchable methods. Backpatchable methods bypass SetCodeEntryPoint
and go through dedicated paths in CallCountingManager::SetCodeEntryPoint,
CompleteCallCounting, PublishVersionableCodeIfNecessary, and
TrySetCodeEntryPointAndRecordMethodForCallCounting. Without logging in
these paths, the vtable slot lifecycle for backpatchable methods is
invisible in checked build logs.

Add LOG calls at all 10 vtable slot backpatch paths with descriptive
labels: InitialPublish-NonFinal, CallCountingStub, ThresholdReached,
NonFinal, FinalTier, FinalTier-NonDefault, SameVersion-NonFinal,
DiffVersion-FinalTier, DiffVersion-NonFinal, and DelayActivation-NonFinal.

All use LF_TIEREDCOMPILATION at LL_INFO10000, consistent with the
existing LOG calls in method.cpp.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…aths

Consolidate 10 duplicated vtable slot backpatch code patterns across
callcounting.cpp, codeversion.cpp, and tieredcompilation.cpp into a
single MethodDesc::SetBackpatchableEntryPoint helper method.

The helper encapsulates the two core patterns:
- Final tier: SetMethodEntryPoint(code) + ResetTargetInterlocked()
- Non-final tier: SetTargetInterlocked(target, fOnlyRedirectFromPrestub)

Includes centralized LOG calls and an assert to prevent invalid
parameter combinations (isFinalTier + fOnlyRedirectFromPrestub).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…_issue

# Conflicts:
#	src/coreclr/vm/callcounting.cpp
Copilot AI review requested due to automatic review settings March 10, 2026 03:11
@dotnet-policy-service
Contributor

Tagging subscribers to this area: @agocke
See info in area-owners.md if you want to be subscribed.

Contributor

Copilot AI left a comment


Pull request overview

This PR changes tiered compilation and call counting for backpatchable (vtable-slot) methods to avoid eagerly backpatching vtable slots during non-final tiers, deferring slot patching to the final tier to reduce lock contention and entry point oscillations.

Changes:

  • Introduces MethodDesc::SetBackpatchableEntryPoint() and updates tiering/call-counting/version-publish paths to redirect the temporary-entrypoint precode during non-final tiers instead of backpatching vtable slots.
  • Updates the final-tier activation path for backpatchable methods to set the owning vtable slot to final code and reset the precode to prestub to enable lazy DoBackpatch() patching.
  • Adds additional tiered-compilation logging and a new “coreclr-logging-test” skill doc for capturing CoreCLR logs.

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 2 comments.

Summary per file:

  • src/coreclr/vm/tieredcompilation.cpp: Uses SetBackpatchableEntryPoint for backpatchable methods when recording for call counting during tiering delay.
  • src/coreclr/vm/prestub.cpp: Updates DoBackpatch() commentary to describe the new non-final vs final tier behavior/invariant.
  • src/coreclr/vm/methoddescbackpatchinfo.cpp: Adds high-verbosity logging for slot backpatch operations and counts patched slots.
  • src/coreclr/vm/method.hpp: Declares MethodDesc::SetBackpatchableEntryPoint().
  • src/coreclr/vm/method.cpp: Implements SetBackpatchableEntryPoint() and adds logging in entrypoint/backpatch code paths.
  • src/coreclr/vm/codeversion.cpp: Updates initial publish path to redirect precode (no slot backpatch) for call-counting-first-call + backpatchable methods.
  • src/coreclr/vm/callcounting.h: Updates call counting design documentation to reflect precode-based indirection for backpatchable methods.
  • src/coreclr/vm/callcounting.cpp: Removes forwarder-stub tracking and switches multiple paths to use SetBackpatchableEntryPoint().
  • .github/skills/coreclr-logging-test/SKILL.md: Adds documentation for running tests with CLR logging enabled and collecting logs.

Comment on lines +28 to +33
static const char *slotTypeNames[] = { "Normal", "Vtable", "Executable", "ExecutableRel32" };
static_assert(ARRAY_SIZE(slotTypeNames) == SlotType_Count, "Update slotTypeNames when adding new SlotTypes");

LOG((LF_TIEREDCOMPILATION, LL_INFO10000,
"EntryPointSlots::Backpatch_Locked slot=" FMT_ADDR " slotType=%s entryPoint=" FMT_ADDR "\n",
DBG_ADDR(slot), slotTypeNames[slotType], DBG_ADDR(entryPoint)));

Copilot AI Mar 10, 2026


slotTypeNames is only referenced inside LOG(...). When LOGGING is not defined, LOG expands to an empty macro (see src/coreclr/inc/log.h), which will make slotTypeNames an unused local and can trigger -Wunused-variable warnings (often treated as errors in CoreCLR builds). Consider wrapping the slotTypeNames declaration (and the LOG call that uses it) in #ifdef LOGGING (or otherwise ensuring it is only compiled when logging is enabled).
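One way to apply the suggested guard, shown here as a minimal self-contained analog rather than the actual patch (LOGGING is defined locally for illustration, and slotTypeName is a stand-in, not a CoreCLR API):

```cpp
#include <string>

// Compile the name table only when LOGGING is defined, so non-logging
// builds see no unused variable (mirrors wrapping the declaration and
// the LOG call in #ifdef LOGGING).
#define LOGGING 1

#ifdef LOGGING
static const char* const slotTypeNames[] =
    { "Normal", "Vtable", "Executable", "ExecutableRel32" };
inline const char* slotTypeName(int t) { return slotTypeNames[t]; }
#else
inline const char* slotTypeName(int) { return ""; }
#endif // LOGGING
```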

Comment on lines +22 to +31
- For tiered methods that do not normally use a MethodDesc precode as the stable entry point (virtual and interface methods
when slot backpatching is enabled), the method's own temporary-entrypoint precode is redirected to the call counting stub,
and vtable slots are reset to point to the temporary entry point. This ensures calls flow through temporary-entrypoint
precode -> call counting stub -> native code, and the call counting stub can be safely deleted since vtable slots don't
point to it directly. GetMethodEntryPoint() is kept at the native code entry point (not the temporary entry point) so that
DoBackpatch() can still record new vtable slots after the precode reverts to prestub. During call counting, there is a
bounded window where new vtable slots may not be recorded because the precode target is the call counting stub rather than
the prestub. These slots are corrected once call counting stubs are deleted and the precode reverts to prestub.
When call counting ends, the precode is always reset to prestub (never to native code), preserving the invariant
that new vtable slots can be discovered and recorded by DoBackpatch().

Copilot AI Mar 10, 2026


The updated call counting overview still states that for backpatchable methods GetMethodEntryPoint() is kept at the native-code entry point and that vtable slots are reset when call counting starts. With the new non-final-tier design (vtable slots kept at the temporary entry point, and slot recording/backpatching deferred until final tier), this description appears inconsistent with the current behavior and the updated invariants described in prestub.cpp. Please update this comment block to reflect the new scheme (temporary entry point stays as the method entry point during non-final tiers; precode target is redirected; backpatching/slot discovery happens lazily at final tier).

…pilation lifecycle

Funcptr precodes for backpatchable methods are now registered in the entry point
slot backpatching table at creation time, replacing the ad-hoc funcptr precode
lookup and patching that was previously done in TryBackpatchEntryPointSlots.

This ensures:
- During non-final tiers, funcptr precode targets point to the method's precode
  (temporary entry point), so calls flow through the same path as vtable calls
- At final tier, SetBackpatchableEntryPoint calls Backpatch_Locked to update all
  registered slots (including funcptr precodes) to the final code
- During rejit, BackpatchToResetEntryPointSlots resets funcptr targets via the
  backpatching table, ensuring proper re-discovery through the prestub

The change adds Precode::GetTargetSlot() and StubPrecode::GetTargetSlot() to
expose the writable target field address for registration as a SlotType_Normal
entry point slot.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>