
fix: fixup/down for the remove path #28

Merged
rustatian merged 11 commits into master from fix/follow-floyds-algo-better on Mar 8, 2026

Conversation

@rustatian
Member

@rustatian rustatian commented Mar 8, 2026

Reason for This PR

  • Strictly follow Floyd's binary-heap algorithm in the Remove method.

Description of Changes

  • Apply fixup/fixdown in the Remove path.
  • More tests, more benchmarks.

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the MIT license.

PR Checklist

[Author TODO: Meet these criteria.]
[Reviewer TODO: Verify that these criteria are met. Request changes if not]

  • All commits in this PR are signed (git commit -s).
  • The reason for this PR is clearly provided (issue no. or explanation).
  • The description of changes is clear and encompassing.
  • Any required documentation changes (code and docs) are included in this PR.
  • Any user-facing changes are mentioned in CHANGELOG.md.
  • All added/changed functionality is tested.

Summary by CodeRabbit

  • Bug Fixes

    • Removals now fully restore heap ordering and reliably wake blocked inserts so ExtractMin and subsequent operations behave correctly.
  • Performance

    • Simplified synchronization and length handling for more consistent behavior and reduced overhead.
    • Clear operation preserves internal capacity to avoid unnecessary reallocations.
  • Tests

    • Added extensive regression, concurrency, stress, edge-case and benchmark tests covering remove/insert/extract and backpressure scenarios.
  • Chores

    • Updated CI/test workflow and added coverage configuration.

Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
@rustatian rustatian self-assigned this Mar 8, 2026
Copilot AI review requested due to automatic review settings March 8, 2026 10:05
@rustatian rustatian added the enhancement New feature or request label Mar 8, 2026
@coderabbitai

coderabbitai Bot commented Mar 8, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 5.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title 'fix: fixup/down for the remove path' is specific and directly describes the main change: correcting fixup/down behavior in the Remove method to follow Floyd's algorithm.
  • Description check: ✅ Passed. The PR description includes all required sections: clear reasoning (follow Floyd's algorithm in Remove), a concise description of changes (fixup/down and test additions), and a completed PR checklist.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Copilot AI left a comment


Pull request overview

This PR updates the binary heap’s Remove path to more closely follow Floyd’s heapify approach so the heap property is restored after in-place compaction, and expands the test/benchmark suite around remove/backpressure/concurrency behavior.

Changes:

  • Re-heapify the heap after Remove() compacts the underlying slice (Floyd-style heapify).
  • Replace atomic length tracking with lock-based len(bh.items) and adjust backpressure logic in Insert/ExtractMin.
  • Add multiple regression/edge/concurrency tests and several new benchmarks.
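The Floyd-style rebuild described above can be sketched in isolation. This is a minimal illustration over an int min-heap; the real BinHeap is generic and lock-protected, and this standalone fixDown only mirrors the method of the same name:

```go
package main

import "fmt"

// fixDown sifts the element at index i down until the min-heap
// property holds for the subtree rooted at i (valid indices: [0, n)).
func fixDown(items []int, i, n int) {
	for {
		left := 2*i + 1
		if left >= n {
			return
		}
		smallest := left
		if right := left + 1; right < n && items[right] < items[left] {
			smallest = right
		}
		if items[i] <= items[smallest] {
			return
		}
		items[i], items[smallest] = items[smallest], items[i]
		i = smallest
	}
}

// heapify rebuilds the heap in O(n) by sifting down every internal
// node, starting from the last parent (Floyd's algorithm).
func heapify(items []int) {
	n := len(items)
	for i := n/2 - 1; i >= 0; i-- {
		fixDown(items, i, n)
	}
}

func main() {
	// After Remove() compacts the slice, ordering may be broken.
	items := []int{9, 3, 7, 1, 8, 2}
	heapify(items)
	fmt.Println(items[0]) // the root holds the minimum again
}
```

Starting at `n/2 - 1` skips all leaves, which is what makes the full rebuild linear rather than O(n log n).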

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

  • binary_heap.go: Updates Remove() to re-heapify after compaction; changes length management and bounded-queue signaling behavior.
  • binary_heap_test.go: Adds regression tests for Remove() heap property restoration plus additional edge/concurrency tests and new benchmarks.
  • monotonic_stack.go: Changes stack clearing to reuse the existing slice backing array.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
monotonic_stack.go (1)

47-48: Consider capping retained capacity in clear().

BinHeap.Remove() reuses one stack instance per heap, so st.idx = st.idx[:0] keeps the largest backing array alive for the lifetime of that heap. A single large, sparse remove can permanently ratchet memory usage upward. If the goal is to reduce churn, consider shrinking back toward the default capacity once cap(st.idx) crosses a threshold.

Possible bounded-retention variant
 func (st *stack) clear() {
-	st.idx = st.idx[:0]
+	const defaultCap = 10
+	if cap(st.idx) > defaultCap*4 {
+		st.idx = make([][2]int, 0, defaultCap)
+		return
+	}
+	st.idx = st.idx[:0]
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monotonic_stack.go` around lines 47 - 48, The clear method on type stack
currently does st.idx = st.idx[:0], which can retain a very large backing array
after a single large Remove; modify stack.clear() to cap retained capacity by
checking cap(st.idx) and if it exceeds a chosen threshold (e.g., maxRetainCap)
reallocate st.idx to a new zero-length slice with a bounded capacity (e.g.,
defaultCap) instead of slicing to zero; reference the stack.clear method and the
reuse in BinHeap.Remove() when implementing the threshold and new allocation to
avoid permanently holding huge arrays.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@binary_heap.go`:
- Around line 120-122: Remove() currently shrinks the heap but does not wake
producers waiting in Insert(); after the bh.st.clear() call in Remove(), notify
the same waiter used by Insert() to unblock producers (e.g., call the
condition/semaphore notify used in Insert(), such as
bh.notFull.Signal()/Broadcast() or bh.st.Signal()/Broadcast() depending on the
implementation) so any goroutines waiting on len(bh.items) >= bh.maxLen are
awakened before returning out from Remove().


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ccba6ee8-9151-4044-b7e6-2c0cd613d0fc

📥 Commits

Reviewing files that changed from the base of the PR and between 0c9c546 and e2fab05.

📒 Files selected for processing (3)
  • binary_heap.go
  • binary_heap_test.go
  • monotonic_stack.go

Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
binary_heap_test.go (1)

351-352: ⚠️ Potential issue | 🟡 Minor

Error message mismatch.

The check validates len(out) != 6 but the error message says "should be 5". The check is correct (6 items have GroupID "1"), but the message is misleading.

🐛 Fix the error message
 	out := bh.Remove("1")
-	if len(out) != 6 {
-		t.Fatal("should be 5")
-	}
+	require.Len(t, out, 6, "expected 6 items with GroupID '1'")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@binary_heap_test.go` around lines 351 - 352, The test's failure message is
misleading: the assertion checks if len(out) != 6 but reports "should be 5";
update the error message in the test (the assertion around variable out in
binary_heap_test.go) to reflect the expected value 6 (or use a formatted message
like "expected 6, got %d" with len(out)) so the message matches the check in the
test function.
🧹 Nitpick comments (1)
binary_heap.go (1)

183-186: Minor inconsistency: Signal after unlock vs. within lock.

ExtractMin() and Insert() call Signal() after releasing the lock, whereas Remove() calls Broadcast() while still holding it (before the deferred unlock). Both approaches are valid in Go, but the inconsistent pattern could cause confusion. Consider using the same style throughout for clarity.

♻️ Optional: Consistent signaling while holding lock
 	// remove item
 	delete(bh.exists, item.ID())
 
-	bh.cond.L.Unlock()
-
 	// signal blocked producers waiting for space
 	bh.cond.Signal()
+	bh.cond.L.Unlock()
 	return item
 }

Same pattern could be applied to Insert().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@binary_heap.go` around lines 183 - 186, The signaling pattern is
inconsistent: Remove() calls bh.cond.Broadcast() while holding the lock but
ExtractMin() and Insert() call bh.cond.Signal() after bh.cond.L.Unlock(), which
is confusing—make them consistent by moving the Signal calls to occur while
holding the lock (i.e., call bh.cond.Signal() before bh.cond.L.Unlock() in
ExtractMin() and Insert()) so all three methods use the same "signal while
locked" pattern with bh.cond.Signal()/Broadcast().

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7d1a4079-25cd-474f-ab2b-f145b33ec342

📥 Commits

Reviewing files that changed from the base of the PR and between e2fab05 and 1f7207b.

📒 Files selected for processing (2)
  • binary_heap.go
  • binary_heap_test.go

Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@binary_heap_test.go`:
- Around line 581-582: The stress test can hang because
NewBinHeap[Item](100_000) may fill and block producers inside Insert() before
they re-check the done flag; change the test to use an effectively unbounded
maxLen (e.g., pass 0 or math.MaxInt or a constant named unboundedMax) when
creating the heap via NewBinHeap, or add explicit draining/unblocking logic that
wakes blocked producers before waiting on producerWg; update the test where
NewBinHeap[Item](100_000) is used (also at the other occurrence) to either
create the heap with an unbounded maxLen or ensure producers are
signaled/drained after flipping done so producer goroutines exit and
producerWg.Wait() cannot deadlock.

In `@binary_heap.go`:
- Around line 182-183: The current use of a single sync.Cond with
bh.cond.Signal() can deadlock because Insert() and ExtractMin() wait on
different predicates; change the signaling to avoid missed wakeups by replacing
bh.cond.Signal() with bh.cond.Broadcast() at both the producer-side
free-capacity site (currently bh.cond.Signal() near where capacity is freed) and
the consumer-side site (the wait in ExtractMin()/Insert() around Line 160), or
alternatively refactor to two condition variables (e.g., bh.notEmpty and
bh.notFull) and use Signal() on the appropriate cond in Insert() and
ExtractMin(); update all references to bh.cond accordingly so
producers/consumers wake reliably.
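The two-condition-variable refactor suggested there might look roughly like this. It is a simplified bounded FIFO rather than the actual heap; boundedQueue, notFull, and notEmpty are illustrative names, not the library's API:

```go
package main

import (
	"fmt"
	"sync"
)

// boundedQueue illustrates the two-cond-var pattern: producers wait on
// notFull and consumers wait on notEmpty, so a Signal always wakes a
// goroutine that actually cares about that state change.
type boundedQueue struct {
	mu       sync.Mutex
	notFull  *sync.Cond
	notEmpty *sync.Cond
	items    []int
	maxLen   int
}

func newBoundedQueue(maxLen int) *boundedQueue {
	q := &boundedQueue{maxLen: maxLen}
	q.notFull = sync.NewCond(&q.mu)
	q.notEmpty = sync.NewCond(&q.mu)
	return q
}

func (q *boundedQueue) Insert(v int) {
	q.mu.Lock()
	for len(q.items) >= q.maxLen {
		q.notFull.Wait()
	}
	q.items = append(q.items, v)
	q.notEmpty.Signal() // wake exactly one waiting consumer
	q.mu.Unlock()
}

func (q *boundedQueue) ExtractMin() int {
	q.mu.Lock()
	for len(q.items) == 0 {
		q.notEmpty.Wait()
	}
	// illustrative: take the head; the real heap pops the root
	v := q.items[0]
	q.items = q.items[1:]
	q.notFull.Signal() // wake exactly one waiting producer
	q.mu.Unlock()
	return v
}

func main() {
	q := newBoundedQueue(2)
	go func() {
		for i := 0; i < 4; i++ {
			q.Insert(i)
		}
	}()
	for i := 0; i < 4; i++ {
		fmt.Println(q.ExtractMin())
	}
}
```

With a single shared cond, Signal() can wake a waiter blocked on the wrong predicate; splitting the conditions (or using Broadcast()) avoids that class of missed wakeup.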

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5963938e-4503-463f-94ff-4daf60495610

📥 Commits

Reviewing files that changed from the base of the PR and between 1f7207b and 85fc1f2.

📒 Files selected for processing (2)
  • binary_heap.go
  • binary_heap_test.go

Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
binary_heap_test.go (1)

523-525: Time-based synchronization may be flaky under load.

The 100ms sleep assumes the goroutine will have started and blocked by then. While generally sufficient, this could fail on heavily loaded CI runners.

Consider using a channel or sync primitive to signal when the goroutine has actually reached the blocking point, though this would require changes to the heap implementation. The current approach is acceptable for most CI environments.
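A handshake along those lines could be sketched as follows. blockedCh is a hypothetical name, and the signal fires just before the potentially blocking send, so a narrow race window remains unless the heap itself exposes a hook at the blocking point:

```go
package main

import "fmt"

func main() {
	full := make(chan int, 1) // stands in for a full bounded heap
	full <- 1

	blockedCh := make(chan struct{})
	done := make(chan struct{})
	go func() {
		close(blockedCh) // signal: about to block on the full "heap"
		full <- 2        // blocks until the main goroutine drains
		close(done)
	}()

	<-blockedCh // instead of time.Sleep(100 * time.Millisecond)
	fmt.Println("producer reached the blocking point")

	<-full // free capacity, unblocking the producer
	<-done
	fmt.Println("producer finished")
}
```

Unlike a sleep, the channel wait never fails on a slow CI runner and never wastes time on a fast one; it only narrows, not fully closes, the gap between "signaled" and "actually blocked".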

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@binary_heap_test.go` around lines 523 - 525, Replace the fragile time.Sleep
by having the producer goroutine signal when it reaches the blocking point and
waiting on that signal in the test; e.g., add a channel (blockedCh) in the test
and modify the producer callback/closure that calls the heap insertion (the code
that currently blocks on bh when full) to send on blockedCh just before the
blocking push, then in the test do <-blockedCh and assert require.Equal(t,
uint64(5), bh.Len()) — alternatively use a sync.WaitGroup similarly; reference
the producer goroutine and bh.Len() in the test and, if necessary, add a small
signaling hook where the producer calls bh.Push/Offer so the test can
deterministically observe the blocked state.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2b0508ae-55be-41e7-8723-530d00e51a12

📥 Commits

Reviewing files that changed from the base of the PR and between 85fc1f2 and a6b5e53.

📒 Files selected for processing (1)
  • binary_heap_test.go

Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
@codecov

codecov Bot commented Mar 8, 2026

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

Thanks for integrating Codecov - We've got you covered ☂️

dependabot Bot and others added 3 commits March 8, 2026 13:03
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 7 to 8.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](actions/download-artifact@v7...v8)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](actions/upload-artifact@v6...v7)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.github/workflows/linux.yml (1)

58-60: Only one matrix coverage artifact is consumed here.

This job now reads pq-ubuntu-latest only, so the macOS and Windows coverage artifacts produced by the matrix are dead CI weight. Either merge all matrix artifacts here, or only generate/upload coverage on a single matrix leg to save runtime and artifact storage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/linux.yml around lines 58 - 60, The workflow currently
downloads only the artifact named "pq-ubuntu-latest" (name: pq-ubuntu-latest)
into path ./coverage, leaving macOS/Windows matrix artifacts unused; either
change the download step to fetch all matrix artifact names (e.g., iterate or
use matrix.os to construct artifact names) and merge them into ./coverage, or
limit artifact upload to a single matrix leg by conditionally running the upload
only when a specific matrix value (e.g., matrix.os == 'ubuntu-latest' or a
job-level flag) is set; update the artifact download/upload steps accordingly so
the names and paths align with how artifacts are produced.
binary_heap.go (1)

114-121: Avoid rebuilding the heap on a no-op Remove.

When out is empty, the heap shape and ordering are unchanged. The full Floyd pass plus Broadcast() only adds O(n) work and wakes blocked goroutines for no state change; guard that block behind len(out) > 0.

♻️ Suggested change
-	// re-heapify after compaction (Floyd's algorithm)
-	n := len(bh.items)
-	for i := n/2 - 1; i >= 0; i-- {
-		bh.fixDown(i, n-1)
-	}
-
-	bh.st.clear()
-	bh.cond.Broadcast()
+	if len(out) > 0 {
+		// re-heapify after compaction (Floyd's algorithm)
+		n := len(bh.items)
+		for i := n/2 - 1; i >= 0; i-- {
+			bh.fixDown(i, n-1)
+		}
+		bh.cond.Broadcast()
+	}
+
+	bh.st.clear()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@binary_heap.go` around lines 114 - 121, Guard the expensive re-heapify and
wake-up steps so they only run when items were actually removed: check len(out)
> 0 before running the Floyd rebuild loop (the n := len(bh.items) / for i := n/2
- 1; i >= 0; i-- { bh.fixDown(i, n-1) }) and before calling bh.st.clear() and
bh.cond.Broadcast(); if len(out) == 0 simply return/skip the compaction work to
avoid O(n) work and unnecessary Broadcasts on a no-op Remove.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@binary_heap.go`:
- Around line 104-118: The Remove and ExtractMin implementations leave freed
elements reachable in bh.items' backing array, preventing GC; update Remove (the
loop that computes start/end from bh.st.Indices and then does bh.items =
append(bh.items[:start], bh.items[end+1:]...)) to first clear the freed slots by
setting each slot in the range [start,end] to nil (or the zero value for the
item type) before reslicing, and update ExtractMin to set bh.items[n-1] = nil
before doing bh.items = bh.items[:n-1]; after zeroing the freed entries,
continue with the existing re-heapify using bh.fixDown.
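The zeroing suggested there can be illustrated on a plain slice of pointers. popLast is a hypothetical helper; the real code operates on bh.items with a generic item type:

```go
package main

import "fmt"

type item struct{ id string }

// popLast removes the last element and nils out the freed slot so the
// slice's backing array no longer keeps the removed item reachable,
// allowing the garbage collector to reclaim it.
func popLast(items []*item) ([]*item, *item) {
	n := len(items)
	last := items[n-1]
	items[n-1] = nil // clear the freed slot before shrinking
	return items[:n-1], last
}

func main() {
	items := []*item{{id: "a"}, {id: "b"}}
	items, last := popLast(items)
	fmt.Println(last.id, len(items))
	// The slot beyond len(items) in the backing array is nil now:
	fmt.Println(items[:2][1] == nil)
}
```

Without the nil assignment, the pointer would stay reachable through the backing array for as long as the slice header lives, even though it sits past len().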


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3ebd4103-7c78-4b1a-8dc9-0ba8d3aac3ae

📥 Commits

Reviewing files that changed from the base of the PR and between a6b5e53 and 8aeb926.

📒 Files selected for processing (4)
  • .github/workflows/linux.yml
  • binary_heap.go
  • binary_heap_test.go
  • codecov.yml
✅ Files skipped from review due to trivial changes (1)
  • codecov.yml

Signed-off-by: Valery Piashchynski <piashchynski.valery@gmail.com>
@rustatian rustatian merged commit a47575e into master Mar 8, 2026
16 checks passed
@rustatian rustatian deleted the fix/follow-floyds-algo-better branch March 8, 2026 12:23