Reduced number of addresses per NodeID in peermanager to 1 #2990

Open
pompon0 wants to merge 6 commits into main from gprusak-less-addrs

Conversation

pompon0 (Contributor) commented Feb 27, 2026

With the 6.3 upgrade, invalid/outdated addresses are no longer gossiped by honest nodes (malicious nodes can poison the address buffer anyway). With that, we can simplify the logic to handle just one address per peer. I've also added an additional mechanism to prefer public IPs over private IPs, to account for nodes gossiping private addresses (that they have configured as persistent peers, for example).

github-actions bot commented Feb 27, 2026

The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ✅ passed | ✅ passed | ✅ passed | Feb 27, 2026, 12:29 PM |

codecov bot commented Feb 27, 2026

Codecov Report

❌ Patch coverage is 89.06250% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.76%. Comparing base (89daf14) to head (4e896ef).
⚠️ Report is 6 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| sei-tendermint/internal/p2p/peermanager.go | 64.70% | 4 Missing and 2 partials ⚠️ |
| sei-tendermint/internal/p2p/router.go | 50.00% | 1 Missing ⚠️ |
Additional details and impacted files

Impacted file tree graph

@@             Coverage Diff             @@
##             main    #2990       +/-   ##
===========================================
+ Coverage   58.08%   71.76%   +13.67%     
===========================================
  Files        2109       22     -2087     
  Lines      173234     1948   -171286     
===========================================
- Hits       100626     1398    -99228     
+ Misses      63664      450    -63214     
+ Partials     8944      100     -8844     
| Flag | Coverage Δ |
| --- | --- |
| sei-chain | 72.48% <89.06%> (+14.42%) ⬆️ |
| sei-db | 69.50% <ø> (ø) |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Files with missing lines | Coverage Δ |
| --- | --- |
| sei-tendermint/internal/p2p/address.go | 91.52% <100.00%> (+0.78%) ⬆️ |
| sei-tendermint/internal/p2p/peermanager_pool.go | 100.00% <100.00%> (ø) |
| sei-tendermint/internal/p2p/router.go | 83.05% <50.00%> (-0.85%) ⬇️ |
| sei-tendermint/internal/p2p/peermanager.go | 77.29% <64.70%> (-11.08%) ⬇️ |

... and 2089 files with indirect coverage changes


Comment on lines +112 to +118

```go
for id, old := range p.addrs {
	if old.lastFail.IsPresent() {
		delete(p.addrs, id)
		p.addrs[pa.addr.NodeID] = pa
		return nil
	}
}
```

Check warning — Code scanning / CodeQL: Iteration over map (Warning). Iteration over map may be a possible source of non-determinism.
```diff
-if _, ok := peerAddrs.addrs[addr]; ok {
-	peerAddrs.fails[addr] = now
+if peerAddr, ok := p.addrs[addr.NodeID]; ok && peerAddr.addr == addr {
+	peerAddr.lastFail = utils.Some(time.Now())
```

Check warning — Code scanning / CodeQL: Calling the system time (Warning). Calling the system time may be a possible source of non-determinism.
```go
	p.addrs[pa.addr.NodeID] = pa
	return nil
}
// Otherwise, do not replace.
```
A reviewer (Contributor) commented:

> Hmm, what if my legitimate peer is moving to a new data center and I do want to replace it? Do I need to shut down the old one and wait for the address to become valid?

```go
		return nil
	}
}
// If the new address is public, find a private address to prune.
```
A reviewer (Contributor) commented:

> Could the private address actually belong to a high-staked validator I don't want to lose?



3 participants