
Conversation

@gacevicljubisa (Member) commented Sep 26, 2025

Checklist

  • I have read the coding guide.
  • My change requires a documentation update, and I have done it.
  • I have added tests to cover my changes.
  • I have filled out the description and linked the related issues.

Description

This change ensures that erasure-coded content can be properly pinned, maintaining data availability and integrity within the Swarm network. The issue is described in this comment.

The basic getter doesn't understand the replica structure of erasure-coded chunks, so the traversal fails when trying to pin such content.

The getter was changed to use replicas.NewGetter, which is specifically designed to handle erasure-coded chunks.


@gacevicljubisa (Member, Author) commented Oct 2, 2025

Testing

Steps to Reproduce the Issue:

  1. Upload a file using erasure coding, resulting in a specific hash (e.g., ffe6d7fc805ede4581af1a01d9fcedf21332cfe3353d74a4961e53c360d06342)
  2. Attempt to pin the file using the Bee API
  3. Observe that the pinning operation fails, despite the file being downloadable

Example Download Command:

curl localhost:1633/bzz/ffe6d7fc805ede4581af1a01d9fcedf21332cfe3353d74a4961e53c360d06342/ -o "wiki copy.png"

Verification:

After applying this fix, the same hash should be pinnable via:

curl -X POST localhost:1633/pins/ffe6d7fc805ede4581af1a01d9fcedf21332cfe3353d74a4961e53c360d06342

@bcsorvasi bcsorvasi added this to the v2.7.0 milestone Oct 7, 2025
@bcsorvasi bcsorvasi added the bug Something isn't working label Oct 7, 2025
@zelig (Member) left a comment

I don't think this should go in. We need:

  • an explanation of what the issue was
  • an explanation of why the issue no longer appears
  • a justification for why redundancy.DefaultLevel is used

return
}

getter := s.storer.Download(true)
Member
I do not think this does what we need.
What caused the problem and how did you fix it?

Honestly, I don't think replicas (as opposed to parities) should be pinned at all, because they are the same as the root chunk. When it comes to repair, it should just upload the replicas to Swarm as well.

Also, it is very suspicious that redundancy.DefaultLevel is used here. We can assume we know the redundancy level of a pinned file, no?

@gacevicljubisa (Member, Author) commented Nov 22, 2025

The idea behind this PR was to address just a specific comment in issue #5228.

Thanks for clarifying that replicas shouldn't be pinned.
I used redundancy.DefaultLevel as an assumption; it is widely used across the codebase. I recently created a new issue about the use of redundancy.DefaultLevel and the inconsistent Swarm-Redundancy-Level header.

I wasn't aware we can extract the redundancy level from the pinned file.

Can you suggest the flow for how pinning should be handled if the root chunk cannot be found?

Also, how can we repair by uploading the replicas to Swarm as well?

