fix: allow dag import of 1MiB chunks wrapped in dag-pb

IPIP-499's unixfs-v1-2025 profile uses 1MiB chunks. With
--raw-leaves=false, protobuf wrapping pushes blocks slightly over 1MiB,
so the previous 1MiB SoftBlockLimit rejected these blocks on dag import.

Raise SoftBlockLimit to 2MiB to match the bitswap spec, which requires
implementations to support blocks up to 2MiB.

- raise SoftBlockLimit to 2MiB per the bitswap spec
- update error messages and help text
- bump boxo to main with ipfs/boxo#1101 (raised ChunkSizeLimit/BlockSizeLimit,
  256-byte overhead budget)
- update sharness tests for 2MiB boundary
- add test/cli boundary tests for block put, dag put, dag import,
  ipfs add (raw and wrapped leaves), and bitswap exchange, including
  regression tests for the libp2p message size hard limit
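
Why the old limit broke, in miniature: a runnable Go sketch of the size
arithmetic, using the ~14-byte dag-pb/UnixFS framing overhead cited in the
new test comments (an approximation, not an exact constant):

```go
package main

import "fmt"

func main() {
	const (
		chunk        = 1 << 20 // 1048576: IPIP-499 unixfs-v1-2025 chunk size
		oldSoftLimit = 1 << 20 // previous SoftBlockLimit (1MiB)
		newSoftLimit = 2 << 20 // 2097152: bitswap spec block size limit
		framing      = 14      // approximate dag-pb/UnixFS wrapping overhead
	)
	wrapped := chunk + framing           // one non-raw (protobuf-wrapped) leaf block
	fmt.Println(wrapped > oldSoftLimit)  // true: rejected on dag import before this change
	fmt.Println(wrapped <= newSoftLimit) // true: accepted under the 2MiB limit
}
```
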
Marcin Rataj 2026-02-06 23:17:28 +01:00
parent f57d13c2c2
commit 80581df473
15 changed files with 466 additions and 35 deletions


@@ -172,6 +172,16 @@ Buzhash or Rabin fingerprint chunker for content defined chunking by
specifying buzhash or rabin-[min]-[avg]-[max] (where min/avg/max refer
to the desired chunk sizes in bytes), e.g. 'rabin-262144-524288-1048576'.
The maximum accepted value for 'size-N' and rabin 'max' parameter is
2MiB minus 256 bytes (2096896 bytes). The 256-byte overhead budget is
reserved for protobuf/UnixFS framing so that serialized blocks stay
within the 2MiB block size limit from the bitswap spec. The buzhash
chunker uses a fixed internal maximum of 512KiB and is not affected.
Only the fixed-size chunker ('size-N') guarantees that the same data
will always produce the same CID. The rabin and buzhash chunkers may
change their internal parameters in a future release.
The following examples use very small byte sizes to demonstrate the
properties of the different chunkers on a small file. You'll likely
want to use chunk sizes around 1024 times larger for most files.
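
For illustration, a hypothetical sketch of the 'size-N' validation this
limit implies. validateSizeChunker is an assumed name, not boxo's actual
API; the error string matches the one asserted in the new tests:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// 2MiB minus the 256-byte framing budget described above.
const maxChunkSize = 2*1024*1024 - 256 // 2096896

// validateSizeChunker is a hypothetical helper, not boxo's real parser.
func validateSizeChunker(spec string) error {
	n, err := strconv.Atoi(strings.TrimPrefix(spec, "size-"))
	if err != nil {
		return fmt.Errorf("invalid chunker spec %q: %v", spec, err)
	}
	if n > maxChunkSize {
		return fmt.Errorf("chunker parameters may not exceed the maximum chunk size of %d", maxChunkSize)
	}
	return nil
}

func main() {
	fmt.Println(validateSizeChunker("size-1048576")) // <nil>: 1MiB chunks are fine
	fmt.Println(validateSizeChunker("size-2097152")) // error: a full 2MiB chunk exceeds the budget
}
```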


@@ -14,14 +14,16 @@ import (
const (
AllowBigBlockOptionName = "allow-big-block"
SoftBlockLimit = 1024 * 1024 // https://github.com/ipfs/kubo/issues/7421#issuecomment-910833499
MaxPinNameBytes = 255 // Maximum number of bytes allowed for a pin name
// SoftBlockLimit is the maximum block size for bitswap transfer.
// If this value changes, update the "2MiB" strings in error messages below.
SoftBlockLimit = 2 * 1024 * 1024 // https://specs.ipfs.tech/bitswap-protocol/#block-sizes
MaxPinNameBytes = 255 // Maximum number of bytes allowed for a pin name
)
var AllowBigBlockOption cmds.Option
func init() {
AllowBigBlockOption = cmds.BoolOption(AllowBigBlockOptionName, "Disable block size check and allow creation of blocks bigger than 1MiB. WARNING: such blocks won't be transferable over the standard bitswap.").WithDefault(false)
AllowBigBlockOption = cmds.BoolOption(AllowBigBlockOptionName, "Disable block size check and allow creation of blocks bigger than 2MiB. WARNING: such blocks won't be transferable over the standard bitswap.").WithDefault(false)
}
func CheckCIDSize(req *cmds.Request, c cid.Cid, dagAPI coreiface.APIDagService) error {
@@ -44,11 +46,10 @@ func CheckBlockSize(req *cmds.Request, size uint64) error {
return nil
}
// We do not allow producing blocks bigger than 1 MiB to avoid errors
// when transmitting them over BitSwap. The 1 MiB constant is an
// unenforced and undeclared rule of thumb hard-coded here.
// Block size is limited to SoftBlockLimit (2MiB) as defined in the bitswap spec.
// https://specs.ipfs.tech/bitswap-protocol/#block-sizes
if size > SoftBlockLimit {
return fmt.Errorf("produced block is over 1MiB: big blocks can't be exchanged with other peers. consider using UnixFS for automatic chunking of bigger files, or pass --allow-big-block to override")
return fmt.Errorf("produced block is over 2MiB: big blocks can't be exchanged with other peers. consider using UnixFS for automatic chunking of bigger files, or pass --allow-big-block to override")
}
return nil
}


@@ -67,6 +67,10 @@ The `test-cid-v1` and `test-cid-v1-wide` profiles have been removed. Use `unixfs
When writing to MFS directories that use CIDv1 (via `--cid-version=1` or `ipfs files chcid`), single-block files now produce raw block CIDs (like `bafkrei...`), matching the behavior of `ipfs add --raw-leaves`. Previously, MFS would wrap single-block files in dag-pb even when raw leaves were enabled. CIDv0 directories continue to use dag-pb.
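
A minimal sketch, in the style of the test harness used by this commit, of
how the new MFS behavior could be checked; the `--cid-version` and `--hash`
flag spellings for `ipfs files` are assumptions:

```go
package cli

import (
	"strings"
	"testing"

	"github.com/ipfs/kubo/test/cli/harness"
)

func TestMFSSingleBlockRawLeafSketch(t *testing.T) {
	node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
	defer node.StopDaemon()
	// write a single-block file into a CIDv1 MFS path (flag spelling assumed)
	node.PipeToIPFS(strings.NewReader("small file"),
		"files", "write", "--create", "--cid-version=1", "/test.txt")
	hash := strings.TrimSpace(
		node.IPFS("files", "stat", "--hash", "/test.txt").Stdout.String(),
	)
	// raw-codec CIDv1 with SHA2-256 renders as base32 "bafkrei..."
	if !strings.HasPrefix(hash, "bafkrei") {
		t.Fatalf("expected raw leaf CID, got %s", hash)
	}
}
```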
**Block size limit raised to 2MiB**
`ipfs block put`, `ipfs dag put`, and `ipfs dag import` now accept blocks up to 2MiB without `--allow-big-block`, matching the [bitswap spec](https://specs.ipfs.tech/bitswap-protocol/#block-sizes). The previous 1MiB limit was too restrictive and broke `ipfs dag import` of 1MiB-chunked non-raw-leaf data (protobuf wrapping pushes blocks slightly over 1MiB). The max `--chunker` value for `ipfs add` is `2MiB - 256 bytes` to leave room for protobuf framing. IPIP-499 profiles use lower chunk sizes (256KiB and 1MiB) and are not affected.
**HAMT Threshold Fix**
HAMT directory sharding threshold changed from `>=` to `>` to match the Go docs and JS implementation ([ipfs/boxo@6707376](https://github.com/ipfs/boxo/commit/6707376002a3d4ba64895749ce9be2e00d265ed5)). A directory exactly at 256 KiB now stays as a basic directory instead of converting to HAMT. This is a theoretical breaking change, but unlikely to impact real-world users as it requires a directory to be exactly at the threshold boundary. If you depend on the old behavior, adjust [`Import.UnixFSHAMTShardingSize`](https://github.com/ipfs/kubo/blob/master/docs/config.md#importunixfshamtshardingsize) to be 1 byte lower.
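
The threshold change in miniature, a sketch assuming the default 256KiB
`Import.UnixFSHAMTShardingSize` from the paragraph above:

```go
package main

import "fmt"

func main() {
	const shardingSize = 256 * 1024 // assumed Import.UnixFSHAMTShardingSize default
	dirSize := 256 * 1024           // a directory exactly at the threshold
	fmt.Println(dirSize >= shardingSize) // old behavior: true, convert to HAMT
	fmt.Println(dirSize > shardingSize)  // new behavior: false, stay a basic directory
}
```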


@@ -3716,9 +3716,21 @@ The default UnixFS chunker. Commands affected: `ipfs add`.
Valid formats:
- `size-<bytes>` - fixed size chunker
- `rabin-<min>-<avg>-<max>` - rabin fingerprint chunker
- `buzhash` - buzhash chunker
The maximum accepted value for `size-<bytes>` and rabin `max` parameter is
`2MiB - 256 bytes` (2096896 bytes). The 256-byte overhead budget is reserved
for protobuf/UnixFS framing so that serialized blocks stay within the 2MiB
block size limit defined by the
[bitswap spec](https://specs.ipfs.tech/bitswap-protocol/#block-sizes).
The `buzhash` chunker uses a fixed internal maximum of 512KiB and is not
affected by this limit.
Only the fixed-size chunker (`size-<bytes>`) guarantees that the same data
will always produce the same CID. The `rabin` and `buzhash` chunkers may
change their internal parameters in a future release.
Default: `size-262144`
Type: `optionalString`


@@ -7,7 +7,7 @@ go 1.25
replace github.com/ipfs/kubo => ./../../..
require (
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0
github.com/ipfs/kubo v0.0.0-00010101000000-000000000000
github.com/libp2p/go-libp2p v0.47.0
github.com/multiformats/go-multiaddr v0.16.1


@@ -267,8 +267,8 @@ github.com/ipfs-shipyard/nopfs/ipfs v0.25.0 h1:OqNqsGZPX8zh3eFMO8Lf8EHRRnSGBMqcd
github.com/ipfs-shipyard/nopfs/ipfs v0.25.0/go.mod h1:BxhUdtBgOXg1B+gAPEplkg/GpyTZY+kCMSfsJvvydqU=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75 h1:1UoSAzXwwgOrCZm5cu6v6bL4OGYIzcaOew9Rl6ZycqQ=
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0 h1:tC8iJdzsCy/npaez/gtQqNDLpl7DBqCARj9AECmYmoI=
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.0.3/go.mod h1:4LmD4ZUw0mhO+JSKdpWwrzATiEfM7WWgQ8H5l6P8MVk=

go.mod

@@ -21,7 +21,7 @@ require (
github.com/hashicorp/go-version v1.8.0
github.com/ipfs-shipyard/nopfs v0.0.14
github.com/ipfs-shipyard/nopfs/ipfs v0.25.0
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0
github.com/ipfs/go-block-format v0.2.3
github.com/ipfs/go-cid v0.6.0
github.com/ipfs/go-cidutil v0.1.0
@@ -274,6 +274,7 @@ require (
)
// Exclude ancient +incompatible versions that confuse Dependabot.
// These pre-Go-modules versions reference packages that no longer exist.
exclude (
github.com/ipfs/go-ipfs-cmds v2.0.1+incompatible

go.sum

@@ -337,8 +337,8 @@ github.com/ipfs-shipyard/nopfs/ipfs v0.25.0 h1:OqNqsGZPX8zh3eFMO8Lf8EHRRnSGBMqcd
github.com/ipfs-shipyard/nopfs/ipfs v0.25.0/go.mod h1:BxhUdtBgOXg1B+gAPEplkg/GpyTZY+kCMSfsJvvydqU=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75 h1:1UoSAzXwwgOrCZm5cu6v6bL4OGYIzcaOew9Rl6ZycqQ=
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0 h1:tC8iJdzsCy/npaez/gtQqNDLpl7DBqCARj9AECmYmoI=
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.0.3/go.mod h1:4LmD4ZUw0mhO+JSKdpWwrzATiEfM7WWgQ8H5l6P8MVk=

test/cli/block_size_test.go (new file)

@@ -0,0 +1,403 @@
package cli
import (
"bytes"
"crypto/rand"
"encoding/json"
"fmt"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/ipfs/kubo/test/cli/harness"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
twoMiB = 2 * 1024 * 1024 // 2097152 - bitswap spec block size limit
twoMiBPlus = twoMiB + 1 // 2097153
maxChunkSize = twoMiB - 256 // 2096896 - max chunker value (overhead budget for protobuf framing)
overMaxChunk = maxChunkSize + 1 // 2096897
// go-libp2p v0.47.0 network.MessageSizeMax is 4194304 bytes (4MiB).
// A bitswap message carrying a single block has a protobuf envelope
// whose size depends on the CID used to represent the block. For
// CIDv1 with raw codec and SHA2-256 multihash (4-byte CID prefix),
// the envelope is 18 bytes: 2 bytes for the empty Wantlist submessage,
// 6 bytes for the CID prefix field, 5 bytes for field tags and the
// payload length varint, and 5 bytes for the data length varint and
// block submessage length varint. The msgio varint reader rejects
// messages strictly larger than MessageSizeMax, so the maximum block
// that fits is 4194304 - 18 = 4194286 bytes.
//
// The hard limit varies slightly depending on the CID: a longer
// multihash (e.g. SHA-512) increases the CID prefix and reduces the
// maximum block payload by the same amount.
libp2pMsgMax = 4 * 1024 * 1024 // 4194304 - libp2p network.MessageSizeMax
bsBlockEnvelope = 18 // protobuf overhead for CIDv1 + raw + SHA2-256
maxTransferBlock = libp2pMsgMax - bsBlockEnvelope // 4194286 - largest block transferable via bitswap
overMaxTransfer = maxTransferBlock + 1 // 4194287
)
// blockSize returns the block size in bytes for a given CID by parsing
// the JSON output of `ipfs block stat --enc=json <cid>`.
func blockSize(t *testing.T, node *harness.Node, cid string) int {
t.Helper()
res := node.IPFS("block", "stat", "--enc=json", cid)
var stat struct {
Key string
Size int
}
require.NoError(t, json.Unmarshal(res.Stdout.Bytes(), &stat))
return stat.Size
}
// allBlockCIDs returns the root CID plus all recursive refs for a DAG.
func allBlockCIDs(t *testing.T, node *harness.Node, root string) []string {
t.Helper()
cids := []string{root}
res := node.IPFS("refs", "-r", "--unique", root)
for _, line := range strings.Split(strings.TrimSpace(res.Stdout.String()), "\n") {
if line != "" {
cids = append(cids, line)
}
}
return cids
}
// assertAllBlocksWithinLimit checks that every block in the DAG rooted at
// root is at most twoMiB bytes.
func assertAllBlocksWithinLimit(t *testing.T, node *harness.Node, root string) {
t.Helper()
for _, c := range allBlockCIDs(t, node, root) {
size := blockSize(t, node, c)
assert.LessOrEqual(t, size, twoMiB, fmt.Sprintf("block %s is %d bytes, exceeds 2MiB limit", c, size))
}
}
func TestBlockSizeBoundary(t *testing.T) {
t.Parallel()
t.Run("block put", func(t *testing.T) {
t.Parallel()
t.Run("exactly 2MiB succeeds", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiB)
cid := strings.TrimSpace(
node.PipeToIPFS(bytes.NewReader(data), "block", "put").Stdout.String(),
)
got := node.IPFS("block", "get", cid)
assert.Len(t, got.Stdout.Bytes(), twoMiB)
})
t.Run("2MiB+1 fails without --allow-big-block", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiBPlus)
res := node.RunPipeToIPFS(bytes.NewReader(data), "block", "put")
assert.NotEqual(t, 0, res.ExitCode())
assert.Contains(t, res.Stderr.String(), "produced block is over 2MiB: big blocks can't be exchanged with other peers. consider using UnixFS for automatic chunking of bigger files, or pass --allow-big-block to override")
})
t.Run("2MiB+1 succeeds with --allow-big-block", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiBPlus)
cid := strings.TrimSpace(
node.PipeToIPFS(bytes.NewReader(data), "block", "put", "--allow-big-block").Stdout.String(),
)
got := node.IPFS("block", "get", cid)
assert.Len(t, got.Stdout.Bytes(), twoMiBPlus)
})
})
t.Run("dag put", func(t *testing.T) {
t.Parallel()
t.Run("exactly 2MiB succeeds", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiB)
cid := strings.TrimSpace(
node.PipeToIPFS(bytes.NewReader(data), "dag", "put", "--input-codec=raw", "--store-codec=raw").Stdout.String(),
)
got := node.IPFS("block", "get", cid)
assert.Len(t, got.Stdout.Bytes(), twoMiB)
})
t.Run("2MiB+1 fails without --allow-big-block", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiBPlus)
res := node.RunPipeToIPFS(bytes.NewReader(data), "dag", "put", "--input-codec=raw", "--store-codec=raw")
assert.NotEqual(t, 0, res.ExitCode())
assert.Contains(t, res.Stderr.String(), "produced block is over 2MiB: big blocks can't be exchanged with other peers. consider using UnixFS for automatic chunking of bigger files, or pass --allow-big-block to override")
})
t.Run("2MiB+1 succeeds with --allow-big-block", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiBPlus)
cid := strings.TrimSpace(
node.PipeToIPFS(bytes.NewReader(data), "dag", "put", "--input-codec=raw", "--store-codec=raw", "--allow-big-block").Stdout.String(),
)
got := node.IPFS("block", "get", cid)
assert.Len(t, got.Stdout.Bytes(), twoMiBPlus)
})
})
t.Run("dag import and export", func(t *testing.T) {
t.Parallel()
t.Run("2MiB+1 block round-trips with --allow-big-block", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
// put an oversized raw block with override
data := make([]byte, twoMiBPlus)
cid := strings.TrimSpace(
node.PipeToIPFS(bytes.NewReader(data), "dag", "put", "--input-codec=raw", "--store-codec=raw", "--allow-big-block").Stdout.String(),
)
// export to CAR
carPath := filepath.Join(node.Dir, "oversized.car")
require.NoError(t, node.IPFSDagExport(cid, carPath))
// re-import without --allow-big-block should fail
carFile, err := os.Open(carPath)
require.NoError(t, err)
res := node.RunPipeToIPFS(carFile, "dag", "import")
carFile.Close()
assert.NotEqual(t, 0, res.ExitCode())
assert.Contains(t, res.Stderr.String()+res.Stdout.String(), "produced block is over 2MiB: big blocks can't be exchanged with other peers. consider using UnixFS for automatic chunking of bigger files, or pass --allow-big-block to override")
// re-import with --allow-big-block should succeed
carFile, err = os.Open(carPath)
require.NoError(t, err)
res = node.RunPipeToIPFS(carFile, "dag", "import", "--allow-big-block")
carFile.Close()
assert.Equal(t, 0, res.ExitCode())
})
})
t.Run("ipfs add non-raw-leaves", func(t *testing.T) {
t.Parallel()
// The chunker enforces ChunkSizeLimit (maxChunkSize = 2MiB - 256
// as of boxo 2026Q1) regardless of leaf type. It does not know at parse time whether
// raw or wrapped leaves will be used, so the 256-byte overhead
// budget is applied uniformly.
//
// With --raw-leaves=false each chunk is wrapped in protobuf,
// adding ~14 bytes overhead that pushes blocks past the chunk size.
// The overhead budget ensures the wrapped block stays within 2MiB.
//
// With --raw-leaves=true there is no protobuf wrapper, so the
// block is exactly the chunk size (maxChunkSize). The 256-byte
// budget is unused in this case but the chunker still enforces it.
// A full 2MiB chunk (--chunker=size-2097152) is rejected even
// though the resulting raw block would fit within BlockSizeLimit.
t.Run("1MiB chunk with protobuf wrapping succeeds under 2MiB limit", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiB)
res := node.RunPipeToIPFS(bytes.NewReader(data), "add", "-q", "--chunker=size-1048576", "--raw-leaves=false")
require.Equal(t, 0, res.ExitCode(), "stderr: %s", res.Stderr.String())
root := strings.TrimSpace(res.Stdout.String())
// the last line of `ipfs add -q` is the root CID
lines := strings.Split(root, "\n")
root = lines[len(lines)-1]
assertAllBlocksWithinLimit(t, node, root)
})
t.Run("max chunk with protobuf wrapping stays within block limit", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
// maxChunkSize leaves room for protobuf framing overhead
data := make([]byte, maxChunkSize*2)
res := node.RunPipeToIPFS(bytes.NewReader(data), "add", "-q",
fmt.Sprintf("--chunker=size-%d", maxChunkSize), "--raw-leaves=false")
require.Equal(t, 0, res.ExitCode(), "stderr: %s", res.Stderr.String())
lines := strings.Split(strings.TrimSpace(res.Stdout.String()), "\n")
root := lines[len(lines)-1]
assertAllBlocksWithinLimit(t, node, root)
})
t.Run("chunk size over limit is rejected by chunker", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
data := make([]byte, twoMiB+twoMiB)
res := node.RunPipeToIPFS(bytes.NewReader(data), "add", "-q",
fmt.Sprintf("--chunker=size-%d", overMaxChunk), "--raw-leaves=false")
assert.NotEqual(t, 0, res.ExitCode())
assert.Contains(t, res.Stderr.String(),
fmt.Sprintf("chunker parameters may not exceed the maximum chunk size of %d", maxChunkSize))
})
t.Run("max chunk with raw leaves succeeds", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon("--offline")
defer node.StopDaemon()
// raw leaves have no protobuf wrapper, so max chunk size fits easily
data := make([]byte, maxChunkSize*2)
res := node.RunPipeToIPFS(bytes.NewReader(data), "add", "-q",
fmt.Sprintf("--chunker=size-%d", maxChunkSize), "--raw-leaves=true")
require.Equal(t, 0, res.ExitCode(), "stderr: %s", res.Stderr.String())
lines := strings.Split(strings.TrimSpace(res.Stdout.String()), "\n")
root := lines[len(lines)-1]
assertAllBlocksWithinLimit(t, node, root)
})
})
t.Run("bitswap exchange", func(t *testing.T) {
t.Parallel()
t.Run("2MiB raw block transfers between peers", func(t *testing.T) {
t.Parallel()
h := harness.NewT(t)
provider := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer provider.StopDaemon()
requester := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer requester.StopDaemon()
data := make([]byte, twoMiB)
_, err := rand.Read(data)
require.NoError(t, err)
cid := strings.TrimSpace(
provider.PipeToIPFS(bytes.NewReader(data), "block", "put").Stdout.String(),
)
requester.Connect(provider)
res := requester.IPFS("block", "get", cid)
assert.Equal(t, data, res.Stdout.Bytes(), "retrieved block should match original")
})
t.Run("unixfs-v1-2025: 2MiB file transfers between peers", func(t *testing.T) {
t.Parallel()
h := harness.NewT(t)
provider := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer provider.StopDaemon()
requester := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer requester.StopDaemon()
// unixfs-v1-2025 profile uses CIDv1, raw leaves, SHA2-256,
// and 1MiB chunks. A 2MiB file produces two 1MiB raw leaf
// blocks plus a root node, all within the 2MiB spec limit.
data := make([]byte, twoMiB)
_, err := rand.Read(data)
require.NoError(t, err)
res := provider.RunPipeToIPFS(bytes.NewReader(data), "add", "-q")
require.Equal(t, 0, res.ExitCode(), "stderr: %s", res.Stderr.String())
lines := strings.Split(strings.TrimSpace(res.Stdout.String()), "\n")
root := lines[len(lines)-1]
requester.Connect(provider)
got := requester.IPFS("cat", root)
assert.Equal(t, data, got.Stdout.Bytes(), "retrieved file should match original")
})
// The following two tests guard the physical hard limit of the
// libp2p transport layer (network.MessageSizeMax = 4MiB). This is
// the actual ceiling for bitswap block transfer, independent of the
// 2MiB soft limit from the bitswap spec. Knowing the exact hard
// limit is important for backward-compatible protocol and standards
// evolution: any future increase to the bitswap spec block size
// must stay within the libp2p message framing budget, or the
// transport layer must be updated first.
t.Run("bitswap-over-libp2p: largest block that fits in message transfers", func(t *testing.T) {
t.Parallel()
h := harness.NewT(t)
provider := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer provider.StopDaemon()
requester := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer requester.StopDaemon()
data := make([]byte, maxTransferBlock)
_, err := rand.Read(data)
require.NoError(t, err)
cid := strings.TrimSpace(
provider.PipeToIPFS(bytes.NewReader(data), "block", "put", "--allow-big-block").Stdout.String(),
)
requester.Connect(provider)
// successful transfers complete in ~1s
timeout := time.After(5 * time.Second)
dataChan := make(chan []byte, 1)
go func() {
res := requester.RunIPFS("block", "get", cid)
dataChan <- res.Stdout.Bytes()
}()
select {
case got := <-dataChan:
assert.Equal(t, data, got, "retrieved block should match original")
case <-timeout:
t.Fatal("block get timed out: expected transfer to succeed at maxTransferBlock")
}
})
t.Run("bitswap-over-libp2p: one byte over message limit does not transfer", func(t *testing.T) {
t.Parallel()
h := harness.NewT(t)
provider := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer provider.StopDaemon()
requester := h.NewNode().Init("--profile=unixfs-v1-2025").StartDaemon()
defer requester.StopDaemon()
data := make([]byte, overMaxTransfer)
_, err := rand.Read(data)
require.NoError(t, err)
cid := strings.TrimSpace(
provider.PipeToIPFS(bytes.NewReader(data), "block", "put", "--allow-big-block").Stdout.String(),
)
requester.Connect(provider)
timeout := time.After(5 * time.Second)
dataChan := make(chan []byte, 1)
go func() {
res := requester.RunIPFS("block", "get", cid)
dataChan <- res.Stdout.Bytes()
}()
select {
case got := <-dataChan:
t.Fatalf("expected timeout, but block was retrieved (%d bytes)", len(got))
case <-timeout:
t.Log("block get timed out as expected: block exceeds libp2p message size limit")
}
})
})
}


@@ -135,7 +135,7 @@ require (
github.com/huin/goupnp v1.3.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/ipfs/bbloom v0.0.4 // indirect
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75 // indirect
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0 // indirect
github.com/ipfs/go-bitfield v1.1.0 // indirect
github.com/ipfs/go-block-format v0.2.3 // indirect
github.com/ipfs/go-cid v0.6.0 // indirect


@@ -296,8 +296,8 @@ github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75 h1:1UoSAzXwwgOrCZm5cu6v6bL4OGYIzcaOew9Rl6ZycqQ=
github.com/ipfs/boxo v0.36.1-0.20260205235512-2a942e3e1a75/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0 h1:tC8iJdzsCy/npaez/gtQqNDLpl7DBqCARj9AECmYmoI=
github.com/ipfs/boxo v0.36.1-0.20260206224221-77bd614971f0/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.2.3 h1:mpCuDaNXJ4wrBJLrtEaGFGXkferrw5eqVvzaHhtFKQk=


@@ -291,17 +291,17 @@ test_expect_success "put with sha3 and cidv0 fails" '
'
test_expect_success "'ipfs block put' check block size" '
dd if=/dev/zero bs=2MB count=1 > 2-MB-file &&
test_expect_code 1 ipfs block put 2-MB-file >block_put_out 2>&1
dd if=/dev/zero bs=2097153 count=1 > over-2MiB-file &&
test_expect_code 1 ipfs block put over-2MiB-file >block_put_out 2>&1
'
test_expect_success "ipfs block put output has the correct error" '
grep "produced block is over 1MiB" block_put_out
grep "produced block is over 2MiB" block_put_out
'
test_expect_success "ipfs block put --allow-big-block=true works" '
test_expect_code 0 ipfs block put 2-MB-file --allow-big-block=true &&
rm 2-MB-file
test_expect_code 0 ipfs block put over-2MiB-file --allow-big-block=true &&
rm over-2MiB-file
'
test_done


@@ -42,16 +42,16 @@ test_object_cmd() {
test_expect_success "'ipfs object patch' check output block size" '
DIR=$EMPTY_UNIXFS_DIR
for i in {1..13}
for i in {1..14}
do
DIR=$(ipfs object patch "$DIR" add-link "$DIR.jpg" "$DIR")
done
# Fail when new block goes over the BS limit of 1MiB, but allow manual override
# Fail when new block goes over the BS limit of 2MiB, but allow manual override
test_expect_code 1 ipfs object patch "$DIR" add-link "$DIR.jpg" "$DIR" >patch_out 2>&1
'
test_expect_success "ipfs object patch add-link output has the correct error" '
grep "produced block is over 1MiB" patch_out
grep "produced block is over 2MiB" patch_out
'
test_expect_success "ipfs object patch --allow-big-block=true add-link works" '


@@ -45,17 +45,17 @@ test_dag_cmd() {
'
test_expect_success "'ipfs dag put' check block size" '
dd if=/dev/zero bs=2MB count=1 > 2-MB-file &&
test_expect_code 1 ipfs dag put --input-codec=raw --store-codec=raw 2-MB-file >dag_put_out 2>&1
dd if=/dev/zero bs=2097153 count=1 > over-2MiB-file &&
test_expect_code 1 ipfs dag put --input-codec=raw --store-codec=raw over-2MiB-file >dag_put_out 2>&1
'
test_expect_success "ipfs dag put output has the correct error" '
grep "produced block is over 1MiB" dag_put_out
grep "produced block is over 2MiB" dag_put_out
'
test_expect_success "ipfs dag put --allow-big-block=true works" '
test_expect_code 0 ipfs dag put --input-codec=raw --store-codec=raw 2-MB-file --allow-big-block=true &&
rm 2-MB-file
test_expect_code 0 ipfs dag put --input-codec=raw --store-codec=raw over-2MiB-file --allow-big-block=true &&
rm over-2MiB-file
'
test_expect_success "can add an ipld object using dag-json to dag-json" '


@@ -232,16 +232,16 @@ test_expect_success "naked root import expected output" '
'
test_expect_success "'ipfs dag import' check block size" '
BIG_CID=$(dd if=/dev/zero bs=2MB count=1 | ipfs dag put --input-codec=raw --store-codec=raw --allow-big-block) &&
ipfs dag export $BIG_CID > 2-MB-block.car &&
test_expect_code 1 ipfs dag import 2-MB-block.car >dag_import_out 2>&1
BIG_CID=$(dd if=/dev/zero bs=2097153 count=1 | ipfs dag put --input-codec=raw --store-codec=raw --allow-big-block) &&
ipfs dag export $BIG_CID > over-2MiB-block.car &&
test_expect_code 1 ipfs dag import over-2MiB-block.car >dag_import_out 2>&1
'
test_expect_success "ipfs dag import output has the correct error" '
grep "block is over 1MiB" dag_import_out
grep "block is over 2MiB" dag_import_out
'
test_expect_success "ipfs dag import --allow-big-block works" '
test_expect_code 0 ipfs dag import --allow-big-block 2-MB-block.car
test_expect_code 0 ipfs dag import --allow-big-block over-2MiB-block.car
'
cat > version_2_import_expected << EOE