feat(config): add Import.* for CID Profiles from IPIP-499 (#11148)
* feat(config): Import.* and unixfs-v1-2025 profile
implements IPIP-499: adds config options for controlling UnixFS DAG
determinism and introduces `unixfs-v1-2025` and `unixfs-v0-2015`
profiles for cross-implementation CID reproducibility.
changes:
- add Import.* fields: HAMTDirectorySizeEstimation, SymlinkMode,
DAGLayout, IncludeEmptyDirectories, IncludeHidden
- add validation for all Import.* config values
- add unixfs-v1-2025 profile (recommended for new data)
- add unixfs-v0-2015 profile (alias: legacy-cid-v0)
- remove deprecated test-cid-v1 and test-cid-v1-wide profiles
- wire Import.HAMTSizeEstimationMode() to boxo globals
- update go.mod to use boxo with SizeEstimationMode support
ref: https://specs.ipfs.tech/ipips/ipip-0499/
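for reference, a hedged sketch of what the `unixfs-v1-2025` profile sets
(values from the profile description in the diff below; the canonical
transform lives in config/profile.go, and the function name here is
illustrative):

```go
package main

import "github.com/ipfs/kubo/config"

// applyUnixFSv12025 mirrors `ipfs config profile apply unixfs-v1-2025`:
// CIDv1, raw leaves, sha2-256, 1 MiB chunks, 1024 links per file node,
// 256 HAMT fanout, block-based size estimation, balanced layout.
func applyUnixFSv12025(c *config.Config) error {
	c.Import.CidVersion = *config.NewOptionalInteger(1)
	c.Import.UnixFSRawLeaves = config.True
	c.Import.UnixFSChunker = *config.NewOptionalString("size-1048576") // 1 MiB
	c.Import.HashFunction = *config.NewOptionalString("sha2-256")
	c.Import.UnixFSFileMaxLinks = *config.NewOptionalInteger(1024)
	c.Import.UnixFSHAMTDirectoryMaxFanout = *config.NewOptionalInteger(256)
	c.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("256KiB")
	c.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString(config.HAMTSizeEstimationBlock)
	c.Import.UnixFSDAGLayout = *config.NewOptionalString(config.DAGLayoutBalanced)
	return nil
}
```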
* feat(add): add --dereference-symlinks, --empty-dirs, --hidden CLI flags
add CLI flags for controlling file collection behavior during ipfs add:
- `--dereference-symlinks`: recursively resolves symlinks to their target
  content (replaces the deprecated --dereference-args, which only worked on
  CLI arguments); wired through go-ipfs-cmds to boxo's SerialFileOptions.
- `--empty-dirs` / `-E`: include empty directories (default: true)
- `--hidden` / `-H`: include hidden files (default: false)
these flags are CLI-only and not wired to Import.* config options because
the go-ipfs-cmds library handles input file filtering before the directory
tree is passed to kubo. removed the unused Import.UnixFSSymlinkMode config
option that was defined but never read by the CLI.
also:
- wire --trickle to Import.UnixFSDAGLayout config default
- update go-ipfs-cmds to v0.15.1-0.20260117043932-17687e216294
- add SYMLINK HANDLING section to ipfs add help text
- add CLI tests for all three flags
ref: https://github.com/ipfs/specs/pull/499
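illustrative only: how `--empty-dirs` surfaces through the CoreAPI option
added in this PR. `--hidden` and `--dereference-symlinks` have no CoreAPI
equivalent because go-ipfs-cmds applies them while collecting the local
file tree (the function name below is hypothetical):

```go
package main

import (
	"context"

	"github.com/ipfs/boxo/files"
	"github.com/ipfs/boxo/path"
	coreiface "github.com/ipfs/kubo/core/coreiface"
	"github.com/ipfs/kubo/core/coreiface/options"
)

// addKeepingEmptyDirs mirrors `ipfs add -r --empty-dirs` over the CoreAPI:
// empty directories are kept in the resulting UnixFS DAG.
func addKeepingEmptyDirs(ctx context.Context, api coreiface.CoreAPI, dir files.Node) (path.ImmutablePath, error) {
	return api.Unixfs().Add(ctx, dir, options.Unixfs.IncludeEmptyDirs(true))
}
```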
* test(add): add CID profile tests and wire SizeEstimationMode
add comprehensive test suite for UnixFS CID determinism per IPIP-499:
- verify exact HAMT threshold boundary for both estimation modes:
- v0-2015 (links): sum(name_len + cid_len) == 262144
- v1-2025 (block): serialized block size == 262144
- verify HAMT triggers at threshold + 1 byte for both profiles
- add all deterministic CIDs for cross-implementation testing
also wires SizeEstimationMode through CLI/API, allowing
Import.UnixFSHAMTSizeEstimation config to take effect.
bumps boxo to ipfs/boxo@6707376, which aligns the HAMT threshold with the
JS implementation (uses > instead of >=), fixing CID determinism at the
exact 256 KiB boundary (see the sketch below).
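a hedged sketch of what these boundary tests pin down (helper and constant
names are illustrative, not boxo's API): both profiles compare an estimate
against the same 256 KiB threshold, they just measure it differently, and
sharding triggers only strictly above it.

```go
package main

import "github.com/ipfs/go-cid"

const hamtThreshold = 256 * 1024 // 262144 bytes

// linksEstimate is the v0-2015 ("links") mode: sum of entry-name lengths and
// CID byte lengths. The v1-2025 ("block") mode instead measures the size of
// the fully serialized dag-pb directory block.
func linksEstimate(entries map[string]cid.Cid) int {
	size := 0
	for name, c := range entries {
		size += len(name) + len(c.Bytes())
	}
	return size
}

// shouldShard reflects the boxo fix referenced above: a directory at exactly
// 262144 stays basic; one byte more converts it to a HAMT.
func shouldShard(estimate int) bool {
	return estimate > hamtThreshold // was >=
}
```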
* feat(add): --dereference-symlinks now resolves all symlinks
Previously, resolving symlinks required two flags:
- --dereference-args: resolved symlinks passed as CLI arguments
- --dereference-symlinks: resolved symlinks inside directories
Now --dereference-symlinks handles both cases. Users only need one flag
to fully dereference symlinks when adding files to IPFS.
The deprecated --dereference-args still works for backwards compatibility
but is no longer necessary.
* chore: update boxo and improve changelog
- update boxo to ebdaf07c (nil filter fix, thread-safety docs)
- simplify changelog for IPIP-499 section
- shorten test names, move context to comments
* chore: update boxo to 5cf22196
* chore: apply suggestions from code review
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
* test(add): verify balanced DAG layout produces uniform leaf depth
add a test that confirms kubo uses balanced layout (all leaves at the same
depth) rather than balanced-packed (varying depths). creates a 45 MiB file
to trigger a multi-level DAG and walks it to verify leaf-depth uniformity
(see the sketch below). includes a trickle subtest to validate that the
test logic can detect varying depths.
supports CAR export via DAG_LAYOUT_CAR_OUTPUT env var for test vectors.
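a sketch of the uniformity check described above (helper name illustrative),
using go-ipld-format: collect the depth of every leaf; a balanced DAG yields
exactly one distinct depth, a trickle DAG several.

```go
package main

import (
	"context"

	ipld "github.com/ipfs/go-ipld-format"
)

// leafDepths walks the DAG rooted at n and counts leaves per depth.
// For a balanced layout, len(out) == 1 after the walk completes.
func leafDepths(ctx context.Context, ds ipld.NodeGetter, n ipld.Node, depth int, out map[int]int) error {
	if len(n.Links()) == 0 {
		out[depth]++
		return nil
	}
	for _, l := range n.Links() {
		child, err := l.GetNode(ctx, ds)
		if err != nil {
			return err
		}
		if err := leafDepths(ctx, ds, child, depth+1, out); err != nil {
			return err
		}
	}
	return nil
}
```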
* chore(deps): update boxo to 6141039ad8ef
changes since 5cf22196ad0b:
- refactor(unixfs): use arithmetic for exact block size calculation
- refactor(unixfs): unify size tracking and make SizeEstimationMode immutable
- feat(unixfs): optimize SizeEstimationBlock and add mode/mtime tests
also clarifies that directory sharding globals affect both `ipfs add` and MFS.
* test(cli): improve HAMT threshold tests with exact +1 byte verification
- add UnixFSDataType() helper to directly check UnixFS type via protobuf
- refactor threshold tests to use exact +1 byte calculations instead of +1 file
- verify directory type directly (ft.TDirectory vs ft.THAMTShard) instead of
inferring from link count
- clean up helper function signatures by removing unused cidLength parameter
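the idea behind the UnixFSDataType() helper, sketched (the real helper lives
in kubo's test harness): decode the dag-pb node's UnixFS Data field and
return its type, so a test can assert ft.TDirectory vs ft.THAMTShard
directly instead of inferring from link counts.

```go
package main

import (
	merkledag "github.com/ipfs/boxo/ipld/merkledag"
	ft "github.com/ipfs/boxo/ipld/unixfs"
	pb "github.com/ipfs/boxo/ipld/unixfs/pb"
)

// unixfsDataType returns the UnixFS type stored in a dag-pb node, e.g.
// pb.Data_Directory for a basic directory or pb.Data_HAMTShard for a HAMT.
func unixfsDataType(nd *merkledag.ProtoNode) (pb.Data_DataType, error) {
	fsn, err := ft.FSNodeFromBytes(nd.Data())
	if err != nil {
		return 0, err
	}
	return fsn.Type(), nil
}
```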
* test(cli): consolidate profile tests into cid_profiles_test.go
remove duplicate profile threshold tests from add_test.go since they
are fully covered by the data-driven tests in cid_profiles_test.go.
changes:
- improve test names to describe what threshold is being tested
- add inline documentation explaining each test's purpose
- add byte-precise helper IPFSAddDeterministicBytes for threshold tests
- remove ~200 lines of duplicated test code from add_test.go
- keep non-profile tests (pinning, symlinks, hidden files) in add_test.go
* chore: update to rebased boxo and go-ipfs-cmds PRs
* docs: add HAMT threshold fix details to changelog
* feat(mfs): use Import config for CID version and hash function
make MFS commands (files cp, files write, files mkdir, files chcid)
respect Import.CidVersion and Import.HashFunction config settings
when CLI options are not explicitly provided.
also add tests for:
- files write respects Import.UnixFSRawLeaves=true
- single-block file: files write produces same CID as ipfs add
- updated comments clarifying CID parity with ipfs add
* feat(files): wire Import.UnixFSChunker and UnixFSDirectoryMaxLinks to MFS
`ipfs files` commands now respect these Import.* config options:
- UnixFSChunker: configures chunk size for `files write`
- UnixFSDirectoryMaxLinks: triggers HAMT sharding in `files mkdir`
- UnixFSHAMTDirectorySizeEstimation: controls size estimation mode
previously, MFS used hardcoded defaults ignoring user config.
changes:
- config/import.go: add UnixFSSplitterFunc() returning chunk.SplitterGen
- core/node/core.go: pass chunker, maxLinks, sizeEstimationMode to
mfs.NewRoot() via new boxo RootOption API
- core/commands/files.go: pass maxLinks and sizeEstimationMode to
mfs.Mkdir() and ensureContainingDirectoryExists(); document that
UnixFSFileMaxLinks doesn't apply to files write (trickle DAG limitation)
- test/cli/files_test.go: add tests for UnixFSDirectoryMaxLinks and
UnixFSChunker, including CID parity test with `ipfs add --trickle`
related: boxo@54e044f1b265
* feat(files): wire Import.UnixFSHAMTDirectoryMaxFanout and UnixFSHAMTDirectorySizeThreshold
wire remaining HAMT config options to MFS root:
- Import.UnixFSHAMTDirectoryMaxFanout via mfs.WithMaxHAMTFanout
- Import.UnixFSHAMTDirectorySizeThreshold via mfs.WithHAMTShardingSize
add CLI tests:
- files mkdir respects Import.UnixFSHAMTDirectoryMaxFanout
- files mkdir respects Import.UnixFSHAMTDirectorySizeThreshold
- config change takes effect after daemon restart
add UnixFSHAMTFanout() helper to test harness
update boxo to ac97424d99ab90e097fc7c36f285988b596b6f05
* fix(mfs): single-block files in CIDv1 dirs now produce raw CIDs
problem: `ipfs files write` in CIDv1 directories wrapped single-block
files in dag-pb even when raw-leaves was enabled, producing different
CIDs than `ipfs add --raw-leaves` for the same content.
fix: boxo now collapses single-block ProtoNode wrappers (with no
metadata) to RawNode in DagModifier.GetNode(). files with mtime/mode
stay as dag-pb since raw blocks cannot store UnixFS metadata.
also fixes sparse file writes where writing past EOF would lose data
because expandSparse didn't update the internal node pointer.
updates boxo to v0.36.1-0.20260203003133-7884ae23aaff
updates t0250-files-api.sh test hashes to match new behavior
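a hedged sketch of why the CIDs differed: the same bytes hashed as a raw
block vs wrapped in a dag-pb UnixFS file node yield different codecs and
digests, so the containing directory CIDs diverge too (CID version details
simplified; the wrapped node below uses the default CIDv0 builder).

```go
package main

import (
	"fmt"

	merkledag "github.com/ipfs/boxo/ipld/merkledag"
	ft "github.com/ipfs/boxo/ipld/unixfs"
	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	data := []byte("hello")

	// raw leaf: CID over the bytes themselves (what `ipfs add --raw-leaves`
	// and, after this fix, `ipfs files write` in CIDv1 dirs produce).
	sum, _ := mh.Sum(data, mh.SHA2_256, -1)
	rawCid := cid.NewCidV1(cid.Raw, sum)

	// dag-pb wrapper: the same bytes inside a UnixFS file node (what helia's
	// reduceSingleLeafToSelf: false keeps).
	nd := merkledag.NodeWithData(ft.FilePBData(data, uint64(len(data))))

	fmt.Println(rawCid, nd.Cid()) // two different CIDs for the same content
}
```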
* chore(test): use Go 1.22+ range-over-int syntax
* chore: update boxo to c6829fe26860
- fix typo in files write help text
- update boxo with CI fixes (gofumpt, race condition in test)
* chore: update go-ipfs-cmds to 192ec9d15c1f
includes a fix for binary content types: gzip, zip, vnd.ipld.car,
vnd.ipld.raw, vnd.ipfs.ipns-record
* chore: update boxo to 0a22cde9225c
includes refactor of maxLinks check in addLinkChild (review feedback).
* ci: fix helia-interop and improve caching
skip '@helia/mfs - should have the same CID after creating a file' test
until helia implements IPIP-499 (tracking: https://github.com/ipfs/helia/issues/941)
the test fails because kubo now collapses single-block files to raw CIDs
while helia explicitly uses reduceSingleLeafToSelf: false
changes:
- run aegir directly instead of helia-interop binary (binary ignores --grep flags)
- cache node_modules keyed by @helia/interop version from npm registry
- skip npm install on cache hit (matches ipfs-webui caching pattern)
* chore: update boxo to 1e30b954
includes latest upstream changes from boxo main
* chore: update go-ipfs-cmds to 1b2a641ed6f6
* chore: update boxo to f188f79fd412
switches to boxo@main after merging https://github.com/ipfs/boxo/pull/1088
* chore: update go-ipfs-cmds to af9bcbaf5709
switches to go-ipfs-cmds@master after merging https://github.com/ipfs/go-ipfs-cmds/pull/315
---------
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
This commit is contained in: parent ff4bb10989, commit 67c89bbd7e
.github/workflows/interop.yml (vendored): 40 lines changed
@@ -71,18 +71,42 @@ jobs:
          name: kubo
          path: cmd/ipfs
      - run: chmod +x cmd/ipfs/ipfs
      - run: echo "dir=$(npm config get cache)" >> $GITHUB_OUTPUT
        id: npm-cache-dir
      - uses: actions/cache@v5
        with:
          path: ${{ steps.npm-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-${{ github.job }}-helia-${{ hashFiles('**/package-lock.json') }}
          restore-keys: ${{ runner.os }}-${{ github.job }}-helia-
      - run: sudo apt update
      - run: sudo apt install -y libxkbcommon0 libxdamage1 libgbm1 libpango-1.0-0 libcairo2 # dependencies for playwright
      - run: npx --package @helia/interop helia-interop
      # Cache node_modules based on latest @helia/interop version from npm registry.
      # This ensures we always test against the latest release while still benefiting
      # from caching when the version hasn't changed.
      - name: Get latest @helia/interop version
        id: helia-version
        run: echo "version=$(npm view @helia/interop version)" >> $GITHUB_OUTPUT
      - name: Cache helia-interop node_modules
        uses: actions/cache@v5
        id: helia-cache
        with:
          path: node_modules
          key: ${{ runner.os }}-helia-interop-${{ steps.helia-version.outputs.version }}
      - name: Install @helia/interop
        if: steps.helia-cache.outputs.cache-hit != 'true'
        run: npm install @helia/interop
      # TODO(IPIP-499): Remove --grep --invert workaround once helia implements IPIP-499
      # Tracking issue: https://github.com/ipfs/helia/issues/941
      #
      # PROVISIONAL HACK: Skip '@helia/mfs - should have the same CID after
      # creating a file' test due to IPIP-499 changes in kubo.
      #
      # WHY IT FAILS: The test creates a 5-byte file in MFS on both kubo and helia,
      # then compares the root directory CID. With kubo PR #11148, `ipfs files write`
      # now produces raw CIDs for single-block files (matching `ipfs add --raw-leaves`),
      # while helia uses `reduceSingleLeafToSelf: false` which keeps the dag-pb wrapper.
      # Different file CIDs lead to different directory CIDs.
      #
      # We run aegir directly (instead of helia-interop binary) because only aegir
      # supports the --grep/--invert flags needed to exclude specific tests.
      - name: Run helia-interop tests (excluding IPIP-499 incompatible test)
        run: npx aegir test -t node --bail -- --grep 'should have the same CID after creating a file' --invert
        env:
          KUBO_BINARY: ${{ github.workspace }}/cmd/ipfs/ipfs
        working-directory: node_modules/@helia/interop
  ipfs-webui:
    needs: [interop-prep]
    runs-on: ${{ fromJSON(github.repository == 'ipfs/kubo' && '["self-hosted", "linux", "x64", "2xlarge"]' || '"ubuntu-latest"') }}
config/import.go: 112 lines changed
@@ -2,11 +2,13 @@ package config

import (
	"fmt"
	"io"
	"strconv"
	"strings"

	chunk "github.com/ipfs/boxo/chunker"
	"github.com/ipfs/boxo/ipld/unixfs/importer/helpers"
	"github.com/ipfs/boxo/ipld/unixfs/io"
	uio "github.com/ipfs/boxo/ipld/unixfs/io"
	"github.com/ipfs/boxo/verifcid"
	mh "github.com/multiformats/go-multihash"
)

@@ -29,29 +31,44 @@ const (
	// write-batch. The total size of the batch is limited by
	// BatchMaxNodes and BatchMaxSize.
	DefaultBatchMaxSize = 100 << 20 // 100MiB

	// HAMTSizeEstimation values for Import.UnixFSHAMTDirectorySizeEstimation
	HAMTSizeEstimationLinks    = "links"    // legacy: estimate using link names + CID byte lengths (default)
	HAMTSizeEstimationBlock    = "block"    // full serialized dag-pb block size
	HAMTSizeEstimationDisabled = "disabled" // disable HAMT sharding entirely

	// DAGLayout values for Import.UnixFSDAGLayout
	DAGLayoutBalanced = "balanced" // balanced DAG layout (default)
	DAGLayoutTrickle  = "trickle"  // trickle DAG layout

	DefaultUnixFSHAMTDirectorySizeEstimation = HAMTSizeEstimationLinks // legacy behavior
	DefaultUnixFSDAGLayout                   = DAGLayoutBalanced       // balanced DAG layout
	DefaultUnixFSIncludeEmptyDirs            = true                    // include empty directories
)

var (
	DefaultUnixFSFileMaxLinks      = int64(helpers.DefaultLinksPerBlock)
	DefaultUnixFSDirectoryMaxLinks = int64(0)
	DefaultUnixFSHAMTDirectoryMaxFanout = int64(io.DefaultShardWidth)
	DefaultUnixFSHAMTDirectoryMaxFanout = int64(uio.DefaultShardWidth)
)

// Import configures the default options for ingesting data. This affects commands
// that ingest data, such as 'ipfs add', 'ipfs dag put', 'ipfs block put', 'ipfs files write'.
type Import struct {
	CidVersion                       OptionalInteger
	UnixFSRawLeaves                  Flag
	UnixFSChunker                    OptionalString
	HashFunction                     OptionalString
	UnixFSFileMaxLinks               OptionalInteger
	UnixFSDirectoryMaxLinks          OptionalInteger
	UnixFSHAMTDirectoryMaxFanout     OptionalInteger
	UnixFSHAMTDirectorySizeThreshold OptionalBytes
	BatchMaxNodes                    OptionalInteger
	BatchMaxSize                     OptionalInteger
	FastProvideRoot                  Flag
	FastProvideWait                  Flag
	CidVersion                        OptionalInteger
	UnixFSRawLeaves                   Flag
	UnixFSChunker                     OptionalString
	HashFunction                      OptionalString
	UnixFSFileMaxLinks                OptionalInteger
	UnixFSDirectoryMaxLinks           OptionalInteger
	UnixFSHAMTDirectoryMaxFanout      OptionalInteger
	UnixFSHAMTDirectorySizeThreshold  OptionalBytes
	UnixFSHAMTDirectorySizeEstimation OptionalString // "links", "block", or "disabled"
	UnixFSDAGLayout                   OptionalString // "balanced" or "trickle"
	BatchMaxNodes                     OptionalInteger
	BatchMaxSize                      OptionalInteger
	FastProvideRoot                   Flag
	FastProvideWait                   Flag
}

// ValidateImportConfig validates the Import configuration according to UnixFS spec requirements.
@@ -129,6 +146,30 @@ func ValidateImportConfig(cfg *Import) error {
		}
	}

	// Validate UnixFSHAMTDirectorySizeEstimation
	if !cfg.UnixFSHAMTDirectorySizeEstimation.IsDefault() {
		est := cfg.UnixFSHAMTDirectorySizeEstimation.WithDefault(DefaultUnixFSHAMTDirectorySizeEstimation)
		switch est {
		case HAMTSizeEstimationLinks, HAMTSizeEstimationBlock, HAMTSizeEstimationDisabled:
			// valid
		default:
			return fmt.Errorf("Import.UnixFSHAMTDirectorySizeEstimation must be %q, %q, or %q, got %q",
				HAMTSizeEstimationLinks, HAMTSizeEstimationBlock, HAMTSizeEstimationDisabled, est)
		}
	}

	// Validate UnixFSDAGLayout
	if !cfg.UnixFSDAGLayout.IsDefault() {
		layout := cfg.UnixFSDAGLayout.WithDefault(DefaultUnixFSDAGLayout)
		switch layout {
		case DAGLayoutBalanced, DAGLayoutTrickle:
			// valid
		default:
			return fmt.Errorf("Import.UnixFSDAGLayout must be %q or %q, got %q",
				DAGLayoutBalanced, DAGLayoutTrickle, layout)
		}
	}

	return nil
}

@@ -144,8 +185,7 @@ func isValidChunker(chunker string) bool {
	}

	// Check for size-<bytes> format
	if strings.HasPrefix(chunker, "size-") {
		sizeStr := strings.TrimPrefix(chunker, "size-")
	if sizeStr, ok := strings.CutPrefix(chunker, "size-"); ok {
		if sizeStr == "" {
			return false
		}
@@ -167,7 +207,7 @@ func isValidChunker(chunker string) bool {

	// Parse and validate min, avg, max values
	values := make([]int, 3)
	for i := 0; i < 3; i++ {
	for i := range 3 {
		val, err := strconv.Atoi(parts[i+1])
		if err != nil {
			return false
@@ -182,3 +222,41 @@ func isValidChunker(chunker string) bool {

	return false
}

// HAMTSizeEstimationMode returns the boxo SizeEstimationMode based on the config value.
func (i *Import) HAMTSizeEstimationMode() uio.SizeEstimationMode {
	switch i.UnixFSHAMTDirectorySizeEstimation.WithDefault(DefaultUnixFSHAMTDirectorySizeEstimation) {
	case HAMTSizeEstimationLinks:
		return uio.SizeEstimationLinks
	case HAMTSizeEstimationBlock:
		return uio.SizeEstimationBlock
	case HAMTSizeEstimationDisabled:
		return uio.SizeEstimationDisabled
	default:
		return uio.SizeEstimationLinks
	}
}

// UnixFSSplitterFunc returns a SplitterGen function based on Import.UnixFSChunker.
// The returned function creates a Splitter for the configured chunking strategy.
// The chunker string is parsed once when this method is called, not on each use.
func (i *Import) UnixFSSplitterFunc() chunk.SplitterGen {
	chunkerStr := i.UnixFSChunker.WithDefault(DefaultUnixFSChunker)

	// Parse size-based chunker (most common case) and return optimized generator
	if sizeStr, ok := strings.CutPrefix(chunkerStr, "size-"); ok {
		if size, err := strconv.ParseInt(sizeStr, 10, 64); err == nil && size > 0 {
			return chunk.SizeSplitterGen(size)
		}
	}

	// For other chunker types (rabin, buzhash) or invalid config,
	// fall back to parsing per-use (these are rare cases)
	return func(r io.Reader) chunk.Splitter {
		s, err := chunk.FromString(r, chunkerStr)
		if err != nil {
			return chunk.DefaultSplitter(r)
		}
		return s
	}
}
config/import_test.go:

@@ -4,6 +4,7 @@ import (
	"strings"
	"testing"

	"github.com/ipfs/boxo/ipld/unixfs/io"
	mh "github.com/multiformats/go-multihash"
)

@@ -406,3 +407,104 @@ func TestIsPowerOfTwo(t *testing.T) {
		})
	}
}

func TestValidateImportConfig_HAMTSizeEstimation(t *testing.T) {
	tests := []struct {
		name    string
		value   string
		wantErr bool
		errMsg  string
	}{
		{name: "valid links", value: HAMTSizeEstimationLinks, wantErr: false},
		{name: "valid block", value: HAMTSizeEstimationBlock, wantErr: false},
		{name: "valid disabled", value: HAMTSizeEstimationDisabled, wantErr: false},
		{name: "invalid unknown", value: "unknown", wantErr: true, errMsg: "must be"},
		{name: "invalid empty", value: "", wantErr: true, errMsg: "must be"},
		{name: "invalid typo", value: "link", wantErr: true, errMsg: "must be"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			cfg := &Import{
				UnixFSHAMTDirectorySizeEstimation: *NewOptionalString(tt.value),
			}

			err := ValidateImportConfig(cfg)

			if tt.wantErr {
				if err == nil {
					t.Errorf("expected error for value=%q, got nil", tt.value)
				} else if tt.errMsg != "" && !strings.Contains(err.Error(), tt.errMsg) {
					t.Errorf("error = %v, want error containing %q", err, tt.errMsg)
				}
			} else {
				if err != nil {
					t.Errorf("unexpected error for value=%q: %v", tt.value, err)
				}
			}
		})
	}
}

func TestValidateImportConfig_DAGLayout(t *testing.T) {
	tests := []struct {
		name    string
		value   string
		wantErr bool
		errMsg  string
	}{
		{name: "valid balanced", value: DAGLayoutBalanced, wantErr: false},
		{name: "valid trickle", value: DAGLayoutTrickle, wantErr: false},
		{name: "invalid unknown", value: "unknown", wantErr: true, errMsg: "must be"},
		{name: "invalid empty", value: "", wantErr: true, errMsg: "must be"},
		{name: "invalid flat", value: "flat", wantErr: true, errMsg: "must be"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			cfg := &Import{
				UnixFSDAGLayout: *NewOptionalString(tt.value),
			}

			err := ValidateImportConfig(cfg)

			if tt.wantErr {
				if err == nil {
					t.Errorf("expected error for value=%q, got nil", tt.value)
				} else if tt.errMsg != "" && !strings.Contains(err.Error(), tt.errMsg) {
					t.Errorf("error = %v, want error containing %q", err, tt.errMsg)
				}
			} else {
				if err != nil {
					t.Errorf("unexpected error for value=%q: %v", tt.value, err)
				}
			}
		})
	}
}

func TestImport_HAMTSizeEstimationMode(t *testing.T) {
	tests := []struct {
		cfg  string
		want io.SizeEstimationMode
	}{
		{HAMTSizeEstimationLinks, io.SizeEstimationLinks},
		{HAMTSizeEstimationBlock, io.SizeEstimationBlock},
		{HAMTSizeEstimationDisabled, io.SizeEstimationDisabled},
		{"", io.SizeEstimationLinks},        // default (unset returns default)
		{"unknown", io.SizeEstimationLinks}, // fallback to default
	}

	for _, tt := range tests {
		t.Run(tt.cfg, func(t *testing.T) {
			var imp Import
			if tt.cfg != "" {
				imp.UnixFSHAMTDirectorySizeEstimation = *NewOptionalString(tt.cfg)
			}
			got := imp.HAMTSizeEstimationMode()
			if got != tt.want {
				t.Errorf("Import.HAMTSizeEstimationMode() with %q = %v, want %v", tt.cfg, got, tt.want)
			}
		})
	}
}
config/profile.go:

@@ -312,45 +312,33 @@ fetching may be degraded.
			return nil
		},
	},
	"unixfs-v0-2015": {
		Description: `Legacy UnixFS import profile for backward-compatible CID generation.
Produces CIDv0 with no raw leaves, sha2-256, 256 KiB chunks, and
link-based HAMT size estimation. Use only when legacy CIDs are required.
See https://github.com/ipfs/specs/pull/499. Alias: legacy-cid-v0`,
		Transform: applyUnixFSv02015,
	},
	"legacy-cid-v0": {
		Description: `Makes UnixFS import produce legacy CIDv0 with no raw leaves, sha2-256 and 256 KiB chunks. This is likely the least optimal preset, use only if legacy behavior is required.`,
		Transform: func(c *Config) error {
			c.Import.CidVersion = *NewOptionalInteger(0)
			c.Import.UnixFSRawLeaves = False
			c.Import.UnixFSChunker = *NewOptionalString("size-262144")
			c.Import.HashFunction = *NewOptionalString("sha2-256")
			c.Import.UnixFSFileMaxLinks = *NewOptionalInteger(174)
			c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0)
			c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(256)
			c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("256KiB")
			return nil
		},
		Description: `Alias for unixfs-v0-2015 profile.`,
		Transform:   applyUnixFSv02015,
	},
	"test-cid-v1": {
		Description: `Makes UnixFS import produce CIDv1 with raw leaves, sha2-256 and 1 MiB chunks (max 174 links per file, 256 per HAMT node, switch dir to HAMT above 256KiB).`,
	"unixfs-v1-2025": {
		Description: `Recommended UnixFS import profile for cross-implementation CID determinism.
Uses CIDv1, raw leaves, sha2-256, 1 MiB chunks, 1024 links per file node,
256 HAMT fanout, and block-based size estimation for HAMT threshold.
See https://github.com/ipfs/specs/pull/499`,
		Transform: func(c *Config) error {
			c.Import.CidVersion = *NewOptionalInteger(1)
			c.Import.UnixFSRawLeaves = True
			c.Import.UnixFSChunker = *NewOptionalString("size-1048576")
			c.Import.HashFunction = *NewOptionalString("sha2-256")
			c.Import.UnixFSFileMaxLinks = *NewOptionalInteger(174)
			c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0)
			c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(256)
			c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("256KiB")
			return nil
		},
	},
	"test-cid-v1-wide": {
		Description: `Makes UnixFS import produce CIDv1 with raw leaves, sha2-256 and 1MiB chunks and wider file DAGs (max 1024 links per every node type, switch dir to HAMT above 1MiB).`,
		Transform: func(c *Config) error {
			c.Import.CidVersion = *NewOptionalInteger(1)
			c.Import.UnixFSRawLeaves = True
			c.Import.UnixFSChunker = *NewOptionalString("size-1048576") // 1MiB
			c.Import.UnixFSChunker = *NewOptionalString("size-1048576") // 1 MiB
			c.Import.HashFunction = *NewOptionalString("sha2-256")
			c.Import.UnixFSFileMaxLinks = *NewOptionalInteger(1024)
			c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0) // no limit here, use size-based Import.UnixFSHAMTDirectorySizeThreshold instead
			c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(1024)
			c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("1MiB") // 1MiB
			c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0)
			c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(256)
			c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("256KiB")
			c.Import.UnixFSHAMTDirectorySizeEstimation = *NewOptionalString(HAMTSizeEstimationBlock)
			c.Import.UnixFSDAGLayout = *NewOptionalString(DAGLayoutBalanced)
			return nil
		},
	},
@@ -435,3 +423,18 @@ func mapKeys(m map[string]struct{}) []string {
	}
	return out
}

// applyUnixFSv02015 applies the legacy UnixFS v0 (2015) import settings.
func applyUnixFSv02015(c *Config) error {
	c.Import.CidVersion = *NewOptionalInteger(0)
	c.Import.UnixFSRawLeaves = False
	c.Import.UnixFSChunker = *NewOptionalString("size-262144") // 256 KiB
	c.Import.HashFunction = *NewOptionalString("sha2-256")
	c.Import.UnixFSFileMaxLinks = *NewOptionalInteger(174)
	c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0)
	c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(256)
	c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("256KiB")
	c.Import.UnixFSHAMTDirectorySizeEstimation = *NewOptionalString(HAMTSizeEstimationLinks)
	c.Import.UnixFSDAGLayout = *NewOptionalString(DAGLayoutBalanced)
	return nil
}
core/commands/add.go:

@@ -15,6 +15,7 @@ import (

	"github.com/cheggaaa/pb"
	"github.com/ipfs/boxo/files"
	uio "github.com/ipfs/boxo/ipld/unixfs/io"
	mfs "github.com/ipfs/boxo/mfs"
	"github.com/ipfs/boxo/path"
	"github.com/ipfs/boxo/verifcid"
@@ -68,6 +69,7 @@ const (
	mtimeNsecsOptionName      = "mtime-nsecs"
	fastProvideRootOptionName = "fast-provide-root"
	fastProvideWaitOptionName = "fast-provide-wait"
	emptyDirsOptionName       = "empty-dirs"
)

const (
@@ -147,6 +149,18 @@ to find it in the future:
See 'ipfs files --help' to learn more about using MFS
for keeping track of added files and directories.

SYMLINK HANDLING:

By default, symbolic links are preserved as UnixFS symlink nodes that store
the target path. Use --dereference-symlinks to resolve symlinks to their
target content instead:

  > ipfs add -r --dereference-symlinks ./mydir

This resolves all symlinks, including CLI arguments and those found inside
directories. Symlinks to files become regular file content, symlinks to
directories are traversed and their contents are added.

CHUNKING EXAMPLES:

The chunker option, '-s', specifies the chunking strategy that dictates
@@ -200,11 +214,13 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import
	Options: []cmds.Option{
		// Input Processing
		cmds.OptionRecursivePath, // a builtin option that allows recursive paths (-r, --recursive)
		cmds.OptionDerefArgs,     // a builtin option that resolves passed in filesystem links (--dereference-args)
		cmds.OptionDerefArgs,     // DEPRECATED: use --dereference-symlinks instead
		cmds.OptionStdinName,     // a builtin option that optionally allows wrapping stdin into a named file
		cmds.OptionHidden,
		cmds.OptionIgnore,
		cmds.OptionIgnoreRules,
		cmds.BoolOption(emptyDirsOptionName, "E", "Include empty directories in the import.").WithDefault(config.DefaultUnixFSIncludeEmptyDirs),
		cmds.OptionDerefSymlinks, // resolve symlinks to their target content
		// Output Control
		cmds.BoolOption(quietOptionName, "q", "Write minimal output."),
		cmds.BoolOption(quieterOptionName, "Q", "Write only final hash."),
@@ -274,7 +290,7 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import
		}

		progress, _ := req.Options[progressOptionName].(bool)
		trickle, _ := req.Options[trickleOptionName].(bool)
		trickle, trickleSet := req.Options[trickleOptionName].(bool)
		wrap, _ := req.Options[wrapOptionName].(bool)
		onlyHash, _ := req.Options[onlyHashOptionName].(bool)
		silent, _ := req.Options[silentOptionName].(bool)
@@ -285,6 +301,7 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import
		maxFileLinks, maxFileLinksSet := req.Options[maxFileLinksOptionName].(int)
		maxDirectoryLinks, maxDirectoryLinksSet := req.Options[maxDirectoryLinksOptionName].(int)
		maxHAMTFanout, maxHAMTFanoutSet := req.Options[maxHAMTFanoutOptionName].(int)
		var sizeEstimationMode uio.SizeEstimationMode
		nocopy, _ := req.Options[noCopyOptionName].(bool)
		fscache, _ := req.Options[fstoreCacheOptionName].(bool)
		cidVer, cidVerSet := req.Options[cidVersionOptionName].(int)
@@ -312,6 +329,17 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import
		mtimeNsecs, _ := req.Options[mtimeNsecsOptionName].(uint)
		fastProvideRoot, fastProvideRootSet := req.Options[fastProvideRootOptionName].(bool)
		fastProvideWait, fastProvideWaitSet := req.Options[fastProvideWaitOptionName].(bool)
		emptyDirs, _ := req.Options[emptyDirsOptionName].(bool)

		// Note: --dereference-args is deprecated but still works for backwards compatibility.
		// The help text marks it as DEPRECATED. Users should use --dereference-symlinks instead,
		// which is a superset (resolves both CLI arg symlinks AND nested symlinks in directories).

		// Wire --trickle from config
		if !trickleSet && !cfg.Import.UnixFSDAGLayout.IsDefault() {
			layout := cfg.Import.UnixFSDAGLayout.WithDefault(config.DefaultUnixFSDAGLayout)
			trickle = layout == config.DAGLayoutTrickle
		}

		if chunker == "" {
			chunker = cfg.Import.UnixFSChunker.WithDefault(config.DefaultUnixFSChunker)
@@ -348,6 +376,9 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import
			maxHAMTFanout = int(cfg.Import.UnixFSHAMTDirectoryMaxFanout.WithDefault(config.DefaultUnixFSHAMTDirectoryMaxFanout))
		}

		// SizeEstimationMode is always set from config (no CLI flag)
		sizeEstimationMode = cfg.Import.HAMTSizeEstimationMode()

		fastProvideRoot = config.ResolveBoolFromConfig(fastProvideRoot, fastProvideRootSet, cfg.Import.FastProvideRoot, config.DefaultFastProvideRoot)
		fastProvideWait = config.ResolveBoolFromConfig(fastProvideWait, fastProvideWaitSet, cfg.Import.FastProvideWait, config.DefaultFastProvideWait)

@@ -409,6 +440,8 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import

			options.Unixfs.PreserveMode(preserveMode),
			options.Unixfs.PreserveMtime(preserveMtime),

			options.Unixfs.IncludeEmptyDirs(emptyDirs),
		}

		if mode != 0 {
@@ -441,6 +474,9 @@ https://github.com/ipfs/kubo/blob/master/docs/config.md#import
			opts = append(opts, options.Unixfs.MaxHAMTFanout(maxHAMTFanout))
		}

		// SizeEstimationMode is always set from config
		opts = append(opts, options.Unixfs.SizeEstimationMode(sizeEstimationMode))

		if trickle {
			opts = append(opts, options.Unixfs.Layout(options.TrickleLayout))
		}
core/commands/files.go:

@@ -28,6 +28,7 @@ import (
	offline "github.com/ipfs/boxo/exchange/offline"
	dag "github.com/ipfs/boxo/ipld/merkledag"
	ft "github.com/ipfs/boxo/ipld/unixfs"
	uio "github.com/ipfs/boxo/ipld/unixfs/io"
	mfs "github.com/ipfs/boxo/mfs"
	"github.com/ipfs/boxo/path"
	cid "github.com/ipfs/go-cid"
@@ -499,7 +500,12 @@ being GC'ed.
			return err
		}

		prefix, err := getPrefixNew(req)
		cfg, err := nd.Repo.Config()
		if err != nil {
			return err
		}

		prefix, err := getPrefixNew(req, &cfg.Import)
		if err != nil {
			return err
		}
@@ -550,7 +556,9 @@ being GC'ed.

		mkParents, _ := req.Options[filesParentsOptionName].(bool)
		if mkParents {
			err := ensureContainingDirectoryExists(nd.FilesRoot, dst, prefix)
			maxDirLinks := int(cfg.Import.UnixFSDirectoryMaxLinks.WithDefault(config.DefaultUnixFSDirectoryMaxLinks))
			sizeEstimationMode := cfg.Import.HAMTSizeEstimationMode()
			err := ensureContainingDirectoryExists(nd.FilesRoot, dst, prefix, maxDirLinks, &sizeEstimationMode)
			if err != nil {
				return err
			}
@@ -989,9 +997,13 @@ stat' on the file or any of its ancestors.
WARNING:

The CID produced by 'files write' will be different from 'ipfs add' because
'ipfs file write' creates a trickle-dag optimized for append-only operations
'ipfs files write' creates a trickle-dag optimized for append-only operations.
See '--trickle' in 'ipfs add --help' for more information.

NOTE: The 'Import.UnixFSFileMaxLinks' config option does not apply to this command.
Trickle DAG has a fixed internal structure optimized for append operations.
To use configurable max-links, use 'ipfs add' with balanced DAG layout.

If you want to add a file without modifying an existing one,
use 'ipfs add' with '--to-files':

@@ -1048,7 +1060,7 @@ See '--to-files' in 'ipfs add --help' for more information.
			rawLeaves = cfg.Import.UnixFSRawLeaves.WithDefault(config.DefaultUnixFSRawLeaves)
		}

		prefix, err := getPrefixNew(req)
		prefix, err := getPrefixNew(req, &cfg.Import)
		if err != nil {
			return err
		}
@@ -1059,7 +1071,9 @@ See '--to-files' in 'ipfs add --help' for more information.
		}

		if mkParents {
			err := ensureContainingDirectoryExists(nd.FilesRoot, path, prefix)
			maxDirLinks := int(cfg.Import.UnixFSDirectoryMaxLinks.WithDefault(config.DefaultUnixFSDirectoryMaxLinks))
			sizeEstimationMode := cfg.Import.HAMTSizeEstimationMode()
			err := ensureContainingDirectoryExists(nd.FilesRoot, path, prefix, maxDirLinks, &sizeEstimationMode)
			if err != nil {
				return err
			}
@@ -1163,6 +1177,11 @@ Examples:
			return err
		}

		cfg, err := n.Repo.Config()
		if err != nil {
			return err
		}

		dashp, _ := req.Options[filesParentsOptionName].(bool)
		dirtomake, err := checkPath(req.Arguments[0])
		if err != nil {
@@ -1175,16 +1194,21 @@ Examples:
			return err
		}

		prefix, err := getPrefix(req)
		prefix, err := getPrefix(req, &cfg.Import)
		if err != nil {
			return err
		}
		root := n.FilesRoot

		maxDirLinks := int(cfg.Import.UnixFSDirectoryMaxLinks.WithDefault(config.DefaultUnixFSDirectoryMaxLinks))
		sizeEstimationMode := cfg.Import.HAMTSizeEstimationMode()

		err = mfs.Mkdir(root, dirtomake, mfs.MkdirOpts{
			Mkparents:  dashp,
			Flush:      flush,
			CidBuilder: prefix,
			Mkparents:          dashp,
			Flush:              flush,
			CidBuilder:         prefix,
			MaxLinks:           maxDirLinks,
			SizeEstimationMode: &sizeEstimationMode,
		})

		return err
@@ -1262,7 +1286,9 @@ Change the CID version or hash function of the root node of a given path.

		flush, _ := req.Options[filesFlushOptionName].(bool)

		prefix, err := getPrefix(req)
		// Note: files chcid is for explicitly changing CID format, so we don't
		// fall back to Import config here. If no options are provided, it does nothing.
		prefix, err := getPrefix(req, nil)
		if err != nil {
			return err
		}
@@ -1420,10 +1446,20 @@ func removePath(filesRoot *mfs.Root, path string, force bool, dashr bool) error
	return pdir.Flush()
}

func getPrefixNew(req *cmds.Request) (cid.Builder, error) {
func getPrefixNew(req *cmds.Request, importCfg *config.Import) (cid.Builder, error) {
	cidVer, cidVerSet := req.Options[filesCidVersionOptionName].(int)
	hashFunStr, hashFunSet := req.Options[filesHashOptionName].(string)

	// Fall back to Import config if CLI options not set
	if !cidVerSet && importCfg != nil && !importCfg.CidVersion.IsDefault() {
		cidVer = int(importCfg.CidVersion.WithDefault(config.DefaultCidVersion))
		cidVerSet = true
	}
	if !hashFunSet && importCfg != nil && !importCfg.HashFunction.IsDefault() {
		hashFunStr = importCfg.HashFunction.WithDefault(config.DefaultHashFunction)
		hashFunSet = true
	}

	if !cidVerSet && !hashFunSet {
		return nil, nil
	}
@@ -1449,10 +1485,20 @@ func getPrefixNew(req *cmds.Request) (cid.Builder, error) {
	return &prefix, nil
}

func getPrefix(req *cmds.Request) (cid.Builder, error) {
func getPrefix(req *cmds.Request, importCfg *config.Import) (cid.Builder, error) {
	cidVer, cidVerSet := req.Options[filesCidVersionOptionName].(int)
	hashFunStr, hashFunSet := req.Options[filesHashOptionName].(string)

	// Fall back to Import config if CLI options not set
	if !cidVerSet && importCfg != nil && !importCfg.CidVersion.IsDefault() {
		cidVer = int(importCfg.CidVersion.WithDefault(config.DefaultCidVersion))
		cidVerSet = true
	}
	if !hashFunSet && importCfg != nil && !importCfg.HashFunction.IsDefault() {
		hashFunStr = importCfg.HashFunction.WithDefault(config.DefaultHashFunction)
		hashFunSet = true
	}

	if !cidVerSet && !hashFunSet {
		return nil, nil
	}
@@ -1478,7 +1524,7 @@ func getPrefix(req *cmds.Request) (cid.Builder, error) {
	return &prefix, nil
}

func ensureContainingDirectoryExists(r *mfs.Root, path string, builder cid.Builder) error {
func ensureContainingDirectoryExists(r *mfs.Root, path string, builder cid.Builder, maxLinks int, sizeEstimationMode *uio.SizeEstimationMode) error {
	dirtomake := gopath.Dir(path)

	if dirtomake == "/" {
@@ -1486,8 +1532,10 @@ func ensureContainingDirectoryExists(r *mfs.Root, path string, builder cid.Build
	}

	return mfs.Mkdir(r, dirtomake, mfs.MkdirOpts{
		Mkparents:  true,
		CidBuilder: builder,
		Mkparents:          true,
		CidBuilder:         builder,
		MaxLinks:           maxLinks,
		SizeEstimationMode: sizeEstimationMode,
	})
}
core/coreapi/unixfs.go:

@@ -177,12 +177,18 @@ func (api *UnixfsAPI) Add(ctx context.Context, files files.Node, opts ...options
	if settings.MaxHAMTFanoutSet {
		fileAdder.MaxHAMTFanout = settings.MaxHAMTFanout
	}
	if settings.SizeEstimationModeSet {
		fileAdder.SizeEstimationMode = settings.SizeEstimationMode
	}
	fileAdder.NoCopy = settings.NoCopy
	fileAdder.CidBuilder = prefix
	fileAdder.PreserveMode = settings.PreserveMode
	fileAdder.PreserveMtime = settings.PreserveMtime
	fileAdder.FileMode = settings.Mode
	fileAdder.FileMtime = settings.Mtime
	if settings.IncludeEmptyDirsSet {
		fileAdder.IncludeEmptyDirs = settings.IncludeEmptyDirs
	}

	switch settings.Layout {
	case options.BalancedLayout:
core/coreiface/options/unixfs.go:

@@ -24,16 +24,18 @@ type UnixfsAddSettings struct {
	CidVersion int
	MhType     uint64

	Inline               bool
	InlineLimit          int
	RawLeaves            bool
	RawLeavesSet         bool
	MaxFileLinks         int
	MaxFileLinksSet      bool
	MaxDirectoryLinks    int
	MaxDirectoryLinksSet bool
	MaxHAMTFanout        int
	MaxHAMTFanoutSet     bool
	Inline                bool
	InlineLimit           int
	RawLeaves             bool
	RawLeavesSet          bool
	MaxFileLinks          int
	MaxFileLinksSet       bool
	MaxDirectoryLinks     int
	MaxDirectoryLinksSet  bool
	MaxHAMTFanout         int
	MaxHAMTFanoutSet      bool
	SizeEstimationMode    *io.SizeEstimationMode
	SizeEstimationModeSet bool

	Chunker string
	Layout  Layout
@@ -48,10 +50,12 @@ type UnixfsAddSettings struct {
	Silent   bool
	Progress bool

	PreserveMode  bool
	PreserveMtime bool
	Mode          os.FileMode
	Mtime         time.Time
	PreserveMode        bool
	PreserveMtime       bool
	Mode                os.FileMode
	Mtime               time.Time
	IncludeEmptyDirs    bool
	IncludeEmptyDirsSet bool
}

type UnixfsLsSettings struct {
@@ -93,10 +97,12 @@ func UnixfsAddOptions(opts ...UnixfsAddOption) (*UnixfsAddSettings, cid.Prefix,
		Silent:   false,
		Progress: false,

		PreserveMode:  false,
		PreserveMtime: false,
		Mode:          0,
		Mtime:         time.Time{},
		PreserveMode:        false,
		PreserveMtime:       false,
		Mode:                0,
		Mtime:               time.Time{},
		IncludeEmptyDirs:    true, // default: include empty directories
		IncludeEmptyDirsSet: false,
	}

	for _, opt := range opts {
@@ -235,6 +241,15 @@ func (unixfsOpts) MaxHAMTFanout(n int) UnixfsAddOption {
	}
}

// SizeEstimationMode specifies how directory size is estimated for HAMT sharding decisions.
func (unixfsOpts) SizeEstimationMode(mode io.SizeEstimationMode) UnixfsAddOption {
	return func(settings *UnixfsAddSettings) error {
		settings.SizeEstimationMode = &mode
		settings.SizeEstimationModeSet = true
		return nil
	}
}

// Inline tells the adder to inline small blocks into CIDs
func (unixfsOpts) Inline(enable bool) UnixfsAddOption {
	return func(settings *UnixfsAddSettings) error {
@@ -396,3 +411,12 @@ func (unixfsOpts) Mtime(seconds int64, nsecs uint32) UnixfsAddOption {
		return nil
	}
}

// IncludeEmptyDirs tells the adder to include empty directories in the DAG
func (unixfsOpts) IncludeEmptyDirs(include bool) UnixfsAddOption {
	return func(settings *UnixfsAddSettings) error {
		settings.IncludeEmptyDirs = include
		settings.IncludeEmptyDirsSet = true
		return nil
	}
}
core/coreunix/add.go:

@@ -26,6 +26,7 @@ import (
	"github.com/ipfs/go-cid"
	ipld "github.com/ipfs/go-ipld-format"
	logging "github.com/ipfs/go-log/v2"
	"github.com/ipfs/kubo/config"
	coreiface "github.com/ipfs/kubo/core/coreiface"

	"github.com/ipfs/kubo/tracing"
@@ -52,49 +53,52 @@ func NewAdder(ctx context.Context, p pin.Pinner, bs bstore.GCLocker, ds ipld.DAG
	bufferedDS := ipld.NewBufferedDAG(ctx, ds)

	return &Adder{
		ctx:           ctx,
		pinning:       p,
		gcLocker:      bs,
		dagService:    ds,
		bufferedDS:    bufferedDS,
		Progress:      false,
		Pin:           true,
		Trickle:       false,
		MaxLinks:      ihelper.DefaultLinksPerBlock,
		MaxHAMTFanout: uio.DefaultShardWidth,
		Chunker:       "",
		ctx:              ctx,
		pinning:          p,
		gcLocker:         bs,
		dagService:       ds,
		bufferedDS:       bufferedDS,
		Progress:         false,
		Pin:              true,
		Trickle:          false,
		MaxLinks:         ihelper.DefaultLinksPerBlock,
		MaxHAMTFanout:    uio.DefaultShardWidth,
		Chunker:          "",
		IncludeEmptyDirs: config.DefaultUnixFSIncludeEmptyDirs,
	}, nil
}

// Adder holds the switches passed to the `add` command.
type Adder struct {
	ctx               context.Context
	pinning           pin.Pinner
	gcLocker          bstore.GCLocker
	dagService        ipld.DAGService
	bufferedDS        *ipld.BufferedDAG
	Out               chan<- interface{}
	Progress          bool
	Pin               bool
	PinName           string
	Trickle           bool
	RawLeaves         bool
	MaxLinks          int
	MaxDirectoryLinks int
	MaxHAMTFanout     int
	Silent            bool
	NoCopy            bool
	Chunker           string
	mroot             *mfs.Root
	unlocker          bstore.Unlocker
	tempRoot          cid.Cid
	CidBuilder        cid.Builder
	liveNodes         uint64
	ctx                context.Context
	pinning            pin.Pinner
	gcLocker           bstore.GCLocker
	dagService         ipld.DAGService
	bufferedDS         *ipld.BufferedDAG
	Out                chan<- interface{}
	Progress           bool
	Pin                bool
	PinName            string
	Trickle            bool
	RawLeaves          bool
	MaxLinks           int
	MaxDirectoryLinks  int
	MaxHAMTFanout      int
	SizeEstimationMode *uio.SizeEstimationMode
	Silent             bool
	NoCopy             bool
	Chunker            string
	mroot              *mfs.Root
	unlocker           bstore.Unlocker
	tempRoot           cid.Cid
	CidBuilder         cid.Builder
	liveNodes          uint64

	PreserveMode  bool
	PreserveMtime bool
	FileMode      os.FileMode
	FileMtime     time.Time
	PreserveMode     bool
	PreserveMtime    bool
	FileMode         os.FileMode
	FileMtime        time.Time
	IncludeEmptyDirs bool
}

func (adder *Adder) mfsRoot() (*mfs.Root, error) {
@@ -104,9 +108,10 @@ func (adder *Adder) mfsRoot() (*mfs.Root, error) {

	// Note, this adds it to DAGService already.
	mr, err := mfs.NewEmptyRoot(adder.ctx, adder.dagService, nil, nil, mfs.MkdirOpts{
		CidBuilder:    adder.CidBuilder,
		MaxLinks:      adder.MaxDirectoryLinks,
		MaxHAMTFanout: adder.MaxHAMTFanout,
		CidBuilder:         adder.CidBuilder,
		MaxLinks:           adder.MaxDirectoryLinks,
		MaxHAMTFanout:      adder.MaxHAMTFanout,
		SizeEstimationMode: adder.SizeEstimationMode,
	})
	if err != nil {
		return nil, err
@@ -270,11 +275,12 @@ func (adder *Adder) addNode(node ipld.Node, path string) error {
	dir := gopath.Dir(path)
	if dir != "." {
		opts := mfs.MkdirOpts{
			Mkparents:     true,
			Flush:         false,
			CidBuilder:    adder.CidBuilder,
			MaxLinks:      adder.MaxDirectoryLinks,
			MaxHAMTFanout: adder.MaxHAMTFanout,
			Mkparents:          true,
			Flush:              false,
			CidBuilder:         adder.CidBuilder,
			MaxLinks:           adder.MaxDirectoryLinks,
			MaxHAMTFanout:      adder.MaxHAMTFanout,
			SizeEstimationMode: adder.SizeEstimationMode,
		}
		if err := mfs.Mkdir(mr, dir, opts); err != nil {
			return err
@@ -480,15 +486,34 @@ func (adder *Adder) addFile(path string, file files.File) error {
func (adder *Adder) addDir(ctx context.Context, path string, dir files.Directory, toplevel bool) error {
	log.Infof("adding directory: %s", path)

	// Peek at first entry to check if directory is empty.
	// We advance the iterator once here and continue from this position
	// in the processing loop below. This avoids allocating a slice to
	// collect all entries just to check for emptiness.
	it := dir.Entries()
	hasEntry := it.Next()
	if !hasEntry {
		if err := it.Err(); err != nil {
			return err
		}
		// Directory is empty. Skip it unless IncludeEmptyDirs is set or
		// this is the toplevel directory (we always include the root).
		if !adder.IncludeEmptyDirs && !toplevel {
			log.Debugf("skipping empty directory: %s", path)
			return nil
		}
	}

	// if we need to store mode or modification time then create a new root which includes that data
	if toplevel && (adder.FileMode != 0 || !adder.FileMtime.IsZero()) {
		mr, err := mfs.NewEmptyRoot(ctx, adder.dagService, nil, nil,
			mfs.MkdirOpts{
				CidBuilder:    adder.CidBuilder,
				MaxLinks:      adder.MaxDirectoryLinks,
				MaxHAMTFanout: adder.MaxHAMTFanout,
				ModTime:       adder.FileMtime,
				Mode:          adder.FileMode,
				CidBuilder:         adder.CidBuilder,
				MaxLinks:           adder.MaxDirectoryLinks,
				MaxHAMTFanout:      adder.MaxHAMTFanout,
				ModTime:            adder.FileMtime,
				Mode:               adder.FileMode,
				SizeEstimationMode: adder.SizeEstimationMode,
			})
		if err != nil {
			return err
@@ -502,26 +527,28 @@ func (adder *Adder) addDir(ctx context.Context, path string, dir files.Directory
			return err
		}
		err = mfs.Mkdir(mr, path, mfs.MkdirOpts{
			Mkparents:     true,
			Flush:         false,
			CidBuilder:    adder.CidBuilder,
			Mode:          adder.FileMode,
			ModTime:       adder.FileMtime,
			MaxLinks:      adder.MaxDirectoryLinks,
			MaxHAMTFanout: adder.MaxHAMTFanout,
			Mkparents:          true,
			Flush:              false,
			CidBuilder:         adder.CidBuilder,
			Mode:               adder.FileMode,
			ModTime:            adder.FileMtime,
			MaxLinks:           adder.MaxDirectoryLinks,
			MaxHAMTFanout:      adder.MaxHAMTFanout,
			SizeEstimationMode: adder.SizeEstimationMode,
		})
		if err != nil {
			return err
		}
	}

	it := dir.Entries()
	for it.Next() {
	// Process directory entries. The iterator was already advanced once above
	// to peek for emptiness, so we start from that position.
	for hasEntry {
		fpath := gopath.Join(path, it.Name())
		err := adder.addFileNode(ctx, fpath, it.Node(), false)
		if err != nil {
		if err := adder.addFileNode(ctx, fpath, it.Node(), false); err != nil {
			return err
		}
		hasEntry = it.Next()
	}

	return it.Err()
core/node/core.go:

@@ -243,7 +243,24 @@ func Files(strategy string) func(mctx helpers.MetricsCtx, lc fx.Lifecycle, repo
			prov = nil
		}

		root, err := mfs.NewRoot(ctx, dag, nd, pf, prov)
		// Get configured settings from Import config
		cfg, err := repo.Config()
		if err != nil {
			return nil, fmt.Errorf("failed to get config: %w", err)
		}
		chunkerGen := cfg.Import.UnixFSSplitterFunc()
		maxDirLinks := int(cfg.Import.UnixFSDirectoryMaxLinks.WithDefault(config.DefaultUnixFSDirectoryMaxLinks))
		maxHAMTFanout := int(cfg.Import.UnixFSHAMTDirectoryMaxFanout.WithDefault(config.DefaultUnixFSHAMTDirectoryMaxFanout))
		hamtShardingSize := int(cfg.Import.UnixFSHAMTDirectorySizeThreshold.WithDefault(config.DefaultUnixFSHAMTDirectorySizeThreshold))
		sizeEstimationMode := cfg.Import.HAMTSizeEstimationMode()

		root, err := mfs.NewRoot(ctx, dag, nd, pf, prov,
			mfs.WithChunker(chunkerGen),
			mfs.WithMaxLinks(maxDirLinks),
			mfs.WithMaxHAMTFanout(maxHAMTFanout),
			mfs.WithHAMTShardingSize(hamtShardingSize),
			mfs.WithSizeEstimationMode(sizeEstimationMode),
		)
		if err != nil {
			return nil, fmt.Errorf("failed to initialize MFS root from %s stored at %s: %w. "+
				"If corrupted, use 'ipfs files chroot' to reset (see --help)", nd.Cid(), FilesRootDatastoreKey, err)
core/node/groups.go:

@@ -438,12 +438,13 @@ func IPFS(ctx context.Context, bcfg *BuildCfg) fx.Option {
		return fx.Error(err)
	}

	// Auto-sharding settings
	shardSingThresholdInt := cfg.Import.UnixFSHAMTDirectorySizeThreshold.WithDefault(config.DefaultUnixFSHAMTDirectorySizeThreshold)
	// Directory sharding settings from Import config.
	// These globals affect both `ipfs add` and MFS (`ipfs files` API).
	shardSizeThreshold := cfg.Import.UnixFSHAMTDirectorySizeThreshold.WithDefault(config.DefaultUnixFSHAMTDirectorySizeThreshold)
	shardMaxFanout := cfg.Import.UnixFSHAMTDirectoryMaxFanout.WithDefault(config.DefaultUnixFSHAMTDirectoryMaxFanout)
	// TODO: avoid overriding this globally, see if we can extend Directory interface like Get/SetMaxLinks from https://github.com/ipfs/boxo/pull/906
	uio.HAMTShardingSize = int(shardSingThresholdInt)
	uio.HAMTShardingSize = int(shardSizeThreshold)
	uio.DefaultShardWidth = int(shardMaxFanout)
	uio.HAMTSizeEstimation = cfg.Import.HAMTSizeEstimationMode()

	providerStrategy := cfg.Provide.Strategy.WithDefault(config.DefaultProvideStrategy)
@ -10,6 +10,7 @@ This release was brought to you by the [Shipyard](https://ipshipyard.com/) team.
|
||||
|
||||
- [Overview](#overview)
|
||||
- [🔦 Highlights](#-highlights)
|
||||
- [🔢 UnixFS CID Profiles (IPIP-499)](#-unixfs-cid-profiles-ipip-499)
|
||||
- [🧹 Automatic cleanup of interrupted imports](#-automatic-cleanup-of-interrupted-imports)
|
||||
- [Routing V1 HTTP API now exposed by default](#routing-v1-http-api-now-exposed-by-default)
|
||||
- [Track total size when adding pins](#track-total-size-when-adding-pins)
|
||||
@ -34,6 +35,39 @@ This release was brought to you by the [Shipyard](https://ipshipyard.com/) team.
|
||||
|
||||
### 🔦 Highlights
|
||||
|
||||
#### 🔢 UnixFS CID Profiles (IPIP-499)
|
||||
|
||||
[IPIP-499](https://github.com/ipfs/specs/pull/499) CID Profiles are presets that pin down how files get split into blocks and organized into directories. Useful when you need the same CID for the same data across different software or versions.
|
||||
|
||||
**New configuration [profiles](https://github.com/ipfs/kubo/blob/master/docs/config.md#profiles)**
|
||||
|
||||
- `unixfs-v1-2025`: modern CIDv1 profile with improved defaults
|
||||
- `unixfs-v0-2015` (alias `legacy-cid-v0`): best-effort legacy CIDv0 behavior
|
||||
|
||||
Apply with: `ipfs config profile apply unixfs-v1-2025`
|
||||
|
||||
The `test-cid-v1` and `test-cid-v1-wide` profiles have been removed. Use `unixfs-v1-2025` or manually set specific `Import.*` settings instead.

**New [`Import.*`](https://github.com/ipfs/kubo/blob/master/docs/config.md#import) options**

- `Import.UnixFSHAMTDirectorySizeEstimation`: estimation mode (`links`, `block`, or `disabled`)
- `Import.UnixFSDAGLayout`: DAG layout (`balanced` or `trickle`)

**New [`ipfs add`](https://docs.ipfs.tech/reference/kubo/cli/#ipfs-add) CLI flags**

- `--dereference-symlinks` resolves all symlinks to their target content, replacing the deprecated `--dereference-args` which only resolved CLI argument symlinks
- `--empty-dirs` / `-E` controls inclusion of empty directories (default: true)
- `--hidden` / `-H` includes hidden files (default: false)
- `--trickle`'s implicit default can be adjusted via `Import.UnixFSDAGLayout`
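
For example (path illustrative), `ipfs add -r --hidden --empty-dirs=false --dereference-symlinks ./photos` includes dotfiles, skips empty directories, and replaces symlinks with the content they point to.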

**`ipfs files write` fix for CIDv1 directories**

When writing to MFS directories that use CIDv1 (via `--cid-version=1` or `ipfs files chcid`), single-block files now produce raw block CIDs (like `bafkrei...`), matching the behavior of `ipfs add --raw-leaves`. Previously, MFS would wrap single-block files in dag-pb even when raw leaves were enabled. CIDv0 directories continue to use dag-pb.
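
For example, after `ipfs files chcid --cid-version 1 /mydir` (path illustrative), writing a small file under `/mydir` yields a raw `bafkrei...` CID rather than a dag-pb wrapped one.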

**HAMT Threshold Fix**

HAMT directory sharding threshold changed from `>=` to `>` to match the Go docs and JS implementation ([ipfs/boxo@6707376](https://github.com/ipfs/boxo/commit/6707376002a3d4ba64895749ce9be2e00d265ed5)). A directory exactly at 256 KiB now stays as a basic directory instead of converting to HAMT. This is a theoretical breaking change, but unlikely to impact real-world users, as it requires a directory to sit exactly at the threshold boundary. If you depend on the old behavior, set [`Import.UnixFSHAMTDirectorySizeThreshold`](https://github.com/ipfs/kubo/blob/master/docs/config.md#importunixfshamtdirectorysizethreshold) 1 byte lower.
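
A minimal sketch of the corrected check (hypothetical names; the real logic lives in boxo's `ipld/unixfs/io/directory.go`):

```go
// needsHAMT reports whether a basic directory should convert to a HAMT shard.
// estimatedSize comes from the configured size-estimation mode; threshold is
// Import.UnixFSHAMTDirectorySizeThreshold (default 256 KiB).
func needsHAMT(estimatedSize, threshold int) bool {
	return estimatedSize > threshold // previously: >=
}
```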

#### 🧹 Automatic cleanup of interrupted imports

If you cancel `ipfs add` or `ipfs dag import` mid-operation, Kubo now automatically cleans up incomplete data on the next daemon start. Previously, interrupted imports left orphaned, unpinned blocks in your repository that were difficult to identify and could only be removed by running explicit garbage collection.

docs/config.md
@@ -242,6 +242,8 @@ config file at runtime.
- [`Import.UnixFSDirectoryMaxLinks`](#importunixfsdirectorymaxlinks)
- [`Import.UnixFSHAMTDirectoryMaxFanout`](#importunixfshamtdirectorymaxfanout)
- [`Import.UnixFSHAMTDirectorySizeThreshold`](#importunixfshamtdirectorysizethreshold)
- [`Import.UnixFSHAMTDirectorySizeEstimation`](#importunixfshamtdirectorysizeestimation)
- [`Import.UnixFSDAGLayout`](#importunixfsdaglayout)
- [`Version`](#version)
- [`Version.AgentSuffix`](#versionagentsuffix)
- [`Version.SwarmCheckEnabled`](#versionswarmcheckenabled)
@@ -263,9 +265,9 @@ config file at runtime.
- [`lowpower` profile](#lowpower-profile)
- [`announce-off` profile](#announce-off-profile)
- [`announce-on` profile](#announce-on-profile)
- [`unixfs-v0-2015` profile](#unixfs-v0-2015-profile)
- [`legacy-cid-v0` profile](#legacy-cid-v0-profile)
- [`test-cid-v1` profile](#test-cid-v1-profile)
- [`test-cid-v1-wide` profile](#test-cid-v1-wide-profile)
- [`unixfs-v1-2025` profile](#unixfs-v1-2025-profile)
- [Security](#security)
- [Port and Network Exposure](#port-and-network-exposure)
- [Security Best Practices](#security-best-practices)
@@ -3656,9 +3658,11 @@ Type: `flag`

## `Import`

Options to configure the default options used for ingesting data, in commands such as `ipfs add` or `ipfs block put`. All affected commands are detailed per option.
Options to configure the default parameters used for ingesting data, in commands such as `ipfs add` or `ipfs block put`. All affected commands are detailed per option.

Note that using flags will override the options defined here.
These options implement [IPIP-499: UnixFS CID Profiles](https://github.com/ipfs/specs/pull/499) for reproducible CID generation across IPFS implementations. Instead of configuring individual options, you can apply a predefined profile with `ipfs config profile apply <profile-name>`. See [Profiles](#profiles) for available options like `unixfs-v1-2025`.

Note that using CLI flags will override the options defined here.

### `Import.CidVersion`

@@ -3838,6 +3842,42 @@ Default: `256KiB` (may change, inspect `DefaultUnixFSHAMTDirectorySizeThreshold`

Type: [`optionalBytes`](#optionalbytes)

### `Import.UnixFSHAMTDirectorySizeEstimation`

Controls how directory size is estimated when deciding whether to switch
from a basic UnixFS directory to HAMT sharding.

Accepted values:

- `links` (default): Legacy estimation using sum of link names and CID byte lengths.
- `block`: Full serialized dag-pb block size for accurate threshold decisions.
- `disabled`: Disable HAMT sharding entirely (directories always remain basic).

The `block` estimation is recommended for new profiles as it provides more
accurate threshold decisions and better cross-implementation consistency.
See [IPIP-499](https://github.com/ipfs/specs/pull/499) for more details.

Commands affected: `ipfs add`

Default: `links`

Type: `optionalString`
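
A rough sketch of what the legacy `links` mode counts (hypothetical helper; boxo's actual logic lives in `ipld/unixfs/io/directory.go`):

```go
// estimateDirSizeLinks mirrors the legacy "links" mode: the sum of link-name
// lengths plus binary CID lengths, ignoring dag-pb protobuf framing.
func estimateDirSizeLinks(names []string, cidLens []int) int {
	size := 0
	for i, name := range names {
		size += len(name) + cidLens[i]
	}
	return size
}
```

The `block` mode instead measures the fully serialized dag-pb directory block, so per-link protobuf overhead (field tags, length prefixes, `Tsize`) also counts toward the threshold.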

### `Import.UnixFSDAGLayout`

Controls the DAG layout used when chunking files.

Accepted values:

- `balanced` (default): Balanced DAG layout with uniform leaf depth.
- `trickle`: Trickle DAG layout optimized for streaming.

Commands affected: `ipfs add`

Default: `balanced`

Type: `optionalString`
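
For example, `ipfs config Import.UnixFSDAGLayout trickle` makes `ipfs add` build trickle DAGs by default, the same layout the `--trickle` flag selects per invocation.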

## `Version`

Options to configure agent version announced to the swarm, and leveraging
@@ -3881,7 +3921,7 @@ applied with the `--profile` flag to `ipfs init` or with the `ipfs config profile
apply` command. When a profile is applied, a backup of the configuration file
will be created in `$IPFS_PATH`.

Configuration profiles can be applied additively. For example, both the `test-cid-v1` and `lowpower` profiles can be applied one after the other.
Configuration profiles can be applied additively. For example, both the `unixfs-v1-2025` and `lowpower` profiles can be applied one after the other.
The available configuration profiles are listed below. You can also find them
documented in `ipfs config profile --help`.

@@ -4038,42 +4078,35 @@ Disables [Provide](#provide) system (and announcing to Amino DHT).

(Re-)enables [Provide](#provide) system (reverts [`announce-off` profile](#announce-off-profile)).

### `unixfs-v0-2015` profile

Legacy UnixFS import profile for backward-compatible CID generation.
Produces CIDv0 with no raw leaves, sha2-256, 256 KiB chunks, and
link-based HAMT size estimation.

See <https://github.com/ipfs/kubo/blob/master/config/profile.go> for exact [`Import.*`](#import) settings.

> [!NOTE]
> Use only when legacy CIDs are required. For new projects, use [`unixfs-v1-2025`](#unixfs-v1-2025-profile).
>
> See [IPIP-499](https://github.com/ipfs/specs/pull/499) for more details.

### `legacy-cid-v0` profile

Makes UnixFS import (`ipfs add`) produce legacy CIDv0 with no raw leaves, sha2-256 and 256 KiB chunks.
Alias for [`unixfs-v0-2015`](#unixfs-v0-2015-profile) profile.

### `unixfs-v1-2025` profile

Recommended UnixFS import profile for cross-implementation CID determinism.
Uses CIDv1, raw leaves, sha2-256, 1 MiB chunks, 1024 links per file node,
256 HAMT fanout, and block-based size estimation for HAMT threshold.

See <https://github.com/ipfs/kubo/blob/master/config/profile.go> for exact [`Import.*`](#import) settings.

> [!NOTE]
> This profile is provided for legacy users and should not be used for new projects.

### `test-cid-v1` profile

Makes UnixFS import (`ipfs add`) produce modern CIDv1 with raw leaves, sha2-256
and 1 MiB chunks (max 174 links per file, 256 per HAMT node, switch dir to HAMT
above 256KiB).

See <https://github.com/ipfs/kubo/blob/master/config/profile.go> for exact [`Import.*`](#import) settings.

> [!NOTE]
> [`Import.*`](#import) settings applied by this profile MAY change in future release. Provided for testing purposes.
> This profile ensures CID consistency across different IPFS implementations.
>
> Follow [kubo#4143](https://github.com/ipfs/kubo/issues/4143) for more details,
> and provide feedback in [discuss.ipfs.tech/t/should-we-profile-cids](https://discuss.ipfs.tech/t/should-we-profile-cids/18507) or [ipfs/specs#499](https://github.com/ipfs/specs/pull/499).

### `test-cid-v1-wide` profile

Makes UnixFS import (`ipfs add`) produce modern CIDv1 with raw leaves, sha2-256
and 1 MiB chunks and wider file DAGs (max 1024 links per every node type,
switch dir to HAMT above 1MiB).

See <https://github.com/ipfs/kubo/blob/master/config/profile.go> for exact [`Import.*`](#import) settings.

> [!NOTE]
> [`Import.*`](#import) settings applied by this profile MAY change in future release. Provided for testing purposes.
>
> Follow [kubo#4143](https://github.com/ipfs/kubo/issues/4143) for more details,
> and provide feedback in [discuss.ipfs.tech/t/should-we-profile-cids](https://discuss.ipfs.tech/t/should-we-profile-cids/18507) or [ipfs/specs#499](https://github.com/ipfs/specs/pull/499).
> See [IPIP-499](https://github.com/ipfs/specs/pull/499) for more details.

## Security

@@ -7,7 +7,7 @@ go 1.25
replace github.com/ipfs/kubo => ./../../..

require (
	github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981
	github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412
	github.com/ipfs/kubo v0.0.0-00010101000000-000000000000
	github.com/libp2p/go-libp2p v0.47.0
	github.com/multiformats/go-multiaddr v0.16.1
@@ -85,7 +85,7 @@ require (
	github.com/ipfs/go-ds-pebble v0.5.9 // indirect
	github.com/ipfs/go-dsqueue v0.1.2 // indirect
	github.com/ipfs/go-fs-lock v0.1.1 // indirect
	github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483 // indirect
	github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709 // indirect
	github.com/ipfs/go-ipfs-ds-help v1.1.1 // indirect
	github.com/ipfs/go-ipfs-pq v0.0.4 // indirect
	github.com/ipfs/go-ipfs-redirects-file v0.1.2 // indirect

@@ -267,8 +267,8 @@ github.com/ipfs-shipyard/nopfs/ipfs v0.25.0 h1:OqNqsGZPX8zh3eFMO8Lf8EHRRnSGBMqcd
github.com/ipfs-shipyard/nopfs/ipfs v0.25.0/go.mod h1:BxhUdtBgOXg1B+gAPEplkg/GpyTZY+kCMSfsJvvydqU=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981 h1:Q3XjjicNTpok8gD0WwbLYZpmbRoykNTiCLbpj3EjnPc=
github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412 h1:nfRIkMIhetCWD8jw5ya+FY+jn9ii2c+U5gdkmSS4L1Q=
github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.0.3/go.mod h1:4LmD4ZUw0mhO+JSKdpWwrzATiEfM7WWgQ8H5l6P8MVk=
@@ -303,8 +303,8 @@ github.com/ipfs/go-dsqueue v0.1.2 h1:jBMsgvT9Pj9l3cqI0m5jYpW/aWDYkW4Us6EuzrcSGbs
github.com/ipfs/go-dsqueue v0.1.2/go.mod h1:OU94YuMVUIF/ctR7Ysov9PI4gOa2XjPGN9nd8imSv78=
github.com/ipfs/go-fs-lock v0.1.1 h1:TecsP/Uc7WqYYatasreZQiP9EGRy4ZnKoG4yXxR33nw=
github.com/ipfs/go-fs-lock v0.1.1/go.mod h1:2goSXMCw7QfscHmSe09oXiR34DQeUdm+ei+dhonqly0=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483 h1:FnQqL92YxPX08/dcqE4cCSqEzwVGSdj2wprWHX+cUtM=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483/go.mod h1:YmhRbpaLKg40i9Ogj2+L41tJ+8x50fF8u1FJJD/WNhc=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709 h1:0JiurWPnR7ZtjYW8XdfThOcOU5WlVVGQ1JY4FHHgyu8=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709/go.mod h1:yZeTCte5zTH66bbEpLPkSog3/ImppCD00DMP7NjYmys=
github.com/ipfs/go-ipfs-delay v0.0.0-20181109222059-70721b86a9a8/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
github.com/ipfs/go-ipfs-delay v0.0.1 h1:r/UXYyRcddO6thwOnhiznIAiSvxMECGgtv35Xs1IeRQ=
github.com/ipfs/go-ipfs-delay v0.0.1/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=

go.mod
@@ -21,7 +21,7 @@ require (
	github.com/hashicorp/go-version v1.8.0
	github.com/ipfs-shipyard/nopfs v0.0.14
	github.com/ipfs-shipyard/nopfs/ipfs v0.25.0
	github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981
	github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412
	github.com/ipfs/go-block-format v0.2.3
	github.com/ipfs/go-cid v0.6.0
	github.com/ipfs/go-cidutil v0.1.0
@@ -33,7 +33,7 @@ require (
	github.com/ipfs/go-ds-measure v0.2.2
	github.com/ipfs/go-ds-pebble v0.5.9
	github.com/ipfs/go-fs-lock v0.1.1
	github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483
	github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709
	github.com/ipfs/go-ipld-cbor v0.2.1
	github.com/ipfs/go-ipld-format v0.6.3
	github.com/ipfs/go-ipld-git v0.1.1

go.sum
@@ -337,8 +337,8 @@ github.com/ipfs-shipyard/nopfs/ipfs v0.25.0 h1:OqNqsGZPX8zh3eFMO8Lf8EHRRnSGBMqcd
github.com/ipfs-shipyard/nopfs/ipfs v0.25.0/go.mod h1:BxhUdtBgOXg1B+gAPEplkg/GpyTZY+kCMSfsJvvydqU=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981 h1:Q3XjjicNTpok8gD0WwbLYZpmbRoykNTiCLbpj3EjnPc=
github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412 h1:nfRIkMIhetCWD8jw5ya+FY+jn9ii2c+U5gdkmSS4L1Q=
github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.0.3/go.mod h1:4LmD4ZUw0mhO+JSKdpWwrzATiEfM7WWgQ8H5l6P8MVk=
@@ -373,8 +373,8 @@ github.com/ipfs/go-dsqueue v0.1.2 h1:jBMsgvT9Pj9l3cqI0m5jYpW/aWDYkW4Us6EuzrcSGbs
github.com/ipfs/go-dsqueue v0.1.2/go.mod h1:OU94YuMVUIF/ctR7Ysov9PI4gOa2XjPGN9nd8imSv78=
github.com/ipfs/go-fs-lock v0.1.1 h1:TecsP/Uc7WqYYatasreZQiP9EGRy4ZnKoG4yXxR33nw=
github.com/ipfs/go-fs-lock v0.1.1/go.mod h1:2goSXMCw7QfscHmSe09oXiR34DQeUdm+ei+dhonqly0=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483 h1:FnQqL92YxPX08/dcqE4cCSqEzwVGSdj2wprWHX+cUtM=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483/go.mod h1:YmhRbpaLKg40i9Ogj2+L41tJ+8x50fF8u1FJJD/WNhc=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709 h1:0JiurWPnR7ZtjYW8XdfThOcOU5WlVVGQ1JY4FHHgyu8=
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709/go.mod h1:yZeTCte5zTH66bbEpLPkSog3/ImppCD00DMP7NjYmys=
github.com/ipfs/go-ipfs-delay v0.0.0-20181109222059-70721b86a9a8/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
github.com/ipfs/go-ipfs-delay v0.0.1 h1:r/UXYyRcddO6thwOnhiznIAiSvxMECGgtv35Xs1IeRQ=
github.com/ipfs/go-ipfs-delay v0.0.1/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=

@@ -8,7 +8,6 @@ import (
"testing"
"time"

"github.com/dustin/go-humanize"
"github.com/ipfs/kubo/config"
"github.com/ipfs/kubo/test/cli/harness"
"github.com/ipfs/kubo/test/cli/testutils"
@@ -40,11 +39,6 @@ func TestAdd(t *testing.T) {
shortStringCidV1Sha512 = "bafkrgqbqt3gerhas23vuzrapkdeqf4vu2dwxp3srdj6hvg6nhsug2tgyn6mj3u23yx7utftq3i2ckw2fwdh5qmhid5qf3t35yvkc5e5ottlw6"
)

const (
cidV0Length = 34 // cidv0 sha2-256
cidV1Length = 36 // cidv1 sha2-256
)

t.Run("produced cid version: implicit default (CIDv0)", func(t *testing.T) {
t.Parallel()
node := harness.NewT(t).NewNode().Init().StartDaemon()
@@ -166,7 +160,7 @@ func TestAdd(t *testing.T) {
//
// UnixFSChunker=size-262144 (256KiB)
// Import.UnixFSFileMaxLinks=174
node := harness.NewT(t).NewNode().Init("--profile=legacy-cid-v0") // legacy-cid-v0 for determinism across all params
node := harness.NewT(t).NewNode().Init("--profile=unixfs-v0-2015") // unixfs-v0-2015 for determinism across all params
node.UpdateConfig(func(cfg *config.Config) {
cfg.Import.UnixFSChunker = *config.NewOptionalString("size-262144") // 256 KiB chunks
cfg.Import.UnixFSFileMaxLinks = *config.NewOptionalInteger(174) // max 174 per level
@@ -187,266 +181,243 @@ func TestAdd(t *testing.T) {
require.Equal(t, "QmbBftNHWmjSWKLC49dMVrfnY8pjrJYntiAXirFJ7oJrNk", cidStr)
})

t.Run("ipfs init --profile=legacy-cid-v0 sets config that produces legacy CIDv0", func(t *testing.T) {
|
||||
// Profile-specific threshold tests are in cid_profiles_test.go (TestCIDProfiles).
|
||||
// Tests here cover general ipfs add behavior not tied to specific profiles.
|
||||
|
||||
t.Run("ipfs add --hidden", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init("--profile=legacy-cid-v0")
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
cidStr := node.IPFSAddStr(shortString)
|
||||
require.Equal(t, shortStringCidV0, cidStr)
|
||||
})
|
||||
// Helper to create test directory with hidden file
|
||||
setupTestDir := func(t *testing.T, node *harness.Node) string {
|
||||
testDir, err := os.MkdirTemp(node.Dir, "hidden-test")
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, os.WriteFile(filepath.Join(testDir, "visible.txt"), []byte("visible"), 0o644))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(testDir, ".hidden"), []byte("hidden"), 0o644))
|
||||
return testDir
|
||||
}
|
||||
|
||||
t.Run("ipfs init --profile=legacy-cid-v0 applies UnixFSChunker=size-262144 and UnixFSFileMaxLinks", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
seed := "v0-seed"
|
||||
profile := "--profile=legacy-cid-v0"
|
||||
|
||||
t.Run("under UnixFSFileMaxLinks=174", func(t *testing.T) {
|
||||
t.Run("default excludes hidden files", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
// Add 44544KiB file:
|
||||
// 174 * 256KiB should fit in single DAG layer
|
||||
cidStr := node.IPFSAddDeterministic("44544KiB", seed)
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 174, len(root.Links))
|
||||
// expect same CID every time
|
||||
require.Equal(t, "QmUbBALi174SnogsUzLpYbD4xPiBSFANF4iztWCsHbMKh2", cidStr)
|
||||
|
||||
testDir := setupTestDir(t, node)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", testDir).Stdout.Trimmed()
|
||||
lsOutput := node.IPFS("ls", cidStr).Stdout.Trimmed()
|
||||
require.Contains(t, lsOutput, "visible.txt")
|
||||
require.NotContains(t, lsOutput, ".hidden")
|
||||
})
|
||||
|
||||
t.Run("above UnixFSFileMaxLinks=174", func(t *testing.T) {
|
||||
t.Run("--hidden includes hidden files", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
// add 256KiB (one more block), it should force rebalancing DAG and moving most to second layer
|
||||
cidStr := node.IPFSAddDeterministic("44800KiB", seed)
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 2, len(root.Links))
|
||||
// expect same CID every time
|
||||
require.Equal(t, "QmepeWtdmS1hHXx1oZXsPUv6bMrfRRKfZcoPPU4eEfjnbf", cidStr)
|
||||
|
||||
testDir := setupTestDir(t, node)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", "--hidden", testDir).Stdout.Trimmed()
|
||||
lsOutput := node.IPFS("ls", cidStr).Stdout.Trimmed()
|
||||
require.Contains(t, lsOutput, "visible.txt")
|
||||
require.Contains(t, lsOutput, ".hidden")
|
||||
})
|
||||
|
||||
t.Run("-H includes hidden files", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
testDir := setupTestDir(t, node)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", "-H", testDir).Stdout.Trimmed()
|
||||
lsOutput := node.IPFS("ls", cidStr).Stdout.Trimmed()
|
||||
require.Contains(t, lsOutput, "visible.txt")
|
||||
require.Contains(t, lsOutput, ".hidden")
|
||||
})
|
||||
})
|
||||
|
||||
t.Run("ipfs init --profile=legacy-cid-v0 applies UnixFSHAMTDirectoryMaxFanout=256 and UnixFSHAMTDirectorySizeThreshold=256KiB", func(t *testing.T) {
|
||||
t.Run("ipfs add --empty-dirs", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
seed := "hamt-legacy-cid-v0"
|
||||
profile := "--profile=legacy-cid-v0"
|
||||
|
||||
t.Run("under UnixFSHAMTDirectorySizeThreshold=256KiB", func(t *testing.T) {
|
||||
// Helper to create test directory with empty subdirectory
|
||||
setupTestDir := func(t *testing.T, node *harness.Node) string {
|
||||
testDir, err := os.MkdirTemp(node.Dir, "empty-dirs-test")
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, os.Mkdir(filepath.Join(testDir, "empty-subdir"), 0o755))
|
||||
require.NoError(t, os.WriteFile(filepath.Join(testDir, "file.txt"), []byte("content"), 0o644))
|
||||
return testDir
|
||||
}
|
||||
|
||||
t.Run("default includes empty directories", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
randDir, err := os.MkdirTemp(node.Dir, seed)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create directory with a lot of files that have filenames which together take close to UnixFSHAMTDirectorySizeThreshold in total
|
||||
err = createDirectoryForHAMT(randDir, cidV0Length, "255KiB", seed)
|
||||
require.NoError(t, err)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
|
||||
|
||||
// Confirm the number of links is more than UnixFSHAMTDirectorySizeThreshold (indicating regular "basic" directory"
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 903, len(root.Links))
|
||||
testDir := setupTestDir(t, node)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", testDir).Stdout.Trimmed()
|
||||
require.Contains(t, node.IPFS("ls", cidStr).Stdout.Trimmed(), "empty-subdir")
|
||||
})
|
||||
|
||||
t.Run("above UnixFSHAMTDirectorySizeThreshold=256KiB", func(t *testing.T) {
|
||||
t.Run("--empty-dirs=true includes empty directories", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
randDir, err := os.MkdirTemp(node.Dir, seed)
|
||||
require.NoError(t, err)
|
||||
testDir := setupTestDir(t, node)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", "--empty-dirs=true", testDir).Stdout.Trimmed()
|
||||
require.Contains(t, node.IPFS("ls", cidStr).Stdout.Trimmed(), "empty-subdir")
|
||||
})
|
||||
|
||||
// Create directory with a lot of files that have filenames which together take close to UnixFSHAMTDirectorySizeThreshold in total
|
||||
err = createDirectoryForHAMT(randDir, cidV0Length, "257KiB", seed)
|
||||
require.NoError(t, err)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
|
||||
t.Run("--empty-dirs=false excludes empty directories", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
// Confirm this time, the number of links is less than UnixFSHAMTDirectorySizeThreshold
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 252, len(root.Links))
|
||||
testDir := setupTestDir(t, node)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", "--empty-dirs=false", testDir).Stdout.Trimmed()
|
||||
lsOutput := node.IPFS("ls", cidStr).Stdout.Trimmed()
|
||||
require.NotContains(t, lsOutput, "empty-subdir")
|
||||
require.Contains(t, lsOutput, "file.txt")
|
||||
})
|
||||
})
|
||||
|
||||
t.Run("ipfs init --profile=test-cid-v1 produces CIDv1 with raw leaves", func(t *testing.T) {
|
||||
t.Run("ipfs add symlink handling", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init("--profile=test-cid-v1")
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
cidStr := node.IPFSAddStr(shortString)
|
||||
require.Equal(t, shortStringCidV1, cidStr) // raw leaf
|
||||
})
|
||||
// Helper to create test directory structure:
|
||||
// testDir/
|
||||
// target.txt (file with "target content")
|
||||
// link.txt -> target.txt (symlink at top level)
|
||||
// subdir/
|
||||
// subsubdir/
|
||||
// nested-target.txt (file with "nested content")
|
||||
// nested-link.txt -> nested-target.txt (symlink in sub-sub directory)
|
||||
setupTestDir := func(t *testing.T, node *harness.Node) string {
|
||||
testDir, err := os.MkdirTemp(node.Dir, "deref-symlinks-test")
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("ipfs init --profile=test-cid-v1 applies UnixFSChunker=size-1048576", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
seed := "v1-seed"
|
||||
profile := "--profile=test-cid-v1"
|
||||
// Top-level file and symlink
|
||||
targetFile := filepath.Join(testDir, "target.txt")
|
||||
require.NoError(t, os.WriteFile(targetFile, []byte("target content"), 0o644))
|
||||
require.NoError(t, os.Symlink("target.txt", filepath.Join(testDir, "link.txt")))
|
||||
|
||||
t.Run("under UnixFSFileMaxLinks=174", func(t *testing.T) {
|
||||
// Nested file and symlink in sub-sub directory
|
||||
subsubdir := filepath.Join(testDir, "subdir", "subsubdir")
|
||||
require.NoError(t, os.MkdirAll(subsubdir, 0o755))
|
||||
nestedTarget := filepath.Join(subsubdir, "nested-target.txt")
|
||||
require.NoError(t, os.WriteFile(nestedTarget, []byte("nested content"), 0o644))
|
||||
require.NoError(t, os.Symlink("nested-target.txt", filepath.Join(subsubdir, "nested-link.txt")))
|
||||
|
||||
return testDir
|
||||
}
|
||||
|
||||
t.Run("default preserves symlinks", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
// Add 174MiB file:
|
||||
// 174 * 1MiB should fit in single layer
|
||||
cidStr := node.IPFSAddDeterministic("174MiB", seed)
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 174, len(root.Links))
|
||||
// expect same CID every time
|
||||
require.Equal(t, "bafybeigwduxcf2aawppv3isnfeshnimkyplvw3hthxjhr2bdeje4tdaicu", cidStr)
|
||||
|
||||
testDir := setupTestDir(t, node)
|
||||
|
||||
// Add directory with symlink (default: preserve)
|
||||
dirCID := node.IPFS("add", "-r", "-Q", testDir).Stdout.Trimmed()
|
||||
|
||||
// Get and verify symlinks are preserved
|
||||
outDir, err := os.MkdirTemp(node.Dir, "symlink-get-out")
|
||||
require.NoError(t, err)
|
||||
node.IPFS("get", "-o", outDir, dirCID)
|
||||
|
||||
// Check top-level symlink is preserved
|
||||
linkPath := filepath.Join(outDir, "link.txt")
|
||||
fi, err := os.Lstat(linkPath)
|
||||
require.NoError(t, err)
|
||||
require.True(t, fi.Mode()&os.ModeSymlink != 0, "link.txt should be a symlink")
|
||||
target, err := os.Readlink(linkPath)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "target.txt", target)
|
||||
|
||||
// Check nested symlink is preserved
|
||||
nestedLinkPath := filepath.Join(outDir, "subdir", "subsubdir", "nested-link.txt")
|
||||
fi, err = os.Lstat(nestedLinkPath)
|
||||
require.NoError(t, err)
|
||||
require.True(t, fi.Mode()&os.ModeSymlink != 0, "nested-link.txt should be a symlink")
|
||||
})
|
||||
|
||||
t.Run("above UnixFSFileMaxLinks=174", func(t *testing.T) {
|
||||
// --dereference-args is deprecated but still works for backwards compatibility.
|
||||
// It only resolves symlinks passed as CLI arguments, NOT symlinks found
|
||||
// during directory traversal. Use --dereference-symlinks instead.
|
||||
t.Run("--dereference-args resolves CLI args only", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
// add +1MiB (one more block), it should force rebalancing DAG and moving most to second layer
|
||||
cidStr := node.IPFSAddDeterministic("175MiB", seed)
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 2, len(root.Links))
|
||||
// expect same CID every time
|
||||
require.Equal(t, "bafybeidhd7lo2n2v7lta5yamob3xwhbxcczmmtmhquwhjesi35jntf7mpu", cidStr)
|
||||
})
|
||||
})
|
||||
|
||||
t.Run("ipfs init --profile=test-cid-v1 applies UnixFSHAMTDirectoryMaxFanout=256 and UnixFSHAMTDirectorySizeThreshold=256KiB", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
seed := "hamt-cid-v1"
|
||||
profile := "--profile=test-cid-v1"
|
||||
|
||||
t.Run("under UnixFSHAMTDirectorySizeThreshold=256KiB", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
randDir, err := os.MkdirTemp(node.Dir, seed)
|
||||
require.NoError(t, err)
|
||||
testDir := setupTestDir(t, node)
|
||||
symlinkPath := filepath.Join(testDir, "link.txt")
|
||||
targetPath := filepath.Join(testDir, "target.txt")
|
||||
|
||||
// Create directory with a lot of files that have filenames which together take close to UnixFSHAMTDirectorySizeThreshold in total
|
||||
err = createDirectoryForHAMT(randDir, cidV1Length, "255KiB", seed)
|
||||
require.NoError(t, err)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
|
||||
symlinkCID := node.IPFS("add", "-Q", "--dereference-args", symlinkPath).Stdout.Trimmed()
|
||||
targetCID := node.IPFS("add", "-Q", targetPath).Stdout.Trimmed()
|
||||
|
||||
// Confirm the number of links is more than UnixFSHAMTDirectoryMaxFanout (indicating regular "basic" directory"
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 897, len(root.Links))
|
||||
// CIDs should match because --dereference-args resolves the symlink
|
||||
require.Equal(t, targetCID, symlinkCID,
|
||||
"--dereference-args should resolve CLI arg symlink to target content")
|
||||
|
||||
// Now add the directory recursively with --dereference-args
|
||||
// Nested symlinks should NOT be resolved (only CLI args are resolved)
|
||||
dirCID := node.IPFS("add", "-r", "-Q", "--dereference-args", testDir).Stdout.Trimmed()
|
||||
|
||||
outDir, err := os.MkdirTemp(node.Dir, "deref-args-out")
|
||||
require.NoError(t, err)
|
||||
node.IPFS("get", "-o", outDir, dirCID)
|
||||
|
||||
// Nested symlink should still be a symlink (not dereferenced)
|
||||
nestedLinkPath := filepath.Join(outDir, "subdir", "subsubdir", "nested-link.txt")
|
||||
fi, err := os.Lstat(nestedLinkPath)
|
||||
require.NoError(t, err)
|
||||
require.True(t, fi.Mode()&os.ModeSymlink != 0,
|
||||
"--dereference-args should NOT resolve nested symlinks, only CLI args")
|
||||
})
|
||||
|
||||
t.Run("above UnixFSHAMTDirectorySizeThreshold=256KiB", func(t *testing.T) {
|
||||
// --dereference-symlinks resolves ALL symlinks: both CLI arguments AND
|
||||
// symlinks found during directory traversal. This is a superset of
|
||||
// the deprecated --dereference-args behavior.
|
||||
t.Run("--dereference-symlinks resolves all symlinks", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
node := harness.NewT(t).NewNode().Init().StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
randDir, err := os.MkdirTemp(node.Dir, seed)
|
||||
testDir := setupTestDir(t, node)
|
||||
symlinkPath := filepath.Join(testDir, "link.txt")
|
||||
targetPath := filepath.Join(testDir, "target.txt")
|
||||
|
||||
symlinkCID := node.IPFS("add", "-Q", "--dereference-symlinks", symlinkPath).Stdout.Trimmed()
|
||||
targetCID := node.IPFS("add", "-Q", targetPath).Stdout.Trimmed()
|
||||
|
||||
require.Equal(t, targetCID, symlinkCID,
|
||||
"--dereference-symlinks should resolve CLI arg symlink (like --dereference-args)")
|
||||
|
||||
// Test 2: Nested symlinks in sub-sub directory are ALSO resolved
|
||||
dirCID := node.IPFS("add", "-r", "-Q", "--dereference-symlinks", testDir).Stdout.Trimmed()
|
||||
|
||||
outDir, err := os.MkdirTemp(node.Dir, "deref-symlinks-out")
|
||||
require.NoError(t, err)
|
||||
node.IPFS("get", "-o", outDir, dirCID)
|
||||
|
||||
// Create directory with a lot of files that have filenames which together take close to UnixFSHAMTDirectorySizeThreshold in total
|
||||
err = createDirectoryForHAMT(randDir, cidV1Length, "257KiB", seed)
|
||||
// Top-level symlink should be dereferenced to regular file
|
||||
linkPath := filepath.Join(outDir, "link.txt")
|
||||
fi, err := os.Lstat(linkPath)
|
||||
require.NoError(t, err)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
|
||||
|
||||
// Confirm this time, the number of links is less than UnixFSHAMTDirectoryMaxFanout
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 252, len(root.Links))
|
||||
})
|
||||
})
|
||||
|
||||
t.Run("ipfs init --profile=test-cid-v1-wide applies UnixFSChunker=size-1048576 and UnixFSFileMaxLinks=1024", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
seed := "v1-seed-1024"
|
||||
profile := "--profile=test-cid-v1-wide"
|
||||
|
||||
t.Run("under UnixFSFileMaxLinks=1024", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
// Add 174MiB file:
|
||||
// 1024 * 1MiB should fit in single layer
|
||||
cidStr := node.IPFSAddDeterministic("1024MiB", seed)
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 1024, len(root.Links))
|
||||
// expect same CID every time
|
||||
require.Equal(t, "bafybeiej5w63ir64oxgkr5htqmlerh5k2rqflurn2howimexrlkae64xru", cidStr)
|
||||
})
|
||||
|
||||
t.Run("above UnixFSFileMaxLinks=1024", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
// add +1MiB (one more block), it should force rebalancing DAG and moving most to second layer
|
||||
cidStr := node.IPFSAddDeterministic("1025MiB", seed)
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 2, len(root.Links))
|
||||
// expect same CID every time
|
||||
require.Equal(t, "bafybeieilp2qx24pe76hxrxe6bpef5meuxto3kj5dd6mhb5kplfeglskdm", cidStr)
|
||||
})
|
||||
})
|
||||
|
||||
t.Run("ipfs init --profile=test-cid-v1-wide applies UnixFSHAMTDirectoryMaxFanout=256 and UnixFSHAMTDirectorySizeThreshold=1MiB", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
seed := "hamt-cid-v1"
|
||||
profile := "--profile=test-cid-v1-wide"
|
||||
|
||||
t.Run("under UnixFSHAMTDirectorySizeThreshold=1MiB", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
randDir, err := os.MkdirTemp(node.Dir, seed)
|
||||
require.False(t, fi.Mode()&os.ModeSymlink != 0,
|
||||
"link.txt should be dereferenced to regular file")
|
||||
content, err := os.ReadFile(linkPath)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "target content", string(content))
|
||||
|
||||
// Create directory with a lot of files that have filenames which together take close to UnixFSHAMTDirectorySizeThreshold in total
|
||||
err = createDirectoryForHAMT(randDir, cidV1Length, "1023KiB", seed)
|
||||
// Nested symlink in sub-sub directory should ALSO be dereferenced
|
||||
nestedLinkPath := filepath.Join(outDir, "subdir", "subsubdir", "nested-link.txt")
|
||||
fi, err = os.Lstat(nestedLinkPath)
|
||||
require.NoError(t, err)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
|
||||
|
||||
// Confirm the number of links is more than UnixFSHAMTDirectoryMaxFanout (indicating regular "basic" directory"
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 3599, len(root.Links))
|
||||
})
|
||||
|
||||
t.Run("above UnixFSHAMTDirectorySizeThreshold=1MiB", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
node := harness.NewT(t).NewNode().Init(profile)
|
||||
node.StartDaemon()
|
||||
defer node.StopDaemon()
|
||||
|
||||
randDir, err := os.MkdirTemp(node.Dir, seed)
|
||||
require.False(t, fi.Mode()&os.ModeSymlink != 0,
|
||||
"nested-link.txt should be dereferenced (--dereference-symlinks resolves ALL symlinks)")
|
||||
nestedContent, err := os.ReadFile(nestedLinkPath)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create directory with a lot of files that have filenames which together take close to UnixFSHAMTDirectorySizeThreshold in total
|
||||
err = createDirectoryForHAMT(randDir, cidV1Length, "1025KiB", seed)
|
||||
require.NoError(t, err)
|
||||
cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
|
||||
|
||||
// Confirm this time, the number of links is less than UnixFSHAMTDirectoryMaxFanout
|
||||
root, err := node.InspectPBNode(cidStr)
|
||||
assert.NoError(t, err)
|
||||
require.Equal(t, 992, len(root.Links))
|
||||
require.Equal(t, "nested content", string(nestedContent))
|
||||
})
|
||||
})
|
||||
}
|
||||
@@ -627,30 +598,46 @@ func TestAddFastProvide(t *testing.T) {
})
}

// createDirectoryForHAMT aims to create enough files with long names for the directory block to be close to the UnixFSHAMTDirectorySizeThreshold.
// The calculation is based on boxo's HAMTShardingSize and sizeBelowThreshold which calculates ballpark size of the block
// by adding length of link names and the binary cid length.
// See https://github.com/ipfs/boxo/blob/6c5a07602aed248acc86598f30ab61923a54a83e/ipld/unixfs/io/directory.go#L491
func createDirectoryForHAMT(dirPath string, cidLength int, unixfsNodeSizeTarget, seed string) error {
hamtThreshold, err := humanize.ParseBytes(unixfsNodeSizeTarget)
if err != nil {
return err
}
// createDirectoryForHAMTLinksEstimation creates a directory with the specified number
// of files for testing links-based size estimation (size = sum of nameLen + cidLen).
// Used by legacy profiles (unixfs-v0-2015).
//
// The lastNameLen parameter allows the last file to have a different name length,
// enabling exact +1 byte threshold tests.
func createDirectoryForHAMTLinksEstimation(dirPath string, numFiles, nameLen, lastNameLen int, seed string) error {
return createDeterministicFiles(dirPath, numFiles, nameLen, lastNameLen, seed)
}

// Calculate how many files with long filenames are needed to hit UnixFSHAMTDirectorySizeThreshold
nameLen := 255 // max that works across windows/macos/linux
// createDirectoryForHAMTBlockEstimation creates a directory with the specified number
// of files for testing block-based size estimation (LinkSerializedSize with protobuf overhead).
// Used by modern profiles (unixfs-v1-2025).
//
// The lastNameLen parameter allows the last file to have a different name length,
// enabling exact +1 byte threshold tests.
func createDirectoryForHAMTBlockEstimation(dirPath string, numFiles, nameLen, lastNameLen int, seed string) error {
return createDeterministicFiles(dirPath, numFiles, nameLen, lastNameLen, seed)
}

// createDeterministicFiles creates numFiles files with deterministic names.
// Files 0 to numFiles-2 have nameLen characters, and the last file has lastNameLen characters.
// Each file contains "x" (1 byte) for non-zero tsize in directory links.
func createDeterministicFiles(dirPath string, numFiles, nameLen, lastNameLen int, seed string) error {
alphabetLen := len(testutils.AlphabetEasy)
numFiles := int(hamtThreshold) / (nameLen + cidLength)

// Deterministic pseudo-random bytes for static CID
drand, err := testutils.DeterministicRandomReader(unixfsNodeSizeTarget, seed)
// Deterministic pseudo-random bytes for static filenames
drand, err := testutils.DeterministicRandomReader("1MiB", seed)
if err != nil {
return err
}

// Create necessary files in a single, flat directory
for i := 0; i < numFiles; i++ {
buf := make([]byte, nameLen)
// Use lastNameLen for the final file
currentNameLen := nameLen
if i == numFiles-1 {
currentNameLen = lastNameLen
}

buf := make([]byte, currentNameLen)
_, err := io.ReadFull(drand, buf)
if err != nil {
return err
@@ -658,21 +645,17 @@ func createDirectoryForHAMT(dirPath string, cidLength int, unixfsNodeSizeTarget,

// Convert deterministic pseudo-random bytes to ASCII
var sb strings.Builder

for _, b := range buf {
// Map byte to printable ASCII range (33-126)
char := testutils.AlphabetEasy[int(b)%alphabetLen]
sb.WriteRune(char)
}
filename := sb.String()[:nameLen]
filename := sb.String()[:currentNameLen]
filePath := filepath.Join(dirPath, filename)

// Create empty file
f, err := os.Create(filePath)
if err != nil {
// Create file with 1-byte content for non-zero tsize
if err := os.WriteFile(filePath, []byte("x"), 0o644); err != nil {
return err
}
f.Close()
}
return nil
}
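
// NOTE (editorial sketch): with links-based estimation the directory size is
// numFiles * (nameLen + cidLen), so the unixfs-v0-2015 vectors defined in
// cid_profiles_test.go land exactly on the 256 KiB boundary. For example
// (hypothetical values mirroring those vectors):
//
//	err := createDirectoryForHAMTLinksEstimation(dir, 4096, 30, 30, "v0-seed")
//	// 4096 * (30 + 34) = 262144 bytes -> stays a basic directory (">" comparison)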

test/cli/cid_profiles_test.go (new file)
@@ -0,0 +1,724 @@
package cli

import (
"encoding/json"
"os"
"path/filepath"
"strings"
"testing"

ft "github.com/ipfs/boxo/ipld/unixfs"
"github.com/ipfs/kubo/test/cli/harness"
"github.com/ipfs/kubo/test/cli/testutils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

// cidProfileExpectations defines expected behaviors for a UnixFS import profile.
// This allows DRY testing of multiple profiles with the same test logic.
//
// Each profile is tested against threshold boundaries to verify:
// - CID format (version, hash function, raw leaves vs dag-pb wrapped)
// - File chunking (UnixFSChunker size threshold)
// - DAG structure (UnixFSFileMaxLinks rebalancing threshold)
// - Directory sharding (HAMTThreshold for flat vs HAMT directories)
type cidProfileExpectations struct {
// Profile identification
Name string // canonical profile name from IPIP-499
ProfileArgs []string // args to pass to ipfs init (empty for default behavior)

// CID format expectations
CIDVersion int // 0 or 1
HashFunc string // e.g., "sha2-256"
RawLeaves bool // true = raw codec for small files, false = dag-pb wrapped

// File chunking expectations (UnixFSChunker config)
ChunkSize int // chunk size in bytes (e.g., 262144 for 256KiB, 1048576 for 1MiB)
ChunkSizeHuman string // human-readable chunk size (e.g., "256KiB", "1MiB")
FileMaxLinks int // max links before DAG rebalancing (UnixFSFileMaxLinks config)

// HAMT directory sharding expectations (UnixFSHAMTDirectory* config).
// Threshold behavior: boxo converts to HAMT when size > HAMTThreshold (not >=).
// This means a directory exactly at the threshold stays as a basic (flat) directory.
HAMTFanout int // max links per HAMT shard bucket (256)
HAMTThreshold int // sharding threshold in bytes (262144 = 256 KiB)
HAMTSizeEstimation string // "block" (protobuf size) or "links" (legacy name+cid)

// Test vector parameters for threshold boundary tests.
// - DirBasic: size == threshold (stays basic)
// - DirHAMT: size > threshold (converts to HAMT)
// For block estimation, last filename length is adjusted to hit exact thresholds.
DirBasicNameLen int // filename length for basic directory (files 0 to N-2)
DirBasicLastNameLen int // filename length for last file (0 = same as DirBasicNameLen)
DirBasicFiles int // file count for basic directory (at exact threshold)
DirHAMTNameLen int // filename length for HAMT directory (files 0 to N-2)
DirHAMTLastNameLen int // filename length for last file (0 = same as DirHAMTNameLen)
DirHAMTFiles int // total file count for HAMT directory (over threshold)

// Expected deterministic CIDs for test vectors.
// These serve as regression tests to detect unintended changes in CID generation.

// SmallFileCID is the deterministic CID for "hello world" string.
// Tests basic CID format (version, codec, hash).
SmallFileCID string

// FileAtChunkSizeCID is the deterministic CID for a file exactly at chunk size.
// This file fits in a single block with no links:
// - v0-2015: dag-pb wrapped TFile node (CIDv0)
// - v1-2025: raw leaf block (CIDv1)
FileAtChunkSizeCID string

// FileOverChunkSizeCID is the deterministic CID for a file 1 byte over chunk size.
// This file requires 2 chunks, producing a root dag-pb node with 2 links:
// - v0-2015: links point to dag-pb wrapped TFile leaf nodes
// - v1-2025: links point to raw leaf blocks
FileOverChunkSizeCID string

// FileAtMaxLinksCID is the deterministic CID for a file at UnixFSFileMaxLinks threshold.
// File size = maxLinks * chunkSize, producing a single-layer DAG with exactly maxLinks children.
FileAtMaxLinksCID string

// FileOverMaxLinksCID is the deterministic CID for a file 1 byte over max links threshold.
// The +1 byte requires an additional chunk, forcing DAG rebalancing to 2 layers.
FileOverMaxLinksCID string

// DirBasicCID is the deterministic CID for a directory exactly at HAMTThreshold.
// With > comparison (not >=), directory at exact threshold stays as basic (flat) directory.
DirBasicCID string

// DirHAMTCID is the deterministic CID for a directory 1 byte over HAMTThreshold.
// Crossing the threshold converts the directory to a HAMT sharded structure.
DirHAMTCID string
}

// unixfsV02015 is the legacy profile for backward-compatible CID generation.
// Alias: legacy-cid-v0
var unixfsV02015 = cidProfileExpectations{
Name: "unixfs-v0-2015",
ProfileArgs: []string{"--profile=unixfs-v0-2015"},

CIDVersion: 0,
HashFunc: "sha2-256",
RawLeaves: false,

ChunkSize: 262144, // 256 KiB
ChunkSizeHuman: "256KiB",
FileMaxLinks: 174,

HAMTFanout: 256,
HAMTThreshold: 262144, // 256 KiB
HAMTSizeEstimation: "links",
DirBasicNameLen: 30, // 4096 * (30 + 34) = 262144 exactly at threshold
DirBasicFiles: 4096, // 4096 * 64 = 262144 (stays basic with >)
DirHAMTNameLen: 31, // 4033 * (31 + 34) = 262145 exactly +1 over threshold
DirHAMTLastNameLen: 0, // 0 = same as DirHAMTNameLen (uniform filenames)
DirHAMTFiles: 4033, // 4033 * 65 = 262145 (becomes HAMT)

SmallFileCID: "Qmf412jQZiuVUtdgnB36FXFX7xg5V6KEbSJ4dpQuhkLyfD", // "hello world" dag-pb wrapped
FileAtChunkSizeCID: "QmWmRj3dFDZdb6ABvbmKhEL6TmPbAfBZ1t5BxsEyJrcZhE", // 262144 bytes with seed "chunk-v0-seed"
FileOverChunkSizeCID: "QmYyLxtzZyW22zpoVAtKANLRHpDjZtNeDjQdJrcQNWoRkJ", // 262145 bytes with seed "chunk-v0-seed"
FileAtMaxLinksCID: "QmUbBALi174SnogsUzLpYbD4xPiBSFANF4iztWCsHbMKh2", // 174*256KiB bytes with seed "v0-seed"
FileOverMaxLinksCID: "QmV81WL765sC8DXsRhE5fJv2rwhS4icHRaf3J9Zk5FdRnW", // 174*256KiB+1 bytes with seed "v0-seed"
DirBasicCID: "QmX5GtRk3TSSEHtdrykgqm4eqMEn3n2XhfkFAis5fjyZmN", // 4096 files at threshold
DirHAMTCID: "QmeMiJzmhpJAUgynAcxTQYek5PPKgdv3qEvFsdV3XpVnvP", // 4033 files +1 over threshold
}

// unixfsV12025 is the recommended profile for cross-implementation CID determinism.
var unixfsV12025 = cidProfileExpectations{
Name: "unixfs-v1-2025",
ProfileArgs: []string{"--profile=unixfs-v1-2025"},

CIDVersion: 1,
HashFunc: "sha2-256",
RawLeaves: true,

ChunkSize: 1048576, // 1 MiB
ChunkSizeHuman: "1MiB",
FileMaxLinks: 1024,

HAMTFanout: 256,
HAMTThreshold: 262144, // 256 KiB
HAMTSizeEstimation: "block",
// Block size = numFiles * linkSize + 4 bytes overhead
// LinkSerializedSize(11, 36, 1) = 55, LinkSerializedSize(21, 36, 1) = 65, LinkSerializedSize(22, 36, 1) = 66
DirBasicNameLen: 11, // 4765 files * 55 bytes
DirBasicLastNameLen: 21, // last file: 65 bytes; total: 4765*55 + 65 + 4 = 262144 (at threshold)
DirBasicFiles: 4766, // stays basic with > comparison
DirHAMTNameLen: 11, // 4765 files * 55 bytes
DirHAMTLastNameLen: 22, // last file: 66 bytes; total: 4765*55 + 66 + 4 = 262145 (+1 over threshold)
DirHAMTFiles: 4766, // becomes HAMT

SmallFileCID: "bafkreifzjut3te2nhyekklss27nh3k72ysco7y32koao5eei66wof36n5e", // "hello world" raw leaf
FileAtChunkSizeCID: "bafkreiacndfy443ter6qr2tmbbdhadvxxheowwf75s6zehscklu6ezxmta", // 1048576 bytes with seed "chunk-v1-seed"
FileOverChunkSizeCID: "bafybeigmix7t42i6jacydtquhet7srwvgpizfg7gjbq7627d35mjomtu64", // 1048577 bytes with seed "chunk-v1-seed"
FileAtMaxLinksCID: "bafybeihmf37wcuvtx4hpu7he5zl5qaf2ineo2lqlfrapokkm5zzw7zyhvm", // 1024*1MiB bytes with seed "v1-2025-seed"
FileOverMaxLinksCID: "bafybeibdsi225ugbkmpbdohnxioyab6jsqrmkts3twhpvfnzp77xtzpyhe", // 1024*1MiB+1 bytes with seed "v1-2025-seed"
DirBasicCID: "bafybeic3h7rwruealwxkacabdy45jivq2crwz6bufb5ljwupn36gicplx4", // 4766 files at 262144 bytes (threshold)
DirHAMTCID: "bafybeiegvuterwurhdtkikfhbxcldohmxp566vpjdofhzmnhv6o4freidu", // 4766 files at 262145 bytes (+1 over)
}
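
// NOTE (editorial sketch): the LinkSerializedSize arithmetic cited in the
// comments above can be approximated as follows. This is a hypothetical
// helper for illustration, not boxo's actual implementation; it assumes all
// varints (length prefixes, Tsize) fit in a single byte.
func linkSerializedSizeSketch(nameLen, cidLen int) int {
	hash := 1 + 1 + cidLen  // PBLink.Hash: field tag + length prefix + CID bytes
	name := 1 + 1 + nameLen // PBLink.Name: field tag + length prefix + name bytes
	tsize := 1 + 1          // PBLink.Tsize: field tag + 1-byte varint (each file holds 1 byte)
	return 2 + hash + name + tsize // enclosing PBNode.Links field tag + length prefix
}

// linkSerializedSizeSketch(11, 36) == 55, (21, 36) == 65, (22, 36) == 66, so
// 4765*55 + 65 + 4 == 262144 (at threshold) and 4765*55 + 66 + 4 == 262145 (+1 over),
// matching the DirBasic/DirHAMT vectors above.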
|
||||
|
||||
// defaultProfile points to the profile that matches Kubo's implicit default behavior.
|
||||
// Today this is unixfs-v0-2015. When Kubo changes defaults, update this pointer.
|
||||
var defaultProfile = unixfsV02015
|
||||
|
||||
const (
|
||||
cidV0Length = 34 // CIDv0 sha2-256
|
||||
cidV1Length = 36 // CIDv1 sha2-256
|
||||
)
|
||||
|
||||
// TestCIDProfiles generates deterministic test vectors for CID profile verification.
|
||||
// Set CID_PROFILES_CAR_OUTPUT environment variable to export CAR files.
|
||||
// Example: CID_PROFILES_CAR_OUTPUT=/tmp/cid-profiles go test -run TestCIDProfiles -v
|
||||
func TestCIDProfiles(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
carOutputDir := os.Getenv("CID_PROFILES_CAR_OUTPUT")
|
||||
exportCARs := carOutputDir != ""
|
||||
if exportCARs {
|
||||
if err := os.MkdirAll(carOutputDir, 0o755); err != nil {
|
			t.Fatalf("failed to create CAR output directory: %v", err)
		}
		t.Logf("CAR export enabled, writing to: %s", carOutputDir)
	}

	// Test both IPIP-499 profiles
	for _, profile := range []cidProfileExpectations{unixfsV02015, unixfsV12025} {
		t.Run(profile.Name, func(t *testing.T) {
			t.Parallel()
			runProfileTests(t, profile, carOutputDir, exportCARs)
		})
	}

	// Test default behavior (no profile specified)
	t.Run("default", func(t *testing.T) {
		t.Parallel()
		// Default behavior should match defaultProfile (currently unixfs-v0-2015)
		defaultExp := defaultProfile
		defaultExp.Name = "default"
		defaultExp.ProfileArgs = nil // no profile args = default behavior
		runProfileTests(t, defaultExp, carOutputDir, exportCARs)
	})
}

// runProfileTests runs all test vectors for a given profile.
// Tests verify threshold behaviors for:
// - Small files (CID format verification)
// - UnixFSChunker threshold (single block vs multi-block)
// - UnixFSFileMaxLinks threshold (single-layer vs rebalanced DAG)
// - HAMTThreshold (basic flat directory vs HAMT sharded)
func runProfileTests(t *testing.T, exp cidProfileExpectations, carOutputDir string, exportCARs bool) {
	cidLen := cidV0Length
	if exp.CIDVersion == 1 {
		cidLen = cidV1Length
	}

	// Test: small file produces correct CID format
	// Verifies the profile sets the expected CID version, hash function, and leaf encoding.
	t.Run("small file produces correct CID format", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// Use "hello world" for determinism
		cidStr := node.IPFSAddStr("hello world")

		// Verify CID version (v0 starts with "Qm", v1 with "b")
		verifyCIDVersion(t, node, cidStr, exp.CIDVersion)

		// Verify hash function (sha2-256 for both profiles)
		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		// Verify raw leaves vs dag-pb wrapped
		// - v0-2015: dag-pb codec (wrapped)
		// - v1-2025: raw codec (raw leaves)
		verifyRawLeaves(t, node, cidStr, exp.RawLeaves)

		// Verify deterministic CID matches expected value
		if exp.SmallFileCID != "" {
			require.Equal(t, exp.SmallFileCID, cidStr, "expected deterministic CID for small file")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_small-file.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})

	// Test: file at UnixFSChunker threshold (single block)
	// A file exactly at chunk size fits in one block with no links.
	// - v0-2015 (256KiB): produces dag-pb wrapped TFile node
	// - v1-2025 (1MiB): produces raw leaf block
	t.Run("file at UnixFSChunker threshold (single block)", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// File exactly at chunk size = single block (no links)
		seed := chunkSeedForProfile(exp)
		cidStr := node.IPFSAddDeterministicBytes(int64(exp.ChunkSize), seed)

		// Verify block structure based on raw leaves setting
		if exp.RawLeaves {
			// v1-2025: single block is a raw leaf (no dag-pb structure)
			codec := node.IPFS("cid", "format", "-f", "%c", cidStr).Stdout.Trimmed()
			require.Equal(t, "raw", codec, "single block file is raw leaf")
		} else {
			// v0-2015: single block is a dag-pb node with no links (TFile type)
			root, err := node.InspectPBNode(cidStr)
			assert.NoError(t, err)
			require.Equal(t, 0, len(root.Links), "single block file has no links")
			fsType, err := node.UnixFSDataType(cidStr)
			require.NoError(t, err)
			require.Equal(t, ft.TFile, fsType, "single block file is dag-pb wrapped (TFile)")
		}

		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		if exp.FileAtChunkSizeCID != "" {
			require.Equal(t, exp.FileAtChunkSizeCID, cidStr, "expected deterministic CID for file at chunk size")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_file-at-chunk-size.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})

	// Test: file 1 byte over UnixFSChunker threshold (2 blocks)
	// A file 1 byte over chunk size requires 2 chunks.
	// Root is a dag-pb node with 2 links. Leaf encoding depends on profile:
	// - v0-2015: leaf blocks are dag-pb wrapped TFile nodes
	// - v1-2025: leaf blocks are raw codec blocks
	t.Run("file 1 byte over UnixFSChunker threshold (2 blocks)", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// File +1 byte over chunk size = 2 blocks
		seed := chunkSeedForProfile(exp)
		cidStr := node.IPFSAddDeterministicBytes(int64(exp.ChunkSize)+1, seed)

		root, err := node.InspectPBNode(cidStr)
		assert.NoError(t, err)
		require.Equal(t, 2, len(root.Links), "file over chunk size has 2 links")

		// Verify leaf block encoding
		for _, link := range root.Links {
			if exp.RawLeaves {
				// v1-2025: leaves are raw blocks
				leafCodec := node.IPFS("cid", "format", "-f", "%c", link.Hash.Slash).Stdout.Trimmed()
				require.Equal(t, "raw", leafCodec, "leaf blocks are raw, not dag-pb")
			} else {
				// v0-2015: leaves are dag-pb wrapped (TFile type)
				leafType, err := node.UnixFSDataType(link.Hash.Slash)
				require.NoError(t, err)
				require.Equal(t, ft.TFile, leafType, "leaf blocks are dag-pb wrapped (TFile)")
			}
		}

		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		if exp.FileOverChunkSizeCID != "" {
			require.Equal(t, exp.FileOverChunkSizeCID, cidStr, "expected deterministic CID for file over chunk size")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_file-over-chunk-size.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})

	// Test: file at UnixFSFileMaxLinks threshold (single layer)
	// A file of exactly maxLinks * chunkSize bytes fits in a single DAG layer.
	// - v0-2015: 174 links (174 * 256KiB = 43.5MiB)
	// - v1-2025: 1024 links (1024 * 1MiB = 1GiB)
	t.Run("file at UnixFSFileMaxLinks threshold (single layer)", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// File size = maxLinks * chunkSize (exactly at threshold)
		fileSize := fileAtMaxLinksBytes(exp)
		seed := seedForProfile(exp)
		cidStr := node.IPFSAddDeterministicBytes(fileSize, seed)

		root, err := node.InspectPBNode(cidStr)
		assert.NoError(t, err)
		require.Equal(t, exp.FileMaxLinks, len(root.Links),
			"expected exactly %d links at max", exp.FileMaxLinks)

		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		if exp.FileAtMaxLinksCID != "" {
			require.Equal(t, exp.FileAtMaxLinksCID, cidStr, "expected deterministic CID for file at max links")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_file-at-max-links.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})

	// Test: file 1 byte over UnixFSFileMaxLinks threshold (rebalanced DAG)
	// Adding 1 byte requires an additional chunk, exceeding maxLinks.
	// This triggers DAG rebalancing: chunks are grouped into intermediate nodes,
	// producing a 2-layer DAG with 2 links at the root.
	t.Run("file 1 byte over UnixFSFileMaxLinks threshold (rebalanced DAG)", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// +1 byte over max links threshold triggers DAG rebalancing
		fileSize := fileOverMaxLinksBytes(exp)
		seed := seedForProfile(exp)
		cidStr := node.IPFSAddDeterministicBytes(fileSize, seed)

		root, err := node.InspectPBNode(cidStr)
		assert.NoError(t, err)
		require.Equal(t, 2, len(root.Links), "expected 2 links after DAG rebalancing")

		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		if exp.FileOverMaxLinksCID != "" {
			require.Equal(t, exp.FileOverMaxLinksCID, cidStr, "expected deterministic CID for rebalanced file")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_file-over-max-links.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})

	// Test: directory at HAMTThreshold (basic flat dir)
	// A directory exactly at HAMTThreshold stays as a basic (flat) UnixFS directory.
	// Threshold uses > comparison (not >=), so size == threshold stays basic.
	// Size estimation method depends on profile:
	// - v0-2015 "links": size = sum(nameLen + cidLen)
	// - v1-2025 "block": size = serialized protobuf block size
	t.Run("directory at HAMTThreshold (basic flat dir)", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// Use consistent seed for deterministic CIDs
		seed := hamtSeedForProfile(exp)
		randDir, err := os.MkdirTemp(node.Dir, seed)
		require.NoError(t, err)

		// Create basic (flat) directory exactly at threshold
		basicLastNameLen := exp.DirBasicLastNameLen
		if basicLastNameLen == 0 {
			basicLastNameLen = exp.DirBasicNameLen
		}
		if exp.HAMTSizeEstimation == "block" {
			err = createDirectoryForHAMTBlockEstimation(randDir, exp.DirBasicFiles, exp.DirBasicNameLen, basicLastNameLen, seed)
		} else {
			err = createDirectoryForHAMTLinksEstimation(randDir, exp.DirBasicFiles, exp.DirBasicNameLen, basicLastNameLen, seed)
		}
		require.NoError(t, err)

		cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()

		// Verify UnixFS type is TDirectory (1), not THAMTShard (5)
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.TDirectory, fsType, "expected basic directory (type=1) at exact threshold")

		root, err := node.InspectPBNode(cidStr)
		assert.NoError(t, err)
		require.Equal(t, exp.DirBasicFiles, len(root.Links),
			"expected basic directory with %d links", exp.DirBasicFiles)

		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		// Verify size is exactly at threshold
		if exp.HAMTSizeEstimation == "block" {
			blockSize := getBlockSize(t, node, cidStr)
			require.Equal(t, exp.HAMTThreshold, blockSize,
				"expected basic directory block size to be exactly at threshold (%d), got %d", exp.HAMTThreshold, blockSize)
		}
		if exp.HAMTSizeEstimation == "links" {
			linksSize := 0
			for _, link := range root.Links {
				linksSize += len(link.Name) + cidLen
			}
			require.Equal(t, exp.HAMTThreshold, linksSize,
				"expected basic directory links size to be exactly at threshold (%d), got %d", exp.HAMTThreshold, linksSize)
		}

		if exp.DirBasicCID != "" {
			require.Equal(t, exp.DirBasicCID, cidStr, "expected deterministic CID for basic directory")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_dir-basic.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s (%d files) -> %s", cidStr, exp.DirBasicFiles, carPath)
		}
	})
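
	// Worked example of the two estimation modes (illustrative numbers, not
	// the exact profile values used above): a flat directory of 100 entries
	// with 40-byte names and CIDv0 (34-byte) links sizes as
	//   - "links": 100 * (40 + 34) = 7400 bytes
	//   - "block": the full serialized dag-pb size, which also counts each
	//     link's protobuf tags, length varints, and Tsize field (see
	//     testutils.LinkSerializedSize), so every entry costs a few bytes more.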

	// Test: directory 1 byte over HAMTThreshold (HAMT sharded)
	// A directory 1 byte over HAMTThreshold is converted to a HAMT sharded structure.
	// HAMT distributes entries across buckets using consistent hashing.
	// Root has at most HAMTFanout links (256), with entries distributed across buckets.
	t.Run("directory 1 byte over HAMTThreshold (HAMT sharded)", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init(exp.ProfileArgs...)
		node.StartDaemon()
		defer node.StopDaemon()

		// Use consistent seed for deterministic CIDs
		seed := hamtSeedForProfile(exp)
		randDir, err := os.MkdirTemp(node.Dir, seed)
		require.NoError(t, err)

		// Create HAMT (sharded) directory exactly +1 byte over threshold
		lastNameLen := exp.DirHAMTLastNameLen
		if lastNameLen == 0 {
			lastNameLen = exp.DirHAMTNameLen
		}
		if exp.HAMTSizeEstimation == "block" {
			err = createDirectoryForHAMTBlockEstimation(randDir, exp.DirHAMTFiles, exp.DirHAMTNameLen, lastNameLen, seed)
		} else {
			err = createDirectoryForHAMTLinksEstimation(randDir, exp.DirHAMTFiles, exp.DirHAMTNameLen, lastNameLen, seed)
		}
		require.NoError(t, err)

		cidStr := node.IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()

		// Verify UnixFS type is THAMTShard (5), not TDirectory (1)
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "expected HAMT directory (type=5) when over threshold")

		// HAMT root has at most fanout links (actual count depends on hash distribution)
		root, err := node.InspectPBNode(cidStr)
		assert.NoError(t, err)
		require.LessOrEqual(t, len(root.Links), exp.HAMTFanout,
			"expected HAMT directory root to have <= %d links", exp.HAMTFanout)

		verifyHashFunction(t, node, cidStr, exp.HashFunc)

		if exp.DirHAMTCID != "" {
			require.Equal(t, exp.DirHAMTCID, cidStr, "expected deterministic CID for HAMT directory")
		}

		if exportCARs {
			carPath := filepath.Join(carOutputDir, exp.Name+"_dir-hamt.car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s (%d files, HAMT root links: %d) -> %s",
				cidStr, exp.DirHAMTFiles, len(root.Links), carPath)
		}
	})
}

// verifyCIDVersion checks that the CID has the expected version.
func verifyCIDVersion(t *testing.T, _ *harness.Node, cidStr string, expectedVersion int) {
	t.Helper()
	if expectedVersion == 0 {
		require.True(t, strings.HasPrefix(cidStr, "Qm"),
			"expected CIDv0 (starts with Qm), got: %s", cidStr)
	} else {
		require.True(t, strings.HasPrefix(cidStr, "b"),
			"expected CIDv1 (base32, starts with b), got: %s", cidStr)
	}
}

// verifyHashFunction checks that the CID uses the expected hash function.
func verifyHashFunction(t *testing.T, node *harness.Node, cidStr, expectedHash string) {
	t.Helper()
	// Use ipfs cid format to get hash function info
	// Format string %h gives the hash function name
	res := node.IPFS("cid", "format", "-f", "%h", cidStr)
	hashFunc := strings.TrimSpace(res.Stdout.String())
	require.Equal(t, expectedHash, hashFunc,
		"expected hash function %s, got %s for CID %s", expectedHash, hashFunc, cidStr)
}

// verifyRawLeaves checks whether the CID represents a raw leaf or dag-pb wrapped block.
// For CIDv1: raw leaves have codec 0x55 (raw), wrapped have codec 0x70 (dag-pb).
// For CIDv0: always dag-pb (no raw leaves possible).
func verifyRawLeaves(t *testing.T, node *harness.Node, cidStr string, expectRaw bool) {
	t.Helper()
	// Use ipfs cid format to get codec info
	// Format string %c gives the codec name
	res := node.IPFS("cid", "format", "-f", "%c", cidStr)
	codec := strings.TrimSpace(res.Stdout.String())

	if expectRaw {
		require.Equal(t, "raw", codec,
			"expected raw codec for raw leaves, got %s for CID %s", codec, cidStr)
	} else {
		require.Equal(t, "dag-pb", codec,
			"expected dag-pb codec for wrapped leaves, got %s for CID %s", codec, cidStr)
	}
}

// getBlockSize returns the size of a block in bytes using ipfs block stat.
func getBlockSize(t *testing.T, node *harness.Node, cidStr string) int {
	t.Helper()
	res := node.IPFS("block", "stat", "--enc=json", cidStr)
	var stat struct {
		Size int `json:"Size"`
	}
	require.NoError(t, json.Unmarshal(res.Stdout.Bytes(), &stat))
	return stat.Size
}

// fileAtMaxLinksBytes returns the file size in bytes that produces exactly FileMaxLinks chunks.
func fileAtMaxLinksBytes(exp cidProfileExpectations) int64 {
	return int64(exp.FileMaxLinks) * int64(exp.ChunkSize)
}

// fileOverMaxLinksBytes returns the file size in bytes that triggers DAG rebalancing (+1 byte over max links threshold).
func fileOverMaxLinksBytes(exp cidProfileExpectations) int64 {
	return int64(exp.FileMaxLinks)*int64(exp.ChunkSize) + 1
}

// seedForProfile returns the deterministic seed used in add_test.go for file max links tests.
func seedForProfile(exp cidProfileExpectations) string {
	switch exp.Name {
	case "unixfs-v0-2015", "default":
		return "v0-seed"
	case "unixfs-v1-2025":
		return "v1-2025-seed"
	default:
		return exp.Name + "-seed"
	}
}

// chunkSeedForProfile returns the deterministic seed for chunk threshold tests.
func chunkSeedForProfile(exp cidProfileExpectations) string {
	switch exp.Name {
	case "unixfs-v0-2015", "default":
		return "chunk-v0-seed"
	case "unixfs-v1-2025":
		return "chunk-v1-seed"
	default:
		return "chunk-" + exp.Name + "-seed"
	}
}

// hamtSeedForProfile returns the deterministic seed for HAMT directory tests.
// Uses the same seed for both under/at threshold tests to ensure consistency.
func hamtSeedForProfile(exp cidProfileExpectations) string {
	switch exp.Name {
	case "unixfs-v0-2015", "default":
		return "hamt-unixfs-v0-2015"
	case "unixfs-v1-2025":
		return "hamt-unixfs-v1-2025"
	default:
		return "hamt-" + exp.Name
	}
}

// TestDefaultMatchesExpectedProfile verifies that default ipfs add behavior
// matches the expected profile (currently unixfs-v0-2015).
func TestDefaultMatchesExpectedProfile(t *testing.T) {
	t.Parallel()

	node := harness.NewT(t).NewNode().Init()
	node.StartDaemon()
	defer node.StopDaemon()

	// Small file test
	cidDefault := node.IPFSAddStr("x")

	// Same file with explicit profile
	nodeWithProfile := harness.NewT(t).NewNode().Init(defaultProfile.ProfileArgs...)
	nodeWithProfile.StartDaemon()
	defer nodeWithProfile.StopDaemon()

	cidWithProfile := nodeWithProfile.IPFSAddStr("x")

	require.Equal(t, cidWithProfile, cidDefault,
		"default behavior should match %s profile", defaultProfile.Name)
}

// TestProtobufHelpers verifies the protobuf size calculation helpers.
func TestProtobufHelpers(t *testing.T) {
	t.Parallel()

	t.Run("VarintLen", func(t *testing.T) {
		// Varint encoding: 7 bits per byte, MSB indicates continuation
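		// A value with n significant bits therefore needs ceil(n/7) bytes
		// (minimum 1); the cases below pin down each byte-length boundary.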
		cases := []struct {
			value    uint64
			expected int
		}{
			{0, 1},
			{127, 1},         // 0x7F - max 1-byte varint
			{128, 2},         // 0x80 - min 2-byte varint
			{16383, 2},       // 0x3FFF - max 2-byte varint
			{16384, 3},       // 0x4000 - min 3-byte varint
			{2097151, 3},     // 0x1FFFFF - max 3-byte varint
			{2097152, 4},     // 0x200000 - min 4-byte varint
			{268435455, 4},   // 0xFFFFFFF - max 4-byte varint
			{268435456, 5},   // 0x10000000 - min 5-byte varint
			{34359738367, 5}, // 0x7FFFFFFFF - max 5-byte varint
		}

		for _, tc := range cases {
			got := testutils.VarintLen(tc.value)
			require.Equal(t, tc.expected, got, "VarintLen(%d)", tc.value)
		}
	})

	t.Run("LinkSerializedSize", func(t *testing.T) {
		// Test typical cases for directory links
		cases := []struct {
			nameLen  int
			cidLen   int
			tsize    uint64
			expected int
		}{
			// 255-char name, CIDv0 (34 bytes), tsize=0
			// Inner: 1+1+34 + 1+2+255 + 1+1 = 296
			// Outer: 1 + 2 + 296 = 299
			{255, 34, 0, 299},
			// 255-char name, CIDv1 (36 bytes), tsize=0
			// Inner: 1+1+36 + 1+2+255 + 1+1 = 298
			// Outer: 1 + 2 + 298 = 301
			{255, 36, 0, 301},
			// Short name (10 chars), CIDv1, tsize=0
			// Inner: 1+1+36 + 1+1+10 + 1+1 = 52
			// Outer: 1 + 1 + 52 = 54
			{10, 36, 0, 54},
			// 255-char name, CIDv1, large tsize
			// Inner: 1+1+36 + 1+2+255 + 1+5 = 302 (tsize uses 5-byte varint)
			// Outer: 1 + 2 + 302 = 305
			{255, 36, 34359738367, 305},
		}

		for _, tc := range cases {
			got := testutils.LinkSerializedSize(tc.nameLen, tc.cidLen, tc.tsize)
			require.Equal(t, tc.expected, got, "LinkSerializedSize(%d, %d, %d)", tc.nameLen, tc.cidLen, tc.tsize)
		}
	})

	t.Run("EstimateFilesForBlockThreshold", func(t *testing.T) {
		threshold := 262144
		nameLen := 255
		cidLen := 36
		var tsize uint64 = 0

		numFiles := testutils.EstimateFilesForBlockThreshold(threshold, nameLen, cidLen, tsize)
		require.Equal(t, 870, numFiles, "expected 870 files for threshold 262144")

		numFilesUnder := testutils.EstimateFilesForBlockThreshold(threshold-1, nameLen, cidLen, tsize)
		require.Equal(t, 870, numFilesUnder, "expected 870 files for threshold 262143")

		numFilesOver := testutils.EstimateFilesForBlockThreshold(262185, nameLen, cidLen, tsize)
		require.Equal(t, 871, numFilesOver, "expected 871 files for threshold 262185")
	})
}

test/cli/dag_layout_test.go (new file, 147 lines)
@@ -0,0 +1,147 @@
package cli

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"github.com/ipfs/kubo/test/cli/harness"
	"github.com/stretchr/testify/require"
)

// TestBalancedDAGLayout verifies that kubo uses the "balanced" DAG layout
// (all leaves at same depth) rather than "balanced-packed" (varying leaf depths).
//
// DAG layout differences across implementations:
//
// - balanced: kubo, helia (all leaves at same depth, uniform traversal distance)
// - balanced-packed: singularity (trailing leaves may be at different depths)
// - trickle: kubo --trickle (varying depths, optimized for append-only/streaming)
//
// kubo does not implement balanced-packed. The trickle layout also produces
// non-uniform leaf depths but with different trade-offs: trickle is optimized
// for append-only and streaming reads (no seeking), while balanced-packed
// minimizes node count.
//
// IPIP-499 documents the balanced vs balanced-packed distinction. Files larger
// than dag_width × chunk_size will have different CIDs between implementations
// using different layouts.
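//
// Illustrative shape (example numbers): with 5 chunks and a maximum of 4
// links per node, balanced builds root -> [A, B] with A -> [c1..c4] and
// B -> [c5], so every leaf sits at depth 2; balanced-packed would instead
// attach c5 directly to the root at depth 1, changing the root block and
// therefore the CID.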
//
// Set DAG_LAYOUT_CAR_OUTPUT environment variable to export CAR files.
// Example: DAG_LAYOUT_CAR_OUTPUT=/tmp/dag-layout go test -run TestBalancedDAGLayout -v
func TestBalancedDAGLayout(t *testing.T) {
	t.Parallel()

	carOutputDir := os.Getenv("DAG_LAYOUT_CAR_OUTPUT")
	exportCARs := carOutputDir != ""
	if exportCARs {
		if err := os.MkdirAll(carOutputDir, 0755); err != nil {
			t.Fatalf("failed to create CAR output directory: %v", err)
		}
		t.Logf("CAR export enabled, writing to: %s", carOutputDir)
	}

	t.Run("balanced layout has uniform leaf depth", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init().StartDaemon()

		// Create file that triggers multi-level DAG.
		// For default v0: 175 chunks × 256KiB = 43.75 MiB (just over 174 max links)
		// This creates a 2-level DAG where balanced layout ensures uniform depth.
		fileSize := "45MiB"
		seed := "balanced-test"

		cidStr := node.IPFSAddDeterministic(fileSize, seed)

		// Collect leaf depths by walking DAG
		depths := collectLeafDepths(t, node, cidStr, 0)

		// All leaves must be at same depth for balanced layout
		require.NotEmpty(t, depths, "expected at least one leaf node")
		firstDepth := depths[0]
		for i, d := range depths {
			require.Equal(t, firstDepth, d,
				"leaf %d at depth %d, expected %d (balanced layout requires uniform leaf depth)",
				i, d, firstDepth)
		}
		t.Logf("verified %d leaves all at depth %d (CID: %s)", len(depths), firstDepth, cidStr)

		if exportCARs {
			carPath := filepath.Join(carOutputDir, "balanced_"+fileSize+".car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})

	t.Run("trickle layout has varying leaf depth", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init().StartDaemon()

		fileSize := "45MiB"
		seed := "trickle-test"

		// Add with trickle layout (--trickle flag).
		// Trickle produces non-uniform leaf depths, optimized for append-only
		// and streaming reads (no seeking). This subtest validates the test
		// logic by confirming we can detect varying depths.
		cidStr := node.IPFSAddDeterministic(fileSize, seed, "--trickle")

		depths := collectLeafDepths(t, node, cidStr, 0)

		// Trickle layout should have varying depths
		require.NotEmpty(t, depths, "expected at least one leaf node")
		minDepth, maxDepth := depths[0], depths[0]
		for _, d := range depths {
			if d < minDepth {
				minDepth = d
			}
			if d > maxDepth {
				maxDepth = d
			}
		}
		require.NotEqual(t, minDepth, maxDepth,
			"trickle layout should have varying leaf depths, got uniform depth %d", minDepth)
		t.Logf("verified %d leaves with depths ranging from %d to %d (CID: %s)", len(depths), minDepth, maxDepth, cidStr)

		if exportCARs {
			carPath := filepath.Join(carOutputDir, "trickle_"+fileSize+".car")
			require.NoError(t, node.IPFSDagExport(cidStr, carPath))
			t.Logf("exported: %s -> %s", cidStr, carPath)
		}
	})
}

// collectLeafDepths recursively walks DAG and returns depth of each leaf node.
// A node is a leaf if it's a raw block or a dag-pb node with no links.
func collectLeafDepths(t *testing.T, node *harness.Node, cid string, depth int) []int {
	t.Helper()

	// Check codec to see if this is a raw leaf
	res := node.IPFS("cid", "format", "-f", "%c", cid)
	codec := strings.TrimSpace(res.Stdout.String())
	if codec == "raw" {
		// Raw blocks are always leaves
		return []int{depth}
	}

	// Try to inspect as dag-pb node
	pbNode, err := node.InspectPBNode(cid)
	if err != nil {
		// Can't parse as dag-pb, treat as leaf
		return []int{depth}
	}

	// No links = leaf node
	if len(pbNode.Links) == 0 {
		return []int{depth}
	}

	// Recurse into children
	var depths []int
	for _, link := range pbNode.Links {
		childDepths := collectLeafDepths(t, node, link.Hash.Slash, depth+1)
		depths = append(depths, childDepths...)
	}
	return depths
}

test/cli/files_test.go
@@ -1,11 +1,14 @@
package cli

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"testing"

	ft "github.com/ipfs/boxo/ipld/unixfs"
	"github.com/ipfs/kubo/config"
	"github.com/ipfs/kubo/test/cli/harness"
	"github.com/stretchr/testify/assert"
@@ -459,3 +462,437 @@ func TestFilesChroot(t *testing.T) {
		assert.Contains(t, res.Stderr.String(), "opening repo")
	})
}

// TestFilesMFSImportConfig tests that MFS operations respect Import.* configuration settings.
// These tests verify that `ipfs files` commands use the same import settings as `ipfs add`.
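//
// The same settings applied via UpdateConfig below can also be set on a repo
// from the command line, e.g. (illustrative):
//
//	ipfs config --json Import.CidVersion 1
//	ipfs config --json Import.UnixFSRawLeaves true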
func TestFilesMFSImportConfig(t *testing.T) {
	t.Parallel()

	t.Run("files write respects Import.CidVersion=1", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
		})
		node.StartDaemon()
		defer node.StopDaemon()

		// Write file via MFS
		tempFile := filepath.Join(node.Dir, "test.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte("hello"), 0644))
		node.IPFS("files", "write", "--create", "/test.txt", tempFile)

		// Get CID of written file
		cidStr := node.IPFS("files", "stat", "--hash", "/test.txt").Stdout.Trimmed()

		// Verify CIDv1 format (base32, starts with "b")
		require.True(t, strings.HasPrefix(cidStr, "b"), "expected CIDv1 (starts with b), got: %s", cidStr)
	})

	t.Run("files write respects Import.UnixFSRawLeaves=true", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			cfg.Import.UnixFSRawLeaves = config.True
		})
		node.StartDaemon()
		defer node.StopDaemon()

		tempFile := filepath.Join(node.Dir, "test.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte("hello world"), 0644))
		node.IPFS("files", "write", "--create", "/test.txt", tempFile)

		cidStr := node.IPFS("files", "stat", "--hash", "/test.txt").Stdout.Trimmed()
		codec := node.IPFS("cid", "format", "-f", "%c", cidStr).Stdout.Trimmed()
		require.Equal(t, "raw", codec, "expected raw codec for small file with raw leaves")
	})

	// This test verifies CID parity for single-block files only.
	// Multi-block files will have different CIDs because MFS uses trickle DAG layout
	// while 'ipfs add' uses balanced DAG layout. See "files write vs add for multi-block" test.
	t.Run("single-block file: files write produces same CID as ipfs add", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			cfg.Import.UnixFSRawLeaves = config.True
		})
		node.StartDaemon()
		defer node.StopDaemon()

		tempFile := filepath.Join(node.Dir, "test.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte("hello world"), 0644))
		node.IPFS("files", "write", "--create", "/test.txt", tempFile)

		mfsCid := node.IPFS("files", "stat", "--hash", "/test.txt").Stdout.Trimmed()
		addCid := node.IPFSAddStr("hello world")
		require.Equal(t, addCid, mfsCid, "MFS write should produce same CID as ipfs add for single-block files")
	})

	t.Run("files mkdir respects Import.CidVersion=1", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
		})
		node.StartDaemon()
		defer node.StopDaemon()

		node.IPFS("files", "mkdir", "/testdir")
		cidStr := node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()

		// Verify CIDv1 format
		require.True(t, strings.HasPrefix(cidStr, "b"), "expected CIDv1 (starts with b), got: %s", cidStr)
	})

	t.Run("MFS subdirectory becomes HAMT when exceeding threshold", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			// Use small threshold for faster testing
			cfg.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("1KiB")
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("block")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		node.IPFS("files", "mkdir", "/bigdir")

		content := "x"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))

		// Add enough files to exceed 1KiB threshold
		for i := range 25 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/bigdir/file%02d", i), tempFile)
		}

		cidStr := node.IPFS("files", "stat", "--hash", "/bigdir").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "expected HAMT directory")
	})

	t.Run("MFS root directory becomes HAMT when exceeding threshold", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("1KiB")
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("block")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		content := "x"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))

		// Add files directly to root /
		for i := range 25 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/file%02d", i), tempFile)
		}

		cidStr := node.IPFS("files", "stat", "--hash", "/").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "expected MFS root to become HAMT")
	})

	t.Run("MFS directory reverts from HAMT to basic when items removed", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("1KiB")
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("block")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		node.IPFS("files", "mkdir", "/testdir")

		content := "x"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))

		// Add files to exceed threshold
		for i := range 25 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/testdir/file%02d", i), tempFile)
		}

		// Verify it became HAMT
		cidStr := node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "should be HAMT after adding many files")

		// Remove files to get back below threshold
		for i := range 20 {
			node.IPFS("files", "rm", fmt.Sprintf("/testdir/file%02d", i))
		}

		// Verify it reverted to basic directory
		cidStr = node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err = node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.TDirectory, fsType, "should revert to basic directory after removing files")
	})

	// Note: 'files write' produces DIFFERENT CIDs than 'ipfs add' for multi-block files because
	// MFS uses trickle DAG layout while 'ipfs add' uses balanced DAG layout.
	// Single-block files produce the same CID (tested above in "single-block file: files write...").
	// For multi-block CID compatibility with 'ipfs add', use 'ipfs add --to-files' instead.

	t.Run("files cp preserves original CID", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			cfg.Import.UnixFSRawLeaves = config.True
		})
		node.StartDaemon()
		defer node.StopDaemon()

		// Add file via ipfs add
		originalCid := node.IPFSAddStr("hello world")

		// Copy to MFS
		node.IPFS("files", "cp", fmt.Sprintf("/ipfs/%s", originalCid), "/copied.txt")

		// Verify CID is preserved
		mfsCid := node.IPFS("files", "stat", "--hash", "/copied.txt").Stdout.Trimmed()
		require.Equal(t, originalCid, mfsCid, "files cp should preserve original CID")
	})

	t.Run("add --to-files respects Import config", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			cfg.Import.UnixFSRawLeaves = config.True
		})
		node.StartDaemon()
		defer node.StopDaemon()

		// Create temp file
		tempFile := filepath.Join(node.Dir, "test.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte("hello world"), 0644))

		// Add with --to-files
		addCid := node.IPFS("add", "-Q", "--to-files=/added.txt", tempFile).Stdout.Trimmed()

		// Verify MFS file has same CID
		mfsCid := node.IPFS("files", "stat", "--hash", "/added.txt").Stdout.Trimmed()
		require.Equal(t, addCid, mfsCid)

		// Should be CIDv1 raw leaf
		codec := node.IPFS("cid", "format", "-f", "%c", mfsCid).Stdout.Trimmed()
		require.Equal(t, "raw", codec)
	})

	t.Run("files mkdir respects Import.UnixFSDirectoryMaxLinks", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			// Set low link threshold to trigger HAMT sharding at 5 links
			cfg.Import.UnixFSDirectoryMaxLinks = *config.NewOptionalInteger(5)
			// Also need size estimation enabled for switching to work
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("block")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		// Create directory with 6 files (exceeds max 5 links)
		node.IPFS("files", "mkdir", "/testdir")

		content := "x"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))

		for i := range 6 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/testdir/file%d.txt", i), tempFile)
		}

		// Verify directory became HAMT sharded
		cidStr := node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "expected HAMT directory after exceeding UnixFSDirectoryMaxLinks")
	})

	t.Run("files write respects Import.UnixFSChunker", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			cfg.Import.UnixFSRawLeaves = config.True
			cfg.Import.UnixFSChunker = *config.NewOptionalString("size-1024") // 1KiB chunks
		})
		node.StartDaemon()
		defer node.StopDaemon()

		// Create file larger than chunk size (3KiB)
		data := make([]byte, 3*1024)
		for i := range data {
			data[i] = byte(i % 256)
		}
		tempFile := filepath.Join(node.Dir, "large.bin")
		require.NoError(t, os.WriteFile(tempFile, data, 0644))

		node.IPFS("files", "write", "--create", "/large.bin", tempFile)

		// Verify chunking: 3KiB file with 1KiB chunks should have multiple child blocks
		cidStr := node.IPFS("files", "stat", "--hash", "/large.bin").Stdout.Trimmed()
		dagStatJSON := node.IPFS("dag", "stat", "--enc=json", cidStr).Stdout.Trimmed()
		var dagStat struct {
			UniqueBlocks int `json:"UniqueBlocks"`
		}
		require.NoError(t, json.Unmarshal([]byte(dagStatJSON), &dagStat))
		// With 1KiB chunks on a 3KiB file, we expect 4 blocks (3 leaf + 1 root)
		assert.Greater(t, dagStat.UniqueBlocks, 1, "expected more than 1 block with 1KiB chunker on 3KiB file")
	})

	t.Run("files write with custom chunker produces same CID as ipfs add --trickle", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.CidVersion = *config.NewOptionalInteger(1)
			cfg.Import.UnixFSRawLeaves = config.True
			cfg.Import.UnixFSChunker = *config.NewOptionalString("size-512")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		// Create test data (2KiB to get multiple chunks)
		data := make([]byte, 2048)
		for i := range data {
			data[i] = byte(i % 256)
		}
		tempFile := filepath.Join(node.Dir, "test.bin")
		require.NoError(t, os.WriteFile(tempFile, data, 0644))

		// Add via MFS
		node.IPFS("files", "write", "--create", "/test.bin", tempFile)
		mfsCid := node.IPFS("files", "stat", "--hash", "/test.bin").Stdout.Trimmed()

		// Add via ipfs add with same chunker and trickle (MFS always uses trickle)
		addCid := node.IPFS("add", "-Q", "--chunker=size-512", "--trickle", tempFile).Stdout.Trimmed()

		// CIDs should match when using same chunker + trickle layout
		require.Equal(t, addCid, mfsCid, "MFS and add --trickle should produce same CID with matching chunker")
	})

	t.Run("files mkdir respects Import.UnixFSHAMTDirectoryMaxFanout", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			// Use non-default fanout of 64 (default is 256)
			cfg.Import.UnixFSHAMTDirectoryMaxFanout = *config.NewOptionalInteger(64)
			// Set low link threshold to trigger HAMT at 5 links
			cfg.Import.UnixFSDirectoryMaxLinks = *config.NewOptionalInteger(5)
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("disabled")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		node.IPFS("files", "mkdir", "/testdir")

		content := "x"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))

		// Add 6 files (exceeds MaxLinks=5) to trigger HAMT
		for i := range 6 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/testdir/file%d.txt", i), tempFile)
		}

		// Verify directory became HAMT
		cidStr := node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "expected HAMT directory")

		// Verify the HAMT uses the custom fanout (64) by inspecting the UnixFS Data field.
		fanout, err := node.UnixFSHAMTFanout(cidStr)
		require.NoError(t, err)
		require.Equal(t, uint64(64), fanout, "expected HAMT fanout 64")
	})

	t.Run("files mkdir respects Import.UnixFSHAMTDirectorySizeThreshold", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()
		node.UpdateConfig(func(cfg *config.Config) {
			// Use very small threshold (100 bytes) to trigger HAMT quickly
			cfg.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("100B")
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("block")
		})
		node.StartDaemon()
		defer node.StopDaemon()

		node.IPFS("files", "mkdir", "/testdir")

		content := "test content"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))

		// Add 3 files - each link adds ~40-50 bytes, so 3 should exceed 100B threshold
		for i := range 3 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/testdir/file%d.txt", i), tempFile)
		}

		// Verify directory became HAMT due to size threshold
		cidStr := node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "expected HAMT directory after exceeding size threshold")
	})

	t.Run("config change takes effect after daemon restart", func(t *testing.T) {
		t.Parallel()
		node := harness.NewT(t).NewNode().Init()

		// Start with high threshold (won't trigger HAMT)
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("256KiB")
			cfg.Import.UnixFSHAMTDirectorySizeEstimation = *config.NewOptionalString("block")
		})
		node.StartDaemon()

		// Create directory with some files
		node.IPFS("files", "mkdir", "/testdir")
		content := "test"
		tempFile := filepath.Join(node.Dir, "content.txt")
		require.NoError(t, os.WriteFile(tempFile, []byte(content), 0644))
		for i := range 3 {
			node.IPFS("files", "write", "--create", fmt.Sprintf("/testdir/file%d.txt", i), tempFile)
		}

		// Verify it's still a basic directory (threshold not exceeded)
		cidStr := node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err := node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.TDirectory, fsType, "should be basic directory with high threshold")

		// Stop daemon
		node.StopDaemon()

		// Change config to use very low threshold
		node.UpdateConfig(func(cfg *config.Config) {
			cfg.Import.UnixFSHAMTDirectorySizeThreshold = *config.NewOptionalBytes("100B")
		})

		// Restart daemon
		node.StartDaemon()
		defer node.StopDaemon()

		// Add one more file - this should trigger HAMT conversion with new threshold
		node.IPFS("files", "write", "--create", "/testdir/file3.txt", tempFile)

		// Verify it became HAMT (new threshold applied)
		cidStr = node.IPFS("files", "stat", "--hash", "/testdir").Stdout.Trimmed()
		fsType, err = node.UnixFSDataType(cidStr)
		require.NoError(t, err)
		require.Equal(t, ft.THAMTShard, fsType, "should be HAMT after daemon restart with lower threshold")
	})
}

@@ -4,6 +4,7 @@ import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"reflect"
	"strings"

@@ -76,7 +77,8 @@ func (n *Node) IPFSAddStr(content string, args ...string) string {
 	return n.IPFSAdd(strings.NewReader(content), args...)
 }
 
 // IPFSAddDeterministic produces a CID of a file of a certain size, filled with deterministically generated bytes based on some seed.
+// Size is specified as a humanize string (e.g., "256KiB", "1MiB").
 // This ensures deterministic CID on the other end, that can be used in tests.
 func (n *Node) IPFSAddDeterministic(size string, seed string, args ...string) string {
 	log.Debugf("node %d adding %s of deterministic pseudo-random data with seed %q and args: %v", n.ID, size, seed, args)
@@ -87,6 +89,17 @@ func (n *Node) IPFSAddDeterministic(size string, seed string, args ...string) string {
 	return n.IPFSAdd(reader, args...)
 }
 
+// IPFSAddDeterministicBytes produces a CID of a file of exactly `size` bytes, filled with deterministically generated bytes based on some seed.
+// Use this when exact byte precision is needed (e.g., threshold tests at T and T+1 bytes).
+func (n *Node) IPFSAddDeterministicBytes(size int64, seed string, args ...string) string {
+	log.Debugf("node %d adding %d bytes of deterministic pseudo-random data with seed %q and args: %v", n.ID, size, seed, args)
+	reader, err := DeterministicRandomReaderBytes(size, seed)
+	if err != nil {
+		panic(err)
+	}
+	return n.IPFSAdd(reader, args...)
+}
+
 func (n *Node) IPFSAdd(content io.Reader, args ...string) string {
 	log.Debugf("node %d adding with args: %v", n.ID, args)
 	fullArgs := []string{"add", "-q"}
@@ -148,9 +161,15 @@ func (n *Node) IPFSDagImport(content io.Reader, cid string, args ...string) error {
 	return res.Err
 }
 
-/*
-func (n *Node) IPFSDagExport(cid string, car *os.File) error {
-	log.Debugf("node %d dag export of %s to %q with args: %v", n.ID, cid, car.Name())
+// IPFSDagExport exports a DAG rooted at cid to a CAR file at carPath.
+func (n *Node) IPFSDagExport(cid string, carPath string) error {
+	log.Debugf("node %d dag export of %s to %q", n.ID, cid, carPath)
+	car, err := os.Create(carPath)
+	if err != nil {
+		return err
+	}
+	defer car.Close()
 
 	res := n.Runner.MustRun(RunRequest{
 		Path: n.IPFSBin,
 		Args: []string{"dag", "export", cid},
@@ -158,4 +177,3 @@ func (n *Node) IPFSDagExport(cid string, car *os.File) error {
 	})
 	return res.Err
 }
-*/
@@ -3,8 +3,77 @@ package harness
import (
	"bytes"
	"encoding/json"

	mdag "github.com/ipfs/boxo/ipld/merkledag"
	ft "github.com/ipfs/boxo/ipld/unixfs"
	pb "github.com/ipfs/boxo/ipld/unixfs/pb"
)

// UnixFSDataType returns the UnixFS DataType for the given CID by fetching the
// raw block and parsing the protobuf. This directly checks the Type field in
// the UnixFS Data message (https://specs.ipfs.tech/unixfs/#data).
//
// Common types:
// - ft.TDirectory (1) = basic flat directory
// - ft.THAMTShard (5) = HAMT sharded directory
func (n *Node) UnixFSDataType(cid string) (pb.Data_DataType, error) {
	log.Debugf("node %d block get %s", n.ID, cid)

	var blockData bytes.Buffer
	res := n.Runner.MustRun(RunRequest{
		Path:    n.IPFSBin,
		Args:    []string{"block", "get", cid},
		CmdOpts: []CmdOpt{RunWithStdout(&blockData)},
	})
	if res.Err != nil {
		return 0, res.Err
	}

	// Parse dag-pb block
	protoNode, err := mdag.DecodeProtobuf(blockData.Bytes())
	if err != nil {
		return 0, err
	}

	// Parse UnixFS data
	fsNode, err := ft.FSNodeFromBytes(protoNode.Data())
	if err != nil {
		return 0, err
	}

	return fsNode.Type(), nil
}

// UnixFSHAMTFanout returns the fanout value for a HAMT shard directory.
// This is only valid for HAMT shards (THAMTShard type).
func (n *Node) UnixFSHAMTFanout(cid string) (uint64, error) {
	log.Debugf("node %d block get %s for fanout", n.ID, cid)

	var blockData bytes.Buffer
	res := n.Runner.MustRun(RunRequest{
		Path:    n.IPFSBin,
		Args:    []string{"block", "get", cid},
		CmdOpts: []CmdOpt{RunWithStdout(&blockData)},
	})
	if res.Err != nil {
		return 0, res.Err
	}

	// Parse dag-pb block
	protoNode, err := mdag.DecodeProtobuf(blockData.Bytes())
	if err != nil {
		return 0, err
	}

	// Parse UnixFS data
	fsNode, err := ft.FSNodeFromBytes(protoNode.Data())
	if err != nil {
		return 0, err
	}

	return fsNode.Fanout(), nil
}

// InspectPBNode uses dag-json output of 'ipfs dag get' to inspect
// "Logical Format" of DAG-PB as defined in
// https://web.archive.org/web/20250403194752/https://ipld.io/specs/codecs/dag-pb/spec/#logical-format
@@ -28,7 +97,6 @@ func (n *Node) InspectPBNode(cid string) (PBNode, error) {
 		return root, err
 	}
 	return root, nil
-
 }
 
 // Define structs to match the JSON for

test/cli/testutils/protobuf.go (new file, 39 lines)
@@ -0,0 +1,39 @@
package testutils

import "math/bits"

// VarintLen returns the number of bytes needed to encode v as a protobuf varint.
func VarintLen(v uint64) int {
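	// bits.Len64(v) is the number of significant bits in v and a varint
	// carries 7 payload bits per byte, so this computes ceil(bits/7) without
	// branching: (9*bits+64)/64 equals ceil(bits/7) for 1 <= bits <= 64, and
	// for v == 0 (bits == 0) it yields 1, the minimum varint length.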
	return int(9*uint32(bits.Len64(v))+64) / 64
}

// LinkSerializedSize calculates the serialized size of a single PBLink in a dag-pb block.
// This matches the calculation in boxo/ipld/unixfs/io/directory.go estimatedBlockSize().
//
// The protobuf wire format for a PBLink is:
//
//	PBNode.Links wrapper tag (1 byte)
//	+ varint length of inner message
//	+ Hash field: tag (1) + varint(cidLen) + cidLen
//	+ Name field: tag (1) + varint(nameLen) + nameLen
//	+ Tsize field: tag (1) + varint(tsize)
func LinkSerializedSize(nameLen, cidLen int, tsize uint64) int {
|
||||
// Inner link message size
|
||||
linkLen := 1 + VarintLen(uint64(cidLen)) + cidLen + // Hash field
|
||||
1 + VarintLen(uint64(nameLen)) + nameLen + // Name field
|
||||
1 + VarintLen(tsize) // Tsize field
|
||||
|
||||
// Outer wrapper: tag (1 byte) + varint(linkLen) + linkLen
|
||||
return 1 + VarintLen(uint64(linkLen)) + linkLen
|
||||
}
|
||||
|
||||
// EstimateFilesForBlockThreshold estimates how many files with given name/cid lengths
|
||||
// will fit under the block size threshold.
|
||||
// Returns the number of files that keeps the block size just under the threshold.
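// For example, with threshold=262144, nameLen=255, cidLen=36, and tsize=0,
// each link serializes to 301 bytes, so (262144 - 4) / 301 = 870 files fit
// (see TestProtobufHelpers).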
|
||||
func EstimateFilesForBlockThreshold(threshold, nameLen, cidLen int, tsize uint64) int {
|
||||
linkSize := LinkSerializedSize(nameLen, cidLen, tsize)
|
||||
// Base overhead for empty directory node (Data field + minimal structure)
|
||||
// Empirically determined to be 4 bytes for dag-pb directories
|
||||
baseOverhead := 4
|
||||
return (threshold - baseOverhead) / linkSize
|
||||
}
|
||||
@ -27,13 +27,19 @@ func (r *randomReader) Read(p []byte) (int, error) {
|
||||
return int(n), nil
|
||||
}
|
||||
|
||||
// createRandomReader produces specified number of pseudo-random bytes
|
||||
// from a seed.
|
||||
// DeterministicRandomReader produces specified number of pseudo-random bytes
|
||||
// from a seed. Size can be specified as a humanize string (e.g., "256KiB", "1MiB").
|
||||
func DeterministicRandomReader(sizeStr string, seed string) (io.Reader, error) {
|
||||
size, err := humanize.ParseBytes(sizeStr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return DeterministicRandomReaderBytes(int64(size), seed)
|
||||
}
|
||||
|
||||
// DeterministicRandomReaderBytes produces exactly `size` pseudo-random bytes
|
||||
// from a seed. Use this when exact byte precision is needed.
|
||||
func DeterministicRandomReaderBytes(size int64, seed string) (io.Reader, error) {
|
||||
// Hash the seed string to a 32-byte key for ChaCha20
|
||||
key := sha256.Sum256([]byte(seed))
|
||||
// Use ChaCha20 for deterministic random bytes
|
||||
@ -42,5 +48,5 @@ func DeterministicRandomReader(sizeStr string, seed string) (io.Reader, error) {
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &randomReader{cipher: cipher, remaining: int64(size)}, nil
|
||||
return &randomReader{cipher: cipher, remaining: size}, nil
|
||||
}
|
||||
|
||||
@ -135,13 +135,13 @@ require (
|
||||
github.com/huin/goupnp v1.3.0 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||
github.com/ipfs/bbloom v0.0.4 // indirect
|
||||
github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981 // indirect
|
||||
github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412 // indirect
|
||||
github.com/ipfs/go-bitfield v1.1.0 // indirect
|
||||
github.com/ipfs/go-block-format v0.2.3 // indirect
|
||||
github.com/ipfs/go-cid v0.6.0 // indirect
|
||||
github.com/ipfs/go-datastore v0.9.0 // indirect
|
||||
github.com/ipfs/go-dsqueue v0.1.2 // indirect
|
||||
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483 // indirect
|
||||
github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709 // indirect
|
||||
github.com/ipfs/go-ipfs-redirects-file v0.1.2 // indirect
|
||||
github.com/ipfs/go-ipld-cbor v0.2.1 // indirect
|
||||
github.com/ipfs/go-ipld-format v0.6.3 // indirect
|
||||
|
||||
@@ -296,8 +296,8 @@ github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
 github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
-github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981 h1:Q3XjjicNTpok8gD0WwbLYZpmbRoykNTiCLbpj3EjnPc=
-github.com/ipfs/boxo v0.36.1-0.20260204011824-2688767ff981/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
+github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412 h1:nfRIkMIhetCWD8jw5ya+FY+jn9ii2c+U5gdkmSS4L1Q=
+github.com/ipfs/boxo v0.36.1-0.20260204203152-f188f79fd412/go.mod h1:92hnRXfP5ScKEIqlq9Ns7LR1dFXEVADKWVGH0fjk83k=
 github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
 github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
 github.com/ipfs/go-block-format v0.2.3 h1:mpCuDaNXJ4wrBJLrtEaGFGXkferrw5eqVvzaHhtFKQk=
@@ -314,8 +314,8 @@ github.com/ipfs/go-ds-leveldb v0.5.2 h1:6nmxlQ2zbp4LCNdJVsmHfs9GP0eylfBNxpmY1csp
 github.com/ipfs/go-ds-leveldb v0.5.2/go.mod h1:2fAwmcvD3WoRT72PzEekHBkQmBDhc39DJGoREiuGmYo=
 github.com/ipfs/go-dsqueue v0.1.2 h1:jBMsgvT9Pj9l3cqI0m5jYpW/aWDYkW4Us6EuzrcSGbs=
 github.com/ipfs/go-dsqueue v0.1.2/go.mod h1:OU94YuMVUIF/ctR7Ysov9PI4gOa2XjPGN9nd8imSv78=
-github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483 h1:FnQqL92YxPX08/dcqE4cCSqEzwVGSdj2wprWHX+cUtM=
-github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260203151407-4b3827ebb483/go.mod h1:YmhRbpaLKg40i9Ogj2+L41tJ+8x50fF8u1FJJD/WNhc=
+github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709 h1:0JiurWPnR7ZtjYW8XdfThOcOU5WlVVGQ1JY4FHHgyu8=
+github.com/ipfs/go-ipfs-cmds v0.15.1-0.20260204204540-af9bcbaf5709/go.mod h1:yZeTCte5zTH66bbEpLPkSog3/ImppCD00DMP7NjYmys=
 github.com/ipfs/go-ipfs-delay v0.0.1 h1:r/UXYyRcddO6thwOnhiznIAiSvxMECGgtv35Xs1IeRQ=
 github.com/ipfs/go-ipfs-delay v0.0.1/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
 github.com/ipfs/go-ipfs-pq v0.0.4 h1:U7jjENWJd1jhcrR8X/xHTaph14PTAK9O+yaLJbjqgOw=

@@ -786,6 +786,7 @@ tests_for_files_api() {
 test_expect_success "can create some files for testing ($EXTRA)" '
 	create_files
 '
+# default: CIDv0, dag-pb for all files (no raw-leaves)
 ROOT_HASH=QmcwKfTMCT7AaeiD92hWjnZn9b6eh9NxnhfSzN5x2vnDpt
 CATS_HASH=Qma88m8ErTGkZHbBWGqy1C7VmEmX8wwNDWNpGyCaNmEgwC
 FILE_HASH=QmQdQt9qooenjeaNhiKHF3hBvmNteB4MQBtgu3jxgf9c7i
@@ -796,20 +797,23 @@ tests_for_files_api() {
 	create_files --raw-leaves
 '

+# partial raw-leaves: initial files created with --raw-leaves, test ops without
 if [ "$EXTRA" = "with-daemon" ]; then
 ROOT_HASH=QmTpKiKcAj4sbeesN6vrs5w3QeVmd4QmGpxRL81hHut4dZ
 CATS_HASH=QmPhPkmtUGGi8ySPHoPu1qbfryLJKKq1GYxpgLyyCruvGe
 test_files_api "($EXTRA, partial raw-leaves)"
 fi

-ROOT_HASH=QmW3dMSU6VNd1mEdpk9S3ZYRuR1YwwoXjGaZhkyK6ru9YU
-CATS_HASH=QmPqWDEg7NoWRX8Y4vvYjZtmdg5umbfsTQ9zwNr12JoLmt
-FILE_HASH=QmRCgHeoKxCqK2Es6M6nPUDVWz19yNQPnsXGsXeuTkSKpN
-TRUNC_HASH=QmckstrVxJuecVD1FHUiURJiU9aPURZWJieeBVHJPACj8L
+# raw-leaves: single-block files become RawNode (CIDv1), dirs stay CIDv0
+ROOT_HASH=QmTHzLiSouBHVTssS8xRzmfWGAvTGhPEjtPdB6pWMQdxJX
+CATS_HASH=QmPJkzbCoBuL379TbHgwF1YbVHnKgiDa5bjqYhe6Lovdms
+FILE_HASH=bafybeibkrazpbejqh3qun7xfnsl7yofl74o4jwhxebpmtrcpavebokuqtm
+TRUNC_HASH=bafybeigwhb3q36yrm37jv5fo2ap6r6eyohckqrxmlejrenex4xlnuxiy3e
 test_files_api "($EXTRA, raw-leaves)" '' --raw-leaves

-ROOT_HASH=QmageRWxC7wWjPv5p36NeAgBAiFdBHaNfxAehBSwzNech2
-CATS_HASH=bafybeig4cpvfu2qwwo3u4ffazhqdhyynfhnxqkzvbhrdbamauthf5mfpuq
+# cidv1 for mkdir: different from raw-leaves since mkdir forces CIDv1 dirs
+ROOT_HASH=QmTLdTaZNj8Mvq1cgYup59ZFJFv1KxptouFSZUZKeq7X3z
+CATS_HASH=bafybeihsqinttigpskqqj63wgalrny3lifvqv5ml7igrirdhlcf73l3wvm
 FILE_HASH=bafybeibkrazpbejqh3qun7xfnsl7yofl74o4jwhxebpmtrcpavebokuqtm
 TRUNC_HASH=bafybeigwhb3q36yrm37jv5fo2ap6r6eyohckqrxmlejrenex4xlnuxiy3e
 if [ "$EXTRA" = "with-daemon" ]; then
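The fixture changes above follow from how the CID envelope is chosen around a multihash. A sketch with go-cid/go-multihash showing where the Qm…, bafkrei…, and bafybei… prefixes come from; the dag-pb values illustrate the string formats only, since real dag-pb blocks hash a protobuf envelope rather than the raw bytes:

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// One sha2-256 multihash; three different CID envelopes around it.
	h, err := mh.Sum([]byte("file contents"), mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	// CIDv0 is always implicit dag-pb + sha2-256, rendered as base58 "Qm...".
	fmt.Println(cid.NewCidV0(h)) // Qm...

	// With --raw-leaves, a single-block file is addressed directly with the
	// raw codec as CIDv1 ("bafkrei..."), with no dag-pb wrapper at all.
	fmt.Println(cid.NewCidV1(cid.Raw, h)) // bafkrei...

	// Multi-block roots and directories stay dag-pb; encoded as CIDv1 they
	// render as base32 "bafybei...", matching the fixtures above.
	fmt.Println(cid.NewCidV1(cid.DagProtobuf, h)) // bafybei...
}
```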
@@ -823,8 +827,10 @@ tests_for_files_api() {
 	test_cmp hash_expect hash_actual
 '

-ROOT_HASH=bafybeifxnoetaa2jetwmxubv3gqiyaknnujwkkkhdeua63kulm63dcr5wu
-test_files_api "($EXTRA, cidv1 root)"
+# cidv1 root: root upgraded to CIDv1 via chcid, all new dirs/files also CIDv1
+ROOT_HASH=bafybeickjecu37qv6ue54ofk3n4rpm4g4abuofz7yc4qn4skffy263kkou
+CATS_HASH=bafybeihsqinttigpskqqj63wgalrny3lifvqv5ml7igrirdhlcf73l3wvm
+test_files_api "($EXTRA, cidv1 root)"

 if [ "$EXTRA" = "with-daemon" ]; then
 test_expect_success "can update root hash to blake2b-256" '
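The cidv1-root fixtures correspond to upgrading the MFS root with `ipfs files chcid`. Conceptually that is a CID version upgrade: the codec and multihash stay the same, only the encoded form changes. A sketch of that reinterpretation, reusing the default-case ROOT_HASH fixture from above (kubo's real MFS code path does more than this):

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
)

func main() {
	// The CIDv0 root fixture from the default (no raw-leaves) case above.
	v0, err := cid.Decode("QmcwKfTMCT7AaeiD92hWjnZn9b6eh9NxnhfSzN5x2vnDpt")
	if err != nil {
		panic(err)
	}

	// Re-encode the same codec + multihash as CIDv1: the block and its
	// digest are untouched, only the CID's encoded form changes.
	v1 := cid.NewCidV1(v0.Type(), v0.Hash())

	fmt.Println(v0) // Qm...     (base58, implicit dag-pb + sha2-256)
	fmt.Println(v1) // bafybei... (base32, explicit codec + hash)
}
```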
@@ -833,8 +839,9 @@
 	ipfs files stat --hash / > hash_actual &&
 	test_cmp hash_expect hash_actual
 '
-ROOT_HASH=bafykbzaceb6jv27itwfun6wsrbaxahpqthh5be2bllsjtb3qpmly3vji4mlfk
-CATS_HASH=bafykbzacebhpn7rtcjjc5oa4zgzivhs7a6e2tq4uk4px42bubnmhpndhqtjig
+# blake2b-256 root: using blake2b-256 hash instead of sha2-256
+ROOT_HASH=bafykbzaceaebvwrjdw5rfhqqh5miaq3g42yybnrw3kxxxx43ggyttm6xn2zek
+CATS_HASH=bafykbzaceaqvpxs3dfl7su6744jgyvifbusow2tfixdy646chasdwyz2boagc
 FILE_HASH=bafykbzaceca45w2i3o3q3ctqsezdv5koakz7sxsw37ygqjg4w54m2bshzevxy
 TRUNC_HASH=bafykbzaceadeu7onzmlq7v33ytjpmo37rsqk2q6mzeqf5at55j32zxbcdbwig
 test_files_api "($EXTRA, blake2b-256 root)"
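The bafyk… prefix in these fixtures is what a dag-pb CIDv1 looks like when the multihash function is blake2b-256 instead of sha2-256. A sketch showing how the hash function choice alone changes the CID prefix (the sample bytes are arbitrary):

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
	// Ensure the blake2 family is registered with mh.Sum (a no-op on
	// versions that already register it by default).
	_ "github.com/multiformats/go-multihash/register/blake2"
)

func main() {
	data := []byte("some block bytes")

	sha, _ := mh.Sum(data, mh.SHA2_256, -1)
	b2b, _ := mh.Sum(data, mh.Names["blake2b-256"], -1)

	// Same codec and CID version; only the multihash function differs,
	// which is why the fixtures change prefix from bafybei... to bafyk...
	// when the root is switched to blake2b-256.
	fmt.Println(cid.NewCidV1(cid.DagProtobuf, sha)) // bafybei...
	fmt.Println(cid.NewCidV1(cid.DagProtobuf, b2b)) // bafyk...
}
```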
@@ -866,10 +873,12 @@ test_expect_success "enable sharding in config" '

 test_launch_ipfs_daemon_without_network

+# sharding cidv0: HAMT-sharded directory with 100 files, CIDv0
 SHARD_HASH=QmPkwLJTYZRGPJ8Lazr9qPdrLmswPtUjaDbEpmR9jEh1se
 test_sharding "(cidv0)"

-SHARD_HASH=bafybeib46tpawg2d2hhlmmn2jvgio33wqkhlehxrem7wbfvqqikure37rm
+# sharding cidv1: HAMT-sharded directory with 100 files, CIDv1
+SHARD_HASH=bafybeiaulcf7c46pqg3tkud6dsvbgvlnlhjuswcwtfhxts5c2kuvmh5keu
 test_sharding "(cidv1 root)" "--cid-version=1"

 test_kill_ipfs_daemon
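These sharding fixtures exist because a UnixFS directory is flipped to a HAMT shard once its estimated serialized size crosses a threshold (256 KiB by default in boxo); how that size is estimated is what the new Import.HAMTDirectorySizeEstimation option selects. A rough sketch of the decision — the per-entry weights and the lowered test threshold are illustrative assumptions, not boxo's exact math:

```go
package main

import "fmt"

// shouldShard estimates a directory's serialized size from its entries and
// switches to a HAMT shard once it crosses a threshold. This is a sketch of
// the idea only; the real estimator lives in boxo's UnixFS code.
func shouldShard(names []string, cidLen int, threshold int) bool {
	total := 0
	for _, name := range names {
		total += len(name) + cidLen // rough per-link cost: name + CID bytes
	}
	return total > threshold
}

func main() {
	// 100 small files, as in the test_sharding fixtures above.
	names := make([]string, 100)
	for i := range names {
		names[i] = fmt.Sprintf("file-%d", i)
	}
	// Under the default 256 KiB threshold this directory stays flat...
	fmt.Println(shouldShard(names, 34, 256*1024)) // false: roughly 4 KiB
	// ...so tests lower the threshold in config to force sharding on a
	// small fixture, as the "enable sharding in config" step suggests.
	fmt.Println(shouldShard(names, 34, 1024)) // true
}
```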