mirror of
https://github.com/ipfs/kubo.git
synced 2026-02-21 10:27:46 +08:00
* feat(config): Import.* and unixfs-v1-2025 profile
implements IPIP-499: add config options for controlling UnixFS DAG
determinism and introduces `unixfs-v1-2025` and `unixfs-v0-2015`
profiles for cross-implementation CID reproducibility.
changes:
- add Import.* fields: HAMTDirectorySizeEstimation, SymlinkMode,
DAGLayout, IncludeEmptyDirectories, IncludeHidden
- add validation for all Import.* config values
- add unixfs-v1-2025 profile (recommended for new data)
- add unixfs-v0-2015 profile (alias: legacy-cid-v0)
- remove deprecated test-cid-v1 and test-cid-v1-wide profiles
- wire Import.HAMTSizeEstimationMode() to boxo globals
- update go.mod to use boxo with SizeEstimationMode support
ref: https://specs.ipfs.tech/ipips/ipip-0499/
* feat(add): add --dereference-symlinks, --empty-dirs, --hidden CLI flags
add CLI flags for controlling file collection behavior during ipfs add:
- `--dereference-symlinks`: recursively resolve symlinks to their target
content (replaces deprecated --dereference-args which only worked on
CLI arguments). wired through go-ipfs-cmds to boxo's SerialFileOptions.
- `--empty-dirs` / `-E`: include empty directories (default: true)
- `--hidden` / `-H`: include hidden files (default: false)
these flags are CLI-only and not wired to Import.* config options because
go-ipfs-cmds library handles input file filtering before the directory
tree is passed to kubo. removed unused Import.UnixFSSymlinkMode config
option that was defined but never actually read by the CLI.
also:
- wire --trickle to Import.UnixFSDAGLayout config default
- update go-ipfs-cmds to v0.15.1-0.20260117043932-17687e216294
- add SYMLINK HANDLING section to ipfs add help text
- add CLI tests for all three flags
ref: https://github.com/ipfs/specs/pull/499
* test(add): add CID profile tests and wire SizeEstimationMode
add comprehensive test suite for UnixFS CID determinism per IPIP-499:
- verify exact HAMT threshold boundary for both estimation modes:
- v0-2015 (links): sum(name_len + cid_len) == 262144
- v1-2025 (block): serialized block size == 262144
- verify HAMT triggers at threshold + 1 byte for both profiles
- add all deterministic CIDs for cross-implementation testing
also wires SizeEstimationMode through CLI/API, allowing
Import.UnixFSHAMTSizeEstimation config to take effect.
bumps boxo to ipfs/boxo@6707376 which aligns HAMT threshold with
JS implementation (uses > instead of >=), fixing CID determinism
at the exact 256 KiB boundary.
* feat(add): --dereference-symlinks now resolves all symlinks
Previously, resolving symlinks required two flags:
- --dereference-args: resolved symlinks passed as CLI arguments
- --dereference-symlinks: resolved symlinks inside directories
Now --dereference-symlinks handles both cases. Users only need one flag
to fully dereference symlinks when adding files to IPFS.
The deprecated --dereference-args still works for backwards compatibility
but is no longer necessary.
* chore: update boxo and improve changelog
- update boxo to ebdaf07c (nil filter fix, thread-safety docs)
- simplify changelog for IPIP-499 section
- shorten test names, move context to comments
* chore: update boxo to 5cf22196
* chore: apply suggestions from code review
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
* test(add): verify balanced DAG layout produces uniform leaf depth
add test that confirms kubo uses balanced layout (all leaves at same
depth) rather than balanced-packed (varying depths). creates 45MiB file
to trigger multi-level DAG and walks it to verify leaf depth uniformity.
includes trickle subtest to validate test logic can detect varying depths.
supports CAR export via DAG_LAYOUT_CAR_OUTPUT env var for test vectors.
* chore(deps): update boxo to 6141039ad8ef
switches to 6141039ad8
changes since 5cf22196ad0b:
- refactor(unixfs): use arithmetic for exact block size calculation
- refactor(unixfs): unify size tracking and make SizeEstimationMode immutable
- feat(unixfs): optimize SizeEstimationBlock and add mode/mtime tests
also clarifies that directory sharding globals affect both `ipfs add` and MFS.
* test(cli): improve HAMT threshold tests with exact +1 byte verification
- add UnixFSDataType() helper to directly check UnixFS type via protobuf
- refactor threshold tests to use exact +1 byte calculations instead of +1 file
- verify directory type directly (ft.TDirectory vs ft.THAMTShard) instead of
inferring from link count
- clean up helper function signatures by removing unused cidLength parameter
* test(cli): consolidate profile tests into cid_profiles_test.go
remove duplicate profile threshold tests from add_test.go since they
are fully covered by the data-driven tests in cid_profiles_test.go.
changes:
- improve test names to describe what threshold is being tested
- add inline documentation explaining each test's purpose
- add byte-precise helper IPFSAddDeterministicBytes for threshold tests
- remove ~200 lines of duplicated test code from add_test.go
- keep non-profile tests (pinning, symlinks, hidden files) in add_test.go
* chore: update to rebased boxo and go-ipfs-cmds PRs
* docs: add HAMT threshold fix details to changelog
* feat(mfs): use Import config for CID version and hash function
make MFS commands (files cp, files write, files mkdir, files chcid)
respect Import.CidVersion and Import.HashFunction config settings
when CLI options are not explicitly provided.
also add tests for:
- files write respects Import.UnixFSRawLeaves=true
- single-block file: files write produces same CID as ipfs add
- updated comments clarifying CID parity with ipfs add
* feat(files): wire Import.UnixFSChunker and UnixFSDirectoryMaxLinks to MFS
`ipfs files` commands now respect these Import.* config options:
- UnixFSChunker: configures chunk size for `files write`
- UnixFSDirectoryMaxLinks: triggers HAMT sharding in `files mkdir`
- UnixFSHAMTDirectorySizeEstimation: controls size estimation mode
previously, MFS used hardcoded defaults ignoring user config.
changes:
- config/import.go: add UnixFSSplitterFunc() returning chunk.SplitterGen
- core/node/core.go: pass chunker, maxLinks, sizeEstimationMode to
mfs.NewRoot() via new boxo RootOption API
- core/commands/files.go: pass maxLinks and sizeEstimationMode to
mfs.Mkdir() and ensureContainingDirectoryExists(); document that
UnixFSFileMaxLinks doesn't apply to files write (trickle DAG limitation)
- test/cli/files_test.go: add tests for UnixFSDirectoryMaxLinks and
UnixFSChunker, including CID parity test with `ipfs add --trickle`
related: boxo@54e044f1b265
* feat(files): wire Import.UnixFSHAMTDirectoryMaxFanout and UnixFSHAMTDirectorySizeThreshold
wire remaining HAMT config options to MFS root:
- Import.UnixFSHAMTDirectoryMaxFanout via mfs.WithMaxHAMTFanout
- Import.UnixFSHAMTDirectorySizeThreshold via mfs.WithHAMTShardingSize
add CLI tests:
- files mkdir respects Import.UnixFSHAMTDirectoryMaxFanout
- files mkdir respects Import.UnixFSHAMTDirectorySizeThreshold
- config change takes effect after daemon restart
add UnixFSHAMTFanout() helper to test harness
update boxo to ac97424d99ab90e097fc7c36f285988b596b6f05
* fix(mfs): single-block files in CIDv1 dirs now produce raw CIDs
problem: `ipfs files write` in CIDv1 directories wrapped single-block
files in dag-pb even when raw-leaves was enabled, producing different
CIDs than `ipfs add --raw-leaves` for the same content.
fix: boxo now collapses single-block ProtoNode wrappers (with no
metadata) to RawNode in DagModifier.GetNode(). files with mtime/mode
stay as dag-pb since raw blocks cannot store UnixFS metadata.
also fixes sparse file writes where writing past EOF would lose data
because expandSparse didn't update the internal node pointer.
updates boxo to v0.36.1-0.20260203003133-7884ae23aaff
updates t0250-files-api.sh test hashes to match new behavior
* chore(test): use Go 1.22+ range-over-int syntax
* chore: update boxo to c6829fe26860
- fix typo in files write help text
- update boxo with CI fixes (gofumpt, race condition in test)
* chore: update go-ipfs-cmds to 192ec9d15c1f
includes binary content types fix: gzip, zip, vnd.ipld.car, vnd.ipld.raw,
vnd.ipfs.ipns-record
* chore: update boxo to 0a22cde9225c
includes refactor of maxLinks check in addLinkChild (review feedback).
* ci: fix helia-interop and improve caching
skip '@helia/mfs - should have the same CID after creating a file' test
until helia implements IPIP-499 (tracking: https://github.com/ipfs/helia/issues/941)
the test fails because kubo now collapses single-block files to raw CIDs
while helia explicitly uses reduceSingleLeafToSelf: false
changes:
- run aegir directly instead of helia-interop binary (binary ignores --grep flags)
- cache node_modules keyed by @helia/interop version from npm registry
- skip npm install on cache hit (matches ipfs-webui caching pattern)
* chore: update boxo to 1e30b954
includes latest upstream changes from boxo main
* chore: update go-ipfs-cmds to 1b2a641ed6f6
* chore: update boxo to f188f79fd412
switches to boxo@main after merging https://github.com/ipfs/boxo/pull/1088
* chore: update go-ipfs-cmds to af9bcbaf5709
switches to go-ipfs-cmds@master after merging https://github.com/ipfs/go-ipfs-cmds/pull/315
---------
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
441 lines
14 KiB
Go
package config

import (
	"fmt"
	"net"
	"time"
)

// Transformer is a function which takes configuration and applies some filter to it.
type Transformer func(c *Config) error

// Profile contains the profile transformer and the description of the profile.
type Profile struct {
	// Description briefly describes the functionality of the profile.
	Description string

	// Transform takes ipfs configuration and applies the profile to it.
	Transform Transformer

	// InitOnly specifies that this profile can only be applied on init.
	InitOnly bool
}

// defaultServerFilters is a list of IPv4 and IPv6 prefixes that are private, local only, or unrouteable
// according to https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
// and https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
var defaultServerFilters = []string{
	"/ip4/10.0.0.0/ipcidr/8",
	"/ip4/100.64.0.0/ipcidr/10",
	"/ip4/169.254.0.0/ipcidr/16",
	"/ip4/172.16.0.0/ipcidr/12",
	"/ip4/192.0.0.0/ipcidr/24",
	"/ip4/192.0.2.0/ipcidr/24",
	"/ip4/192.168.0.0/ipcidr/16",
	"/ip4/198.18.0.0/ipcidr/15",
	"/ip4/198.51.100.0/ipcidr/24",
	"/ip4/203.0.113.0/ipcidr/24",
	"/ip4/240.0.0.0/ipcidr/4",
	"/ip6/100::/ipcidr/64",
	"/ip6/2001:2::/ipcidr/48",
	"/ip6/2001:db8::/ipcidr/32",
	"/ip6/fc00::/ipcidr/7",
	"/ip6/fe80::/ipcidr/10",
}

// Profiles is a map holding configuration transformers. Docs are in docs/config.md.
var Profiles = map[string]Profile{
	"server": {
		Description: `Disables local host discovery, recommended when
running IPFS on machines with public IPv4 addresses.`,

		Transform: func(c *Config) error {
			c.Addresses.NoAnnounce = appendSingle(c.Addresses.NoAnnounce, defaultServerFilters)
			c.Swarm.AddrFilters = appendSingle(c.Swarm.AddrFilters, defaultServerFilters)
			c.Discovery.MDNS.Enabled = false
			c.Swarm.DisableNatPortMap = true
			return nil
		},
	},

	"local-discovery": {
		Description: `Sets default values to fields affected by the server
profile, enables discovery in local networks.`,

		Transform: func(c *Config) error {
			c.Addresses.NoAnnounce = deleteEntries(c.Addresses.NoAnnounce, defaultServerFilters)
			c.Swarm.AddrFilters = deleteEntries(c.Swarm.AddrFilters, defaultServerFilters)
			c.Discovery.MDNS.Enabled = true
			c.Swarm.DisableNatPortMap = false
			return nil
		},
	},
	"test": {
		Description: `Reduces external interference of IPFS daemon, this
is useful when using the daemon in test environments.`,

		Transform: func(c *Config) error {
			c.Addresses.API = Strings{"/ip4/127.0.0.1/tcp/0"}
			c.Addresses.Gateway = Strings{"/ip4/127.0.0.1/tcp/0"}
			c.Addresses.Swarm = []string{
				"/ip4/127.0.0.1/tcp/0",
			}

			c.Swarm.DisableNatPortMap = true
			c.Routing.LoopbackAddressesOnLanDHT = True

			c.Bootstrap = []string{}
			c.Discovery.MDNS.Enabled = false
			c.AutoTLS.Enabled = False
			c.AutoConf.Enabled = False

			// Explicitly set autoconf-controlled fields to empty when autoconf is disabled
			c.DNS.Resolvers = map[string]string{}
			c.Routing.DelegatedRouters = []string{}
			c.Ipns.DelegatedPublishers = []string{}
			return nil
		},
	},
	"default-networking": {
		Description: `Restores default network settings.
Inverse profile of the test profile.`,

		Transform: func(c *Config) error {
			c.Addresses = addressesConfig()

			// Use AutoConf system for bootstrap peers
			c.Bootstrap = []string{AutoPlaceholder}
			c.AutoConf.Enabled = Default
			c.AutoConf.URL = nil // Clear URL to use implicit default

			c.Swarm.DisableNatPortMap = false
			c.Discovery.MDNS.Enabled = true
			c.AutoTLS.Enabled = Default
			return nil
		},
	},
	"default-datastore": {
		Description: `Configures the node to use the default datastore (flatfs).

Read the "flatfs" profile description for more information on this datastore.

This profile may only be applied when first initializing the node.
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = flatfsSpec()
			return nil
		},
	},
	"flatfs": {
		Description: `Configures the node to use the flatfs datastore.

This is the most battle-tested and reliable datastore.
You should use this datastore if:

* You need a very simple and very reliable datastore, and you trust your
filesystem. This datastore stores each block as a separate file in the
underlying filesystem so it's unlikely to lose data unless there's an issue
with the underlying file system.
* You need to run garbage collection in a way that reclaims free space as soon as possible.
* You want to minimize memory usage.
* You are ok with the default speed of data import, or prefer to use --nocopy.

See configuration documentation at:
https://github.com/ipfs/kubo/blob/master/docs/datastores.md#flatfs

NOTE: This profile may only be applied when first initializing node at IPFS_PATH
via 'ipfs init --profile flatfs'
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = flatfsSpec()
			return nil
		},
	},
	"flatfs-measure": {
		Description: `Configures the node to use the flatfs datastore with metrics tracking wrapper.
Additional '*_datastore_*' metrics will be exposed on /debug/metrics/prometheus

NOTE: This profile may only be applied when first initializing node at IPFS_PATH
via 'ipfs init --profile flatfs-measure'
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = flatfsSpecMeasure()
			return nil
		},
	},
	"pebbleds": {
		Description: `Configures the node to use the pebble high-performance datastore.

Pebble is a LevelDB/RocksDB inspired key-value store focused on performance
and internal usage by CockroachDB.
You should use this datastore if:

- You need a datastore that is focused on performance.
- You need reliability by default, but may choose to disable WAL for maximum performance when reliability is not critical.
- This datastore is good for multi-terabyte data sets.
- May benefit from tuning depending on read/write patterns and throughput.
- Performance is helped significantly by running on a system with plenty of memory.

See configuration documentation at:
https://github.com/ipfs/kubo/blob/master/docs/datastores.md#pebbleds

NOTE: This profile may only be applied when first initializing node at IPFS_PATH
via 'ipfs init --profile pebbleds'
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = pebbleSpec()
			return nil
		},
	},
	"pebbleds-measure": {
		Description: `Configures the node to use the pebble datastore with metrics tracking wrapper.
Additional '*_datastore_*' metrics will be exposed on /debug/metrics/prometheus

NOTE: This profile may only be applied when first initializing node at IPFS_PATH
via 'ipfs init --profile pebbleds-measure'
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = pebbleSpecMeasure()
			return nil
		},
	},
	"badgerds": {
		Description: `Configures the node to use the legacy badgerv1 datastore.

NOTE: this is badger 1.x, which has known bugs and is no longer supported by the upstream team.
It is provided here only for pre-existing users, allowing them to migrate away to a more modern datastore.

Other caveats:

* This datastore will not properly reclaim space when your datastore is
smaller than several gigabytes. If you run IPFS with --enable-gc, you plan
on storing very little data in your IPFS node, and disk usage is more
critical than performance, consider using flatfs.
* This datastore uses up to several gigabytes of memory.
* Good for medium-size datastores, but may run into performance issues
if your dataset is bigger than a terabyte.

See configuration documentation at:
https://github.com/ipfs/kubo/blob/master/docs/datastores.md#badgerds

NOTE: This profile may only be applied when first initializing node at IPFS_PATH
via 'ipfs init --profile badgerds'
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = badgerSpec()
			return nil
		},
	},
	"badgerds-measure": {
		Description: `Configures the node to use the legacy badgerv1 datastore with metrics wrapper.
Additional '*_datastore_*' metrics will be exposed on /debug/metrics/prometheus

NOTE: This profile may only be applied when first initializing node at IPFS_PATH
via 'ipfs init --profile badgerds-measure'
`,

		InitOnly: true,
		Transform: func(c *Config) error {
			c.Datastore.Spec = badgerSpecMeasure()
			return nil
		},
	},
	"lowpower": {
		Description: `Reduces daemon overhead on the system. May affect node
functionality - performance of content discovery and data
fetching may be degraded.
`,
		Transform: func(c *Config) error {
			// Disable "server" services (dht, autonat, limited relay)
			c.Routing.Type = NewOptionalString("autoclient")
			c.AutoNAT.ServiceMode = AutoNATServiceDisabled
			c.Swarm.RelayService.Enabled = False

			// Keep bare minimum connections around
			lowWater := int64(20)
			highWater := int64(40)
			gracePeriod := time.Minute
			c.Swarm.ConnMgr.Type = NewOptionalString("basic")
			c.Swarm.ConnMgr.LowWater = &OptionalInteger{value: &lowWater}
			c.Swarm.ConnMgr.HighWater = &OptionalInteger{value: &highWater}
			c.Swarm.ConnMgr.GracePeriod = &OptionalDuration{&gracePeriod}
			return nil
		},
	},
	"announce-off": {
		Description: `Disables Provide system (announcing to Amino DHT).

USE WITH CAUTION:
The main use case for this is setups with manual Peering.Peers config.
Data from this node will not be announced on the DHT. This will make
DHT-based routing and data retrieval impossible if this node is the only
one hosting it, and other peers are not already connected to it.
`,
		Transform: func(c *Config) error {
			c.Provide.Enabled = False
			c.Provide.DHT.Interval = NewOptionalDuration(0) // 0 disables periodic reprovide
			return nil
		},
	},
	"announce-on": {
		Description: `Re-enables Provide system (reverts announce-off profile).`,
		Transform: func(c *Config) error {
			c.Provide.Enabled = True
			c.Provide.DHT.Interval = NewOptionalDuration(DefaultProvideDHTInterval) // have to apply explicit default because nil would be ignored
			return nil
		},
	},
	"randomports": {
		Description: `Use a random port number for swarm.`,

		Transform: func(c *Config) error {
			port, err := getAvailablePort()
			if err != nil {
				return err
			}
			c.Addresses.Swarm = []string{
				fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", port),
				fmt.Sprintf("/ip6/::/tcp/%d", port),
			}
			return nil
		},
	},
	"unixfs-v0-2015": {
		Description: `Legacy UnixFS import profile for backward-compatible CID generation.
Produces CIDv0 with no raw leaves, sha2-256, 256 KiB chunks, and
link-based HAMT size estimation. Use only when legacy CIDs are required.
See https://github.com/ipfs/specs/pull/499. Alias: legacy-cid-v0`,
		Transform: applyUnixFSv02015,
	},
	"legacy-cid-v0": {
		Description: `Alias for unixfs-v0-2015 profile.`,
		Transform: applyUnixFSv02015,
	},
	"unixfs-v1-2025": {
		Description: `Recommended UnixFS import profile for cross-implementation CID determinism.
Uses CIDv1, raw leaves, sha2-256, 1 MiB chunks, 1024 links per file node,
256 HAMT fanout, and block-based size estimation for HAMT threshold.
See https://github.com/ipfs/specs/pull/499`,
		Transform: func(c *Config) error {
			c.Import.CidVersion = *NewOptionalInteger(1)
			c.Import.UnixFSRawLeaves = True
			c.Import.UnixFSChunker = *NewOptionalString("size-1048576") // 1 MiB
			c.Import.HashFunction = *NewOptionalString("sha2-256")
			c.Import.UnixFSFileMaxLinks = *NewOptionalInteger(1024)
			c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0)
			c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(256)
			c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("256KiB")
			c.Import.UnixFSHAMTDirectorySizeEstimation = *NewOptionalString(HAMTSizeEstimationBlock)
			c.Import.UnixFSDAGLayout = *NewOptionalString(DAGLayoutBalanced)
			return nil
		},
	},
	"autoconf-on": {
		Description: `Sets configuration to use implicit defaults from remote autoconf service.
Bootstrap peers, DNS resolvers, delegated routers, and IPNS delegated publishers are set to "auto".
This profile requires AutoConf to be enabled and configured.`,

		Transform: func(c *Config) error {
			c.Bootstrap = []string{AutoPlaceholder}
			c.DNS.Resolvers = map[string]string{
				".": AutoPlaceholder,
			}
			c.Routing.DelegatedRouters = []string{AutoPlaceholder}
			c.Ipns.DelegatedPublishers = []string{AutoPlaceholder}
			c.AutoConf.Enabled = True
			if c.AutoConf.URL == nil {
				c.AutoConf.URL = NewOptionalString(DefaultAutoConfURL)
			}
			return nil
		},
	},
	"autoconf-off": {
		Description: `Disables AutoConf and sets networking fields to empty for manual configuration.
Bootstrap peers, DNS resolvers, delegated routers, and IPNS delegated publishers are set to empty.
Use this when you want normal networking but prefer manual control over all endpoints.`,

		Transform: func(c *Config) error {
			c.Bootstrap = nil
			c.DNS.Resolvers = nil
			c.Routing.DelegatedRouters = nil
			c.Ipns.DelegatedPublishers = nil
			c.AutoConf.Enabled = False
			return nil
		},
	},
}

func getAvailablePort() (port int, err error) {
	ln, err := net.Listen("tcp", "[::]:0")
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	port = ln.Addr().(*net.TCPAddr).Port
	return port, nil
}

func appendSingle(a []string, b []string) []string {
	out := make([]string, 0, len(a)+len(b))
	m := map[string]bool{}
	for _, f := range a {
		if !m[f] {
			out = append(out, f)
		}
		m[f] = true
	}
	for _, f := range b {
		if !m[f] {
			out = append(out, f)
		}
		m[f] = true
	}
	return out
}

func deleteEntries(arr []string, del []string) []string {
	m := map[string]struct{}{}
	for _, f := range arr {
		m[f] = struct{}{}
	}
	for _, f := range del {
		delete(m, f)
	}
	return mapKeys(m)
}

func mapKeys(m map[string]struct{}) []string {
	out := make([]string, 0, len(m))
	for f := range m {
		out = append(out, f)
	}
	return out
}

// applyUnixFSv02015 applies the legacy UnixFS v0 (2015) import settings.
func applyUnixFSv02015(c *Config) error {
	c.Import.CidVersion = *NewOptionalInteger(0)
	c.Import.UnixFSRawLeaves = False
	c.Import.UnixFSChunker = *NewOptionalString("size-262144") // 256 KiB
	c.Import.HashFunction = *NewOptionalString("sha2-256")
	c.Import.UnixFSFileMaxLinks = *NewOptionalInteger(174)
	c.Import.UnixFSDirectoryMaxLinks = *NewOptionalInteger(0)
	c.Import.UnixFSHAMTDirectoryMaxFanout = *NewOptionalInteger(256)
	c.Import.UnixFSHAMTDirectorySizeThreshold = *NewOptionalBytes("256KiB")
	c.Import.UnixFSHAMTDirectorySizeEstimation = *NewOptionalString(HAMTSizeEstimationLinks)
	c.Import.UnixFSDAGLayout = *NewOptionalString(DAGLayoutBalanced)
	return nil
}