Mirror of https://github.com/ipfs/kubo.git
* feat(config): Import.* and unixfs-v1-2025 profile
implements IPIP-499: adds config options for controlling UnixFS DAG
determinism and introduces `unixfs-v1-2025` and `unixfs-v0-2015`
profiles for cross-implementation CID reproducibility.
changes:
- add Import.* fields: HAMTDirectorySizeEstimation, SymlinkMode,
DAGLayout, IncludeEmptyDirectories, IncludeHidden
- add validation for all Import.* config values
- add unixfs-v1-2025 profile (recommended for new data)
- add unixfs-v0-2015 profile (alias: legacy-cid-v0)
- remove deprecated test-cid-v1 and test-cid-v1-wide profiles
- wire Import.HAMTSizeEstimationMode() to boxo globals
- update go.mod to use boxo with SizeEstimationMode support
ref: https://specs.ipfs.tech/ipips/ipip-0499/
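to make the validation concrete, here is a minimal sketch, assuming the mode
names `links` (v0-2015) and `block` (v1-2025) used later in this log; the
identifiers are illustrative, not kubo's actual code:

```go
// Hypothetical sketch of validating one of the new Import.* values:
// mapping a size-estimation mode string to the two behaviors the
// unixfs-v0-2015 and unixfs-v1-2025 profiles select between.
package main

import "fmt"

type SizeEstimationMode int

const (
	EstimateLinks SizeEstimationMode = iota // "links": unixfs-v0-2015 behavior
	EstimateBlock                           // "block": unixfs-v1-2025 behavior
)

func parseEstimationMode(s string) (SizeEstimationMode, error) {
	switch s {
	case "links":
		return EstimateLinks, nil
	case "block":
		return EstimateBlock, nil
	default:
		return 0, fmt.Errorf("invalid size estimation mode %q (want \"links\" or \"block\")", s)
	}
}

func main() {
	mode, err := parseEstimationMode("block")
	fmt.Println(mode, err) // 1 <nil>
}
```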
* feat(add): add --dereference-symlinks, --empty-dirs, --hidden CLI flags
add CLI flags for controlling file collection behavior during ipfs add:
- `--dereference-symlinks`: recursively resolve symlinks to their target
content (replaces deprecated --dereference-args which only worked on
CLI arguments). wired through go-ipfs-cmds to boxo's SerialFileOptions.
- `--empty-dirs` / `-E`: include empty directories (default: true)
- `--hidden` / `-H`: include hidden files (default: false)
these flags are CLI-only and not wired to Import.* config options because
the go-ipfs-cmds library handles input file filtering before the directory
tree is passed to kubo. removed the unused Import.UnixFSSymlinkMode config
option that was defined but never actually read by the CLI (see the
file-collection sketch below).
also:
- wire --trickle to Import.UnixFSDAGLayout config default
- update go-ipfs-cmds to v0.15.1-0.20260117043932-17687e216294
- add SYMLINK HANDLING section to ipfs add help text
- add CLI tests for all three flags
ref: https://github.com/ipfs/specs/pull/499
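the collection semantics behind the three flags can be sketched with the
standard library alone; kubo delegates this filtering to go-ipfs-cmds, so the
helper below is purely illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// collect reports which paths under root would be added for the given flag
// settings. Hypothetical helper: the real flags also resolve symlink targets
// recursively, which this sketch only hints at via os.Stat.
func collect(root string, hidden, emptyDirs, derefSymlinks bool) ([]string, error) {
	var out []string
	err := filepath.Walk(root, func(p string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		name := filepath.Base(p)
		if !hidden && p != root && strings.HasPrefix(name, ".") {
			if info.IsDir() {
				return filepath.SkipDir // --hidden=false: skip hidden trees
			}
			return nil // --hidden=false: skip hidden files
		}
		if derefSymlinks && info.Mode()&os.ModeSymlink != 0 {
			// --dereference-symlinks: use the link target's metadata
			if info, err = os.Stat(p); err != nil {
				return err
			}
		}
		if info.IsDir() && !emptyDirs {
			entries, err := os.ReadDir(p)
			if err != nil {
				return err
			}
			if len(entries) == 0 {
				return nil // --empty-dirs=false: drop empty directories
			}
		}
		out = append(out, p)
		return nil
	})
	return out, err
}

func main() {
	paths, err := collect(".", false, true, false)
	fmt.Println(paths, err)
}
```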
* test(add): add CID profile tests and wire SizeEstimationMode
add a comprehensive test suite for UnixFS CID determinism per IPIP-499:
- verify exact HAMT threshold boundary for both estimation modes:
- v0-2015 (links): sum(name_len + cid_len) == 262144
- v1-2025 (block): serialized block size == 262144
- verify HAMT triggers at threshold + 1 byte for both profiles
- add all deterministic CIDs for cross-implementation testing
also wires SizeEstimationMode through CLI/API, allowing
Import.UnixFSHAMTSizeEstimation config to take effect.
bumps boxo to ipfs/boxo@6707376, which aligns the HAMT threshold with the
JS implementation (uses > instead of >=), fixing CID determinism at the
exact 256 KiB boundary. a sketch of both estimates at this boundary follows.
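(hypothetical helpers; the real logic lives in boxo's HAMT directory code)

```go
package main

import "fmt"

const threshold = 256 * 1024 // 262144 bytes

type link struct {
	name string
	cid  []byte
}

// linksEstimate is the v0-2015 ("links") mode: sum of name and CID lengths.
// The v1-2025 ("block") mode instead measures the serialized dag-pb block.
func linksEstimate(links []link) int {
	size := 0
	for _, l := range links {
		size += len(l.name) + len(l.cid)
	}
	return size
}

// shouldShard applies the corrected comparison: shard strictly above the
// threshold (">" rather than ">="), so exactly 262144 bytes stays basic.
func shouldShard(estimate int) bool {
	return estimate > threshold
}

func main() {
	fmt.Println(shouldShard(threshold))     // false: exactly at the boundary
	fmt.Println(shouldShard(threshold + 1)) // true: one byte over triggers HAMT
}
```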
* feat(add): --dereference-symlinks now resolves all symlinks
Previously, resolving symlinks required two flags:
- --dereference-args: resolved symlinks passed as CLI arguments
- --dereference-symlinks: resolved symlinks inside directories
Now --dereference-symlinks handles both cases. Users only need one flag
to fully dereference symlinks when adding files to IPFS.
The deprecated --dereference-args still works for backwards compatibility
but is no longer necessary.
* chore: update boxo and improve changelog
- update boxo to ebdaf07c (nil filter fix, thread-safety docs)
- simplify changelog for IPIP-499 section
- shorten test names, move context to comments
* chore: update boxo to 5cf22196
* chore: apply suggestions from code review
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
* test(add): verify balanced DAG layout produces uniform leaf depth
add a test that confirms kubo uses the balanced layout (all leaves at the
same depth) rather than balanced-packed (varying depths). creates a 45 MiB
file to trigger a multi-level DAG and walks it to verify leaf-depth uniformity.
includes trickle subtest to validate test logic can detect varying depths.
supports CAR export via DAG_LAYOUT_CAR_OUTPUT env var for test vectors.
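the depth check itself reduces to a small tree walk; here is a self-contained
sketch over a generic tree (the actual test walks the dag-pb DAG through the
kubo test harness):

```go
package main

import "fmt"

type node struct {
	children []*node
}

// leafDepths records the depth of every leaf. In a balanced layout all
// recorded depths are equal; a trickle DAG produces varying depths.
func leafDepths(n *node, depth int, out *[]int) {
	if len(n.children) == 0 {
		*out = append(*out, depth)
		return
	}
	for _, c := range n.children {
		leafDepths(c, depth+1, out)
	}
}

func uniform(depths []int) bool {
	for _, d := range depths {
		if d != depths[0] {
			return false
		}
	}
	return true
}

func main() {
	root := &node{children: []*node{
		{children: []*node{{}, {}}},
		{children: []*node{{}}},
	}}
	var ds []int
	leafDepths(root, 0, &ds)
	fmt.Println(ds, uniform(ds)) // [2 2 2] true
}
```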
* chore(deps): update boxo to 6141039ad8ef
changes since 5cf22196ad0b:
- refactor(unixfs): use arithmetic for exact block size calculation
- refactor(unixfs): unify size tracking and make SizeEstimationMode immutable
- feat(unixfs): optimize SizeEstimationBlock and add mode/mtime tests
also clarifies that directory sharding globals affect both `ipfs add` and MFS.
* test(cli): improve HAMT threshold tests with exact +1 byte verification
- add a UnixFSDataType() helper to directly check the UnixFS type via
protobuf (a sketch follows this list)
- refactor threshold tests to use exact +1 byte calculations instead of +1 file
- verify the directory type directly (ft.TDirectory vs ft.THAMTShard) instead
of inferring it from the link count
- clean up helper function signatures by removing unused cidLength parameter
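one plausible shape for that helper, modeled on the ft.FSNodeFromBytes pattern
visible in processLink in the file below (the helper name comes from the
commit text; the body is an assumption):

```go
package main

import (
	"fmt"

	merkledag "github.com/ipfs/boxo/ipld/merkledag"
	ft "github.com/ipfs/boxo/ipld/unixfs"
	pb "github.com/ipfs/boxo/ipld/unixfs/pb"
)

// UnixFSDataType decodes the UnixFS metadata embedded in a dag-pb node and
// returns its type, letting tests assert ft.TDirectory vs ft.THAMTShard
// directly instead of inferring the kind from the link count.
func UnixFSDataType(pn *merkledag.ProtoNode) (pb.Data_DataType, error) {
	d, err := ft.FSNodeFromBytes(pn.Data())
	if err != nil {
		return 0, err
	}
	return d.Type(), nil
}

func main() {
	t, err := UnixFSDataType(ft.EmptyDirNode())
	fmt.Println(t == ft.TDirectory, err) // true <nil>
}
```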
* test(cli): consolidate profile tests into cid_profiles_test.go
remove duplicate profile threshold tests from add_test.go since they
are fully covered by the data-driven tests in cid_profiles_test.go.
changes:
- improve test names to describe what threshold is being tested
- add inline documentation explaining each test's purpose
- add byte-precise helper IPFSAddDeterministicBytes for threshold tests
- remove ~200 lines of duplicated test code from add_test.go
- keep non-profile tests (pinning, symlinks, hidden files) in add_test.go
* chore: update to rebased boxo and go-ipfs-cmds PRs
* docs: add HAMT threshold fix details to changelog
* feat(mfs): use Import config for CID version and hash function
make MFS commands (files cp, files write, files mkdir, files chcid)
respect Import.CidVersion and Import.HashFunction config settings
when CLI options are not explicitly provided.
also add tests for:
- files write respects Import.UnixFSRawLeaves=true
- single-block file: files write produces same CID as ipfs add
- updated comments clarifying CID parity with ipfs add
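the precedence rule is simple: an explicit CLI option wins, otherwise the
Import.* value applies. a minimal sketch (names illustrative):

```go
package main

import "fmt"

// effectiveCidVersion resolves the CID version for an MFS command: an
// explicitly provided CLI option takes precedence over Import.CidVersion.
func effectiveCidVersion(cliSet bool, cliValue, cfgValue int) int {
	if cliSet {
		return cliValue
	}
	return cfgValue
}

func main() {
	fmt.Println(effectiveCidVersion(false, 0, 1)) // 1: config applies
	fmt.Println(effectiveCidVersion(true, 0, 1))  // 0: CLI flag wins
}
```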
* feat(files): wire Import.UnixFSChunker and UnixFSDirectoryMaxLinks to MFS
`ipfs files` commands now respect these Import.* config options:
- UnixFSChunker: configures chunk size for `files write`
- UnixFSDirectoryMaxLinks: triggers HAMT sharding in `files mkdir`
- UnixFSHAMTDirectorySizeEstimation: controls size estimation mode
previously, MFS used hardcoded defaults ignoring user config.
changes:
- config/import.go: add UnixFSSplitterFunc() returning chunk.SplitterGen
- core/node/core.go: pass chunker, maxLinks, sizeEstimationMode to
mfs.NewRoot() via new boxo RootOption API
- core/commands/files.go: pass maxLinks and sizeEstimationMode to
mfs.Mkdir() and ensureContainingDirectoryExists(); document that
UnixFSFileMaxLinks doesn't apply to files write (trickle DAG limitation)
- test/cli/files_test.go: add tests for UnixFSDirectoryMaxLinks and
UnixFSChunker, including CID parity test with `ipfs add --trickle`
related: boxo@54e044f1b265
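for flavor, a hypothetical UnixFSSplitterFunc-style helper built on boxo's
chunker package; per the commit the real one lives in config/import.go, so
treat this as a sketch only:

```go
package main

import (
	"fmt"
	"io"
	"strconv"
	"strings"

	chunker "github.com/ipfs/boxo/chunker"
)

// splitterFromConfig turns an Import.UnixFSChunker-style spec such as
// "size-262144" into a chunker generator (chunker.SplitterGen).
func splitterFromConfig(spec string) (chunker.SplitterGen, error) {
	sizeStr, ok := strings.CutPrefix(spec, "size-")
	if !ok {
		return nil, fmt.Errorf("unsupported chunker spec %q", spec)
	}
	size, err := strconv.ParseInt(sizeStr, 10, 64)
	if err != nil {
		return nil, err
	}
	return func(r io.Reader) chunker.Splitter {
		return chunker.NewSizeSplitter(r, size)
	}, nil
}

func main() {
	gen, err := splitterFromConfig("size-262144")
	fmt.Println(gen != nil, err) // true <nil>
}
```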
* feat(files): wire Import.UnixFSHAMTDirectoryMaxFanout and UnixFSHAMTDirectorySizeThreshold
wire remaining HAMT config options to MFS root:
- Import.UnixFSHAMTDirectoryMaxFanout via mfs.WithMaxHAMTFanout
- Import.UnixFSHAMTDirectorySizeThreshold via mfs.WithHAMTShardingSize
add CLI tests:
- files mkdir respects Import.UnixFSHAMTDirectoryMaxFanout
- files mkdir respects Import.UnixFSHAMTDirectorySizeThreshold
- config change takes effect after daemon restart
add UnixFSHAMTFanout() helper to test harness
update boxo to ac97424d99ab90e097fc7c36f285988b596b6f05
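the options named above follow Go's functional-option pattern. a
self-contained sketch of that pattern with hypothetical local types (not
boxo's real signatures):

```go
package main

import "fmt"

type rootConfig struct {
	maxHAMTFanout int
	shardingSize  int
}

// RootOption mirrors the variadic option style the commit describes for
// mfs.NewRoot; names and defaults here are illustrative.
type RootOption func(*rootConfig)

func WithMaxHAMTFanout(n int) RootOption {
	return func(c *rootConfig) { c.maxHAMTFanout = n }
}

func WithHAMTShardingSize(n int) RootOption {
	return func(c *rootConfig) { c.shardingSize = n }
}

func newRoot(opts ...RootOption) rootConfig {
	c := rootConfig{maxHAMTFanout: 256, shardingSize: 256 * 1024} // assumed defaults
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	c := newRoot(WithMaxHAMTFanout(64), WithHAMTShardingSize(1<<20))
	fmt.Printf("%+v\n", c) // {maxHAMTFanout:64 shardingSize:1048576}
}
```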
* fix(mfs): single-block files in CIDv1 dirs now produce raw CIDs
problem: `ipfs files write` in CIDv1 directories wrapped single-block
files in dag-pb even when raw-leaves was enabled, producing different
CIDs than `ipfs add --raw-leaves` for the same content.
fix: boxo now collapses single-block ProtoNode wrappers (with no
metadata) to RawNode in DagModifier.GetNode(). files with mtime/mode
stay as dag-pb since raw blocks cannot store UnixFS metadata.
also fixes sparse file writes where writing past EOF would lose data
because expandSparse didn't update the internal node pointer.
updates boxo to v0.36.1-0.20260203003133-7884ae23aaff
updates t0250-files-api.sh test hashes to match new behavior
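the collapse rule itself can be stated compactly; a hedged sketch with
hypothetical types (the real change is in boxo's DagModifier.GetNode()):

```go
package main

import "fmt"

type fileNode struct {
	links    int  // number of child blocks
	hasMode  bool // UnixFS mode metadata present
	hasMtime bool // UnixFS mtime metadata present
}

// collapseToRaw reports whether a single-block dag-pb file wrapper can be
// replaced by its raw leaf: raw blocks cannot carry UnixFS metadata, so any
// mode/mtime forces the node to stay dag-pb.
func collapseToRaw(n fileNode, rawLeaves bool) bool {
	return rawLeaves && n.links == 0 && !n.hasMode && !n.hasMtime
}

func main() {
	fmt.Println(collapseToRaw(fileNode{}, true))               // true: plain single-block file
	fmt.Println(collapseToRaw(fileNode{hasMtime: true}, true)) // false: metadata needs dag-pb
}
```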
* chore(test): use Go 1.22+ range-over-int syntax
* chore: update boxo to c6829fe26860
- fix typo in files write help text
- update boxo with CI fixes (gofumpt, race condition in test)
* chore: update go-ipfs-cmds to 192ec9d15c1f
includes binary content types fix: gzip, zip, vnd.ipld.car, vnd.ipld.raw,
vnd.ipfs.ipns-record
* chore: update boxo to 0a22cde9225c
includes refactor of maxLinks check in addLinkChild (review feedback).
* ci: fix helia-interop and improve caching
skip '@helia/mfs - should have the same CID after creating a file' test
until helia implements IPIP-499 (tracking: https://github.com/ipfs/helia/issues/941)
the test fails because kubo now collapses single-block files to raw CIDs
while helia explicitly uses reduceSingleLeafToSelf: false
changes:
- run aegir directly instead of the helia-interop binary (the binary ignores --grep flags)
- cache node_modules keyed by @helia/interop version from npm registry
- skip npm install on cache hit (matches ipfs-webui caching pattern)
* chore: update boxo to 1e30b954
includes latest upstream changes from boxo main
* chore: update go-ipfs-cmds to 1b2a641ed6f6
* chore: update boxo to f188f79fd412
switches to boxo@main after merging https://github.com/ipfs/boxo/pull/1088
* chore: update go-ipfs-cmds to af9bcbaf5709
switches to go-ipfs-cmds@master after merging https://github.com/ipfs/go-ipfs-cmds/pull/315
---------
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
432 lines · 14 KiB · Go
package coreapi

import (
	"context"
	"errors"
	"fmt"

	blockservice "github.com/ipfs/boxo/blockservice"
	bstore "github.com/ipfs/boxo/blockstore"
	"github.com/ipfs/boxo/files"
	filestore "github.com/ipfs/boxo/filestore"
	merkledag "github.com/ipfs/boxo/ipld/merkledag"
	dagtest "github.com/ipfs/boxo/ipld/merkledag/test"
	ft "github.com/ipfs/boxo/ipld/unixfs"
	unixfile "github.com/ipfs/boxo/ipld/unixfs/file"
	uio "github.com/ipfs/boxo/ipld/unixfs/io"
	"github.com/ipfs/boxo/mfs"
	"github.com/ipfs/boxo/path"
	"github.com/ipfs/boxo/provider"
	cid "github.com/ipfs/go-cid"
	cidutil "github.com/ipfs/go-cidutil"
	ds "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
	ipld "github.com/ipfs/go-ipld-format"
	logging "github.com/ipfs/go-log/v2"
	"github.com/ipfs/kubo/config"
	coreiface "github.com/ipfs/kubo/core/coreiface"
	options "github.com/ipfs/kubo/core/coreiface/options"
	"github.com/ipfs/kubo/core/coreunix"
	"github.com/ipfs/kubo/tracing"
	mh "github.com/multiformats/go-multihash"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

var log = logging.Logger("coreapi")

type UnixfsAPI CoreAPI
// Add builds a merkledag node from a reader, adds it to the blockstore,
// and returns the key representing that node.
func (api *UnixfsAPI) Add(ctx context.Context, files files.Node, opts ...options.UnixfsAddOption) (path.ImmutablePath, error) {
	ctx, span := tracing.Span(ctx, "CoreAPI.UnixfsAPI", "Add")
	defer span.End()

	settings, prefix, err := options.UnixfsAddOptions(opts...)
	if err != nil {
		return path.ImmutablePath{}, err
	}

	span.SetAttributes(
		attribute.String("chunker", settings.Chunker),
		attribute.Int("cidversion", settings.CidVersion),
		attribute.Bool("inline", settings.Inline),
		attribute.Int("inlinelimit", settings.InlineLimit),
		attribute.Bool("rawleaves", settings.RawLeaves),
		attribute.Bool("rawleavesset", settings.RawLeavesSet),
		attribute.Int("maxfilelinks", settings.MaxFileLinks),
		attribute.Bool("maxfilelinksset", settings.MaxFileLinksSet),
		attribute.Int("maxdirectorylinks", settings.MaxDirectoryLinks),
		attribute.Bool("maxdirectorylinksset", settings.MaxDirectoryLinksSet),
		attribute.Int("maxhamtfanout", settings.MaxHAMTFanout),
		attribute.Bool("maxhamtfanoutset", settings.MaxHAMTFanoutSet),
		attribute.Int("layout", int(settings.Layout)),
		attribute.Bool("pin", settings.Pin),
		attribute.String("pin-name", settings.PinName),
		attribute.Bool("onlyhash", settings.OnlyHash),
		attribute.Bool("fscache", settings.FsCache),
		attribute.Bool("nocopy", settings.NoCopy),
		attribute.Bool("silent", settings.Silent),
		attribute.Bool("progress", settings.Progress),
	)

	cfg, err := api.repo.Config()
	if err != nil {
		return path.ImmutablePath{}, err
	}
	// check if repo will exceed storage limit if added
	// TODO: this doesn't handle the case where the hashed file is already in blocks (deduplicated)
	// TODO: conditional GC is disabled because there is currently no way to pass the size to the daemon
	//if err := corerepo.ConditionalGC(req.Context(), n, uint64(size)); err != nil {
	//	res.SetError(err, cmds.ErrNormal)
	//	return
	//}

	if settings.NoCopy && !(cfg.Experimental.FilestoreEnabled || cfg.Experimental.UrlstoreEnabled) {
		return path.ImmutablePath{}, errors.New("either the filestore or the urlstore must be enabled to use nocopy, see: https://github.com/ipfs/kubo/blob/master/docs/experimental-features.md#ipfs-filestore")
	}
	addblockstore := api.blockstore
	if !(settings.FsCache || settings.NoCopy) {
		addblockstore = bstore.NewGCBlockstore(api.baseBlocks, api.blockstore)
	}
	exch := api.exchange
	pinning := api.pinning

	if settings.OnlyHash {
		// set up a /dev/null pipeline to simulate adding the data
		dstore := dssync.MutexWrap(ds.NewNullDatastore())
		bs := bstore.NewBlockstore(dstore, bstore.WriteThrough(true)) // we use NewNullDatastore, so ok to always WriteThrough when OnlyHash
		addblockstore = bstore.NewGCBlockstore(bs, nil)               // gclocker will never be used
		exch = nil                                                    // exchange will never be used
		pinning = nil                                                 // pinner will never be used
	}

	bserv := blockservice.New(addblockstore, exch,
		blockservice.WriteThrough(cfg.Datastore.WriteThrough.WithDefault(config.DefaultWriteThrough)),
	) // hash security 001

	var dserv ipld.DAGService = merkledag.NewDAGService(bserv)

	// wrap the DAGService in a providingDagService which provides every block written.
	// note about strategies:
	// - "all" gets handled directly at the blockstore so no need to provide
	// - "roots" gets handled in the pinner
	// - "mfs" gets handled in mfs
	// We need to provide the "pinned" cases only. Added blocks are not
	// going to be provided by the blockstore (wrong strategy for that),
	// nor by the pinner (the pinner doesn't traverse the pinned DAG itself, it only
	// handles roots). This wrapping ensures all blocks of pinned content get provided.
	if settings.Pin && !settings.OnlyHash &&
		(api.providingStrategy&config.ProvideStrategyPinned) != 0 {
		dserv = &providingDagService{dserv, api.provider}
	}

	// add a sync call to the DagService
	// this ensures that data written to the DagService is persisted to the underlying datastore
	// TODO: propagate the Sync function from the datastore through the blockstore, blockservice and dagservice
	var syncDserv *syncDagService
	if settings.OnlyHash {
		syncDserv = &syncDagService{
			DAGService: dserv,
			syncFn:     func() error { return nil },
		}
	} else {
		syncDserv = &syncDagService{
			DAGService: dserv,
			syncFn: func() error {
				rds := api.repo.Datastore()
				if err := rds.Sync(ctx, bstore.BlockPrefix); err != nil {
					return err
				}
				return rds.Sync(ctx, filestore.FilestorePrefix)
			},
		}
	}

	// Note: the dag service gets wrapped multiple times:
	// 1. providingDagService (if pinned strategy) - provides blocks as they're added
	// 2. syncDagService - ensures data persistence
	// 3. batchingDagService (in coreunix.Adder) - batches operations for efficiency
	fileAdder, err := coreunix.NewAdder(ctx, pinning, addblockstore, syncDserv)
	if err != nil {
		return path.ImmutablePath{}, err
	}

	fileAdder.Chunker = settings.Chunker
	if settings.Events != nil {
		fileAdder.Out = settings.Events
		fileAdder.Progress = settings.Progress
	}
	fileAdder.Pin = settings.Pin && !settings.OnlyHash
	if settings.Pin {
		fileAdder.PinName = settings.PinName
	}
	fileAdder.Silent = settings.Silent
	fileAdder.RawLeaves = settings.RawLeaves
	if settings.MaxFileLinksSet {
		fileAdder.MaxLinks = settings.MaxFileLinks
	}
	if settings.MaxDirectoryLinksSet {
		fileAdder.MaxDirectoryLinks = settings.MaxDirectoryLinks
	}

	if settings.MaxHAMTFanoutSet {
		fileAdder.MaxHAMTFanout = settings.MaxHAMTFanout
	}
	if settings.SizeEstimationModeSet {
		fileAdder.SizeEstimationMode = settings.SizeEstimationMode
	}
	fileAdder.NoCopy = settings.NoCopy
	fileAdder.CidBuilder = prefix
	fileAdder.PreserveMode = settings.PreserveMode
	fileAdder.PreserveMtime = settings.PreserveMtime
	fileAdder.FileMode = settings.Mode
	fileAdder.FileMtime = settings.Mtime
	if settings.IncludeEmptyDirsSet {
		fileAdder.IncludeEmptyDirs = settings.IncludeEmptyDirs
	}

	switch settings.Layout {
	case options.BalancedLayout:
		// Default
	case options.TrickleLayout:
		fileAdder.Trickle = true
	default:
		return path.ImmutablePath{}, fmt.Errorf("unknown layout: %d", settings.Layout)
	}
	if settings.Inline {
		fileAdder.CidBuilder = cidutil.InlineBuilder{
			Builder: fileAdder.CidBuilder,
			Limit:   settings.InlineLimit,
		}
	}

	if settings.OnlyHash {
		md := dagtest.Mock()
		emptyDirNode := ft.EmptyDirNode()
		// Use the same prefix for the "empty" MFS root as for the file adder.
		err := emptyDirNode.SetCidBuilder(fileAdder.CidBuilder)
		if err != nil {
			return path.ImmutablePath{}, err
		}
		// MFS root for OnlyHash mode: provider is nil since we're not storing/providing anything
		mr, err := mfs.NewRoot(ctx, md, emptyDirNode, nil, nil)
		if err != nil {
			return path.ImmutablePath{}, err
		}

		fileAdder.SetMfsRoot(mr)
	}

	nd, err := fileAdder.AddAllAndPin(ctx, files)
	if err != nil {
		return path.ImmutablePath{}, err
	}

	return path.FromCid(nd.Cid()), nil
}
func (api *UnixfsAPI) Get(ctx context.Context, p path.Path) (files.Node, error) {
	ctx, span := tracing.Span(ctx, "CoreAPI.UnixfsAPI", "Get", trace.WithAttributes(attribute.String("path", p.String())))
	defer span.End()

	ses := api.core().getSession(ctx)

	nd, err := ses.ResolveNode(ctx, p)
	if err != nil {
		return nil, err
	}

	return unixfile.NewUnixfsFile(ctx, ses.dag, nd)
}
// Ls returns the contents of the IPFS or IPNS object at path p, with each
// entry in the format:
// `<link base58 hash> <link size in bytes> <link name>`
func (api *UnixfsAPI) Ls(ctx context.Context, p path.Path, out chan<- coreiface.DirEntry, opts ...options.UnixfsLsOption) error {
	ctx, span := tracing.Span(ctx, "CoreAPI.UnixfsAPI", "Ls", trace.WithAttributes(attribute.String("path", p.String())))
	defer span.End()

	defer close(out)

	settings, err := options.UnixfsLsOptions(opts...)
	if err != nil {
		return err
	}

	span.SetAttributes(attribute.Bool("resolvechildren", settings.ResolveChildren))

	ses := api.core().getSession(ctx)
	uses := (*UnixfsAPI)(ses)

	dagnode, err := ses.ResolveNode(ctx, p)
	if err != nil {
		return err
	}

	dir, err := uio.NewDirectoryFromNode(ses.dag, dagnode)
	if err != nil {
		if errors.Is(err, uio.ErrNotADir) {
			return uses.lsFromLinks(ctx, dagnode.Links(), settings, out)
		}
		return err
	}

	return uses.lsFromDirLinks(ctx, dir, settings, out)
}
func (api *UnixfsAPI) processLink(ctx context.Context, linkres ft.LinkResult, settings *options.UnixfsLsSettings) (coreiface.DirEntry, error) {
	ctx, span := tracing.Span(ctx, "CoreAPI.UnixfsAPI", "ProcessLink")
	defer span.End()
	if linkres.Link != nil {
		span.SetAttributes(attribute.String("linkname", linkres.Link.Name), attribute.String("cid", linkres.Link.Cid.String()))
	}

	if linkres.Err != nil {
		return coreiface.DirEntry{}, linkres.Err
	}

	lnk := coreiface.DirEntry{
		Name: linkres.Link.Name,
		Cid:  linkres.Link.Cid,
	}

	switch lnk.Cid.Type() {
	case cid.Raw:
		// No need to check with raw leaves
		lnk.Type = coreiface.TFile
		lnk.Size = linkres.Link.Size
	case cid.DagProtobuf:
		if settings.ResolveChildren {
			linkNode, err := linkres.Link.GetNode(ctx, api.dag)
			if err != nil {
				return coreiface.DirEntry{}, err
			}

			if pn, ok := linkNode.(*merkledag.ProtoNode); ok {
				d, err := ft.FSNodeFromBytes(pn.Data())
				if err != nil {
					return coreiface.DirEntry{}, err
				}
				switch d.Type() {
				case ft.TFile, ft.TRaw:
					lnk.Type = coreiface.TFile
				case ft.THAMTShard, ft.TDirectory, ft.TMetadata:
					lnk.Type = coreiface.TDirectory
				case ft.TSymlink:
					lnk.Type = coreiface.TSymlink
					lnk.Target = string(d.Data())
				}
				if !settings.UseCumulativeSize {
					lnk.Size = d.FileSize()
				}
				lnk.Mode = d.Mode()
				lnk.ModTime = d.ModTime()
			}
		}

		if settings.UseCumulativeSize {
			lnk.Size = linkres.Link.Size
		}
	}

	return lnk, nil
}
func (api *UnixfsAPI) lsFromDirLinks(ctx context.Context, dir uio.Directory, settings *options.UnixfsLsSettings, out chan<- coreiface.DirEntry) error {
	for l := range dir.EnumLinksAsync(ctx) {
		dirEnt, err := api.processLink(ctx, l, settings) // TODO: perf: processing can be done in background and in parallel
		if err != nil {
			return err
		}
		select {
		case out <- dirEnt:
		case <-ctx.Done():
			return nil
		}
	}
	return nil
}

func (api *UnixfsAPI) lsFromLinks(ctx context.Context, ndlinks []*ipld.Link, settings *options.UnixfsLsSettings, out chan<- coreiface.DirEntry) error {
	// Create links channel large enough to not block when writing to out is slower.
	links := make(chan coreiface.DirEntry, len(ndlinks))
	errs := make(chan error, 1)
	go func() {
		defer close(links)
		defer close(errs)
		for _, l := range ndlinks {
			lr := ft.LinkResult{Link: &ipld.Link{Name: l.Name, Size: l.Size, Cid: l.Cid}}
			lnk, err := api.processLink(ctx, lr, settings) // TODO: can be parallel if settings.Async
			if err != nil {
				errs <- err
				return
			}
			select {
			case links <- lnk:
			case <-ctx.Done():
				return
			}
		}
	}()

	for lnk := range links {
		out <- lnk
	}
	return <-errs
}
func (api *UnixfsAPI) core() *CoreAPI {
	return (*CoreAPI)(api)
}

// syncDagService is used by the Adder to ensure blocks get persisted to the underlying datastore
type syncDagService struct {
	ipld.DAGService
	syncFn func() error
}

func (s *syncDagService) Sync() error {
	return s.syncFn()
}

type providingDagService struct {
	ipld.DAGService
	provider.MultihashProvider
}

func (pds *providingDagService) Add(ctx context.Context, n ipld.Node) error {
	if err := pds.DAGService.Add(ctx, n); err != nil {
		return err
	}
	// Provider errors are logged but not propagated.
	// We don't want DAG operations to fail due to providing issues.
	// The user's data is still stored successfully even if the
	// announcement to the routing system fails temporarily.
	if err := pds.StartProviding(false, n.Cid().Hash()); err != nil {
		log.Errorf("failed to provide new block: %s", err)
	}
	return nil
}

func (pds *providingDagService) AddMany(ctx context.Context, nds []ipld.Node) error {
	if err := pds.DAGService.AddMany(ctx, nds); err != nil {
		return err
	}
	keys := make([]mh.Multihash, len(nds))
	for i, n := range nds {
		keys[i] = n.Cid().Hash()
	}
	// Same error handling philosophy as Add(): log but don't fail.
	if err := pds.StartProviding(false, keys...); err != nil {
		log.Errorf("failed to provide new blocks: %s", err)
	}
	return nil
}

var _ ipld.DAGService = (*providingDagService)(nil)