* feat: fast provide
* Check error from provideRoot
* do not provide if nil router
* fix(commands): prevent panic from typed nil DHTClient interface
Fixes a panic that occurred when ipfsNode.DHTClient was a non-nil interface
containing a nil pointer value (a typed nil). This happened when Routing.Type=delegated
or when using HTTP-only routing without a DHT.
The panic occurred because:
- Go interfaces can be non-nil while containing a nil pointer value
- A simple `if DHTClient == nil` check passes, but calling methods on it panics
- Example: `(*ddht.DHT)(nil)` stored in the interface passes the nil check (see the sketch below)
Solution:
- Add a HasActiveDHTClient() method that checks both the interface and the concrete value
- Update all 7 call sites to use the proper check before DHT operations
- Rename provideRoot → provideCIDSync for clarity
- Add structured logging with "fast-provide" prefix for easier filtering
- Add tests covering nil cases and valid DHT configurations
Fixes: https://github.com/ipfs/kubo/pull/11046#issuecomment-3525313349
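For illustration, a minimal, self-contained sketch of the typed-nil pitfall and of the kind of check HasActiveDHTClient performs. The `Router` interface and `DHT` type below are simplified stand-ins for illustration, not the actual kubo types:

```go
package main

import "fmt"

// DHT stands in for a concrete client such as *ddht.DHT.
type DHT struct{}

func (d *DHT) Provide(cid string) error {
	if d == nil {
		panic("Provide called on nil DHT")
	}
	fmt.Println("providing", cid)
	return nil
}

// Router is a simplified stand-in for the DHTClient interface field.
type Router interface {
	Provide(cid string) error
}

// hasActiveDHTClient mirrors the idea behind HasActiveDHTClient:
// reject both a nil interface and an interface holding a typed nil.
func hasActiveDHTClient(r Router) bool {
	if r == nil {
		return false
	}
	if d, ok := r.(*DHT); ok && d == nil {
		return false // typed nil: the interface is non-nil, the value is nil
	}
	return true
}

func main() {
	var client Router = (*DHT)(nil) // typed nil stored in an interface

	fmt.Println(client == nil)              // false: the naive check passes
	fmt.Println(hasActiveDHTClient(client)) // false: the stricter check catches it
	// Calling client.Provide("bafy...") here would panic inside the method body.
}
```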
* feat(add): split fast-provide into two flags for async/sync control
Renames --fast-provide to --fast-provide-root and adds --fast-provide-wait
to give users control over synchronous vs asynchronous providing behavior.
Changes:
- --fast-provide-root (default: true): enables immediate root CID providing
- --fast-provide-wait (default: false): controls whether to block until complete
- Default behavior: async provide (fast, non-blocking)
- Opt-in: --fast-provide-wait for guaranteed discoverability (slower, blocking)
- Can disable with --fast-provide-root=false to rely on background reproviding
Implementation:
- Async mode: launches a goroutine with a detached context for fire-and-forget providing (sketched at the end of this commit message)
- Added a 10-second timeout to prevent hanging on network issues
- Timeout aligns with other kubo operations (ping, DNS resolve, p2p)
- Sufficient for the DHT with the sweep provider or accelerated client
- Sync mode: blocks on provideCIDSync until completion (uses req.Context)
- Improved structured logging with the "fast-provide-root:" prefix
- Removed the redundant "root CID" from messages (already in the prefix)
- Clear async/sync distinction in log messages
- Added FAST PROVIDE OPTIMIZATION section to ipfs add --help explaining:
- The problem: background queue takes time, content not immediately discoverable
- The solution: extra immediate announcement of just the root CID
- The benefit: peers can find content right away while queue handles rest
- Usage: async by default, --fast-provide-wait for guaranteed completion
Changelog:
- Added highlight section for fast root CID providing feature
- Updated TOC and overview
- Included usage examples with clear comments explaining each mode
- Emphasized this is extra announcement independent of background queue
The feature works best with the sweep provider and the accelerated DHT client,
where provide operations are significantly faster.
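As a rough sketch of the async/sync split described above: the fire-and-forget path detaches from the request context and bounds itself with a 10-second timeout, while the blocking path stays tied to the request context. `provideCIDSync` here is a simplified placeholder, not the real kubo signature:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// provideCIDSync is a placeholder for the real provide call, which would
// announce the root CID to the routing system and block until done.
func provideCIDSync(ctx context.Context, cid string) error {
	select {
	case <-time.After(100 * time.Millisecond): // pretend network work
		fmt.Println("fast-provide-root: provided", cid)
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// fastProvideRoot mirrors the flag behavior: block on the request context
// when wait is true, otherwise fire and forget on a detached context with
// a 10-second timeout so a flaky network cannot leak the goroutine.
func fastProvideRoot(reqCtx context.Context, cid string, wait bool) error {
	if wait {
		return provideCIDSync(reqCtx, cid) // sync: tied to the request lifetime
	}
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := provideCIDSync(ctx, cid); err != nil {
			fmt.Println("fast-provide-root: async provide failed:", err)
		}
	}()
	return nil
}

func main() {
	_ = fastProvideRoot(context.Background(), "bafyExampleRootCID", false) // async default
	_ = fastProvideRoot(context.Background(), "bafyExampleRootCID", true)  // --fast-provide-wait
	time.Sleep(300 * time.Millisecond) // give the async goroutine time to finish in this demo
}
```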
* fix(add): respect Provide config in fast-provide-root
fast-provide-root should honor the same config settings as the regular
provide system:
- skip when Provide.Enabled is false
- skip when Provide.DHT.Interval is 0
- respect Provide.Strategy (all/pinned/roots/mfs/combinations)
This ensures fast-provide only runs when appropriate, based on the user's
configuration and the nature of the content being added (pinned vs. unpinned,
added to MFS or not); a sketch of this gating follows below.
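A loose sketch of the gating logic, using a simplified config struct whose field names mirror the keys above; the actual kubo types and the full strategy semantics are not reproduced here:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// provideConfig is a simplified stand-in for the relevant Provide settings.
type provideConfig struct {
	Enabled     bool
	DHTInterval time.Duration // Provide.DHT.Interval; 0 disables providing
	Strategy    string        // e.g. "all", "pinned", "roots", "mfs", or combinations like "pinned+mfs"
}

// shouldFastProvide mirrors the checks in the bullets above: skip when
// providing is disabled or the interval is zero, and only provide when
// the added content matches the configured strategy.
func shouldFastProvide(cfg provideConfig, pinned, inMFS bool) bool {
	if !cfg.Enabled || cfg.DHTInterval == 0 {
		return false
	}
	for _, part := range strings.Split(cfg.Strategy, "+") {
		switch part {
		case "all", "": // treat an empty strategy as "all" for this sketch
			return true
		case "pinned", "roots":
			if pinned {
				return true
			}
		case "mfs":
			if inMFS {
				return true
			}
		}
	}
	return false
}

func main() {
	cfg := provideConfig{Enabled: true, DHTInterval: 22 * time.Hour, Strategy: "pinned+mfs"}
	fmt.Println(shouldFastProvide(cfg, true, false))  // true: pinned content matches "pinned"
	fmt.Println(shouldFastProvide(cfg, false, false)) // false: neither pinned nor in MFS
}
```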
* Update core/commands/add.go
---------
Co-authored-by: gammazero <11790789+gammazero@users.noreply.github.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
This adds the ability to enable "optimistic provide" on the default
DHT client, making provides and reprovides faster.
For more information about optimistic provide, see:
https://protocollabs.notion.site/Optimistic-Provide-2c79745820fa45649d48de038516b814
Note that this feature only works with non-custom router
types. It does not include the ability to enable optimistic provide
on custom routers for now, to minimize the footprint of this
experimental feature. We intend to continue testing this and improving
the UX, which may or may not involve adding configuration for it to
custom routers. We also plan to refactor/redesign custom routers
more broadly, so we don't want this to add extra effort for maintainers
or confusion for users.
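For reference, enabling optimistic provide on a standalone go-libp2p-kad-dht client looks roughly like the sketch below. The `dht.EnableOptimisticProvide()` option name and the no-context `libp2p.New` signature are assumptions about recent library versions, so check the versions you build against:

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx := context.Background()

	// Assumes a recent go-libp2p where New takes only options (no context).
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Assumption: go-libp2p-kad-dht exposes an EnableOptimisticProvide() option.
	// With it, provides complete once records are stored at peers that are
	// likely closest to the key, instead of waiting for the full lookup.
	d, err := dht.New(ctx, h, dht.EnableOptimisticProvide())
	if err != nil {
		panic(err)
	}
	defer d.Close()

	fmt.Println("DHT client ready:", d.PeerID())
}
```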
* plumb through go-datastore context changes
* update go-libp2p to v0.16.0
* use LIBP2P_TCP_REUSEPORT instead of IPFS_REUSEPORT
* use relay config
* make the deprecation notice match the go-ipfs-config key
* docs(config): circuit relay v2
* docs(config): fix links and headers
* feat(config): Internal.Libp2pForceReachability
This switches to config that supports setting and reading
the Internal.Libp2pForceReachability OptionalString flag
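As an illustration of what forcing reachability means at the libp2p layer, a hedged sketch that maps a config string to the `ForceReachabilityPublic`/`ForceReachabilityPrivate` options; the accepted values and the no-context `libp2p.New` signature are assumptions for recent go-libp2p versions:

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
)

// reachabilityOption maps an Internal.Libp2pForceReachability-style string
// ("public", "private", or unset) to the corresponding go-libp2p option.
// The accepted values here are an assumption for illustration.
func reachabilityOption(val string) []libp2p.Option {
	switch val {
	case "public":
		return []libp2p.Option{libp2p.ForceReachabilityPublic()}
	case "private":
		return []libp2p.Option{libp2p.ForceReachabilityPrivate()}
	default:
		return nil // unset: let AutoNAT determine reachability
	}
}

func main() {
	opts := reachabilityOption("private")

	// Assumes a recent go-libp2p where New takes only options (no context).
	h, err := libp2p.New(opts...)
	if err != nil {
		panic(err)
	}
	defer h.Close()

	fmt.Println("host created with forced reachability, id:", h.ID())
}
```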
* use configuration option for static relays
* chore: go-ipfs-config v0.18.0
https://github.com/ipfs/go-ipfs-config/releases/tag/v0.18.0
* feat: circuit v1 migration prompt when Swarm.EnableRelayHop is set (#8559)
* exit when Swarm.EnableRelayHop is set
* docs: Experimental.ShardingEnabled migration
This ensures existing users of the global sharding experiment get notified
that the flag no longer works and that autosharding now happens automatically.
For people who NEED to keep the old behavior (e.g. they have no time to
migrate today), there is a note about restoring it with
`UnixFSShardingSizeThreshold`.
* chore: add dag-jose code to the cid command output
* add support for setting the automatic unixfs sharding threshold from the config
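A hedged sketch of how a size-threshold cutoff like `UnixFSShardingSizeThreshold` can drive the switch to a HAMT-sharded directory; the types and the size estimate below are simplified placeholders, not the actual go-unixfs implementation:

```go
package main

import "fmt"

// directory is a simplified placeholder for a UnixFS basic directory.
type directory struct {
	links map[string]int // entry name -> rough serialized size of the link
}

// estimatedSize is a stand-in for the size estimate the real code keeps
// for the serialized directory node.
func (d *directory) estimatedSize() int {
	total := 0
	for name, linkSize := range d.links {
		total += len(name) + linkSize
	}
	return total
}

// needsSharding applies the configured threshold: once the estimated
// serialized size exceeds it, the directory should be converted to a
// HAMT-sharded layout (and converted back if it shrinks below the cutoff).
func needsSharding(d *directory, threshold int) bool {
	return d.estimatedSize() > threshold
}

func main() {
	const threshold = 256 * 1024 // e.g. a 256 KiB cutoff taken from the config

	dir := &directory{links: map[string]int{}}
	for i := 0; i < 10000; i++ {
		dir.links[fmt.Sprintf("file-%05d", i)] = 40 // pretend each link serializes to ~40 bytes
	}

	fmt.Println("estimated size:", dir.estimatedSize())
	fmt.Println("switch to HAMT shard:", needsSharding(dir, threshold))
}
```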
* test: have tests use a low cutoff for sharding to mimic the old behavior
* test: change error message to match the current error
* test: Add automatic sharding/unsharding tests (#8547)
* test: refactored naming in the sharding sharness tests to make more sense
* ci: set interop test executor to convenience image for Go1.16 + Node
* ci: use interop master
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Co-authored-by: Marten Seemann <martenseemann@gmail.com>
Co-authored-by: Gus Eggert <gus@gus.dev>
Co-authored-by: Lucas Molas <schomatis@gmail.com>
Otherwise, we could end up with only DHT clients and never re-bootstrap.
I've left the default go-ipfs bootstrapping code in for now as it's technically
possible to disable the DHT entirely.