* feat: use Swarm.EnableHolePunching flag within libp2p
* docs: Swarm.EnableHolePunching
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Co-authored-by: Adin Schmahmann <adin.schmahmann@gmail.com>
* plumb through go-datastore context changes
* update go-libp2p to v0.16.0
* use LIBP2P_TCP_REUSEPORT instead of IPFS_REUSEPORT
* use relay config
* make the deprecation notice match the go-ipfs-config key
* docs(config): circuit relay v2
* docs(config): fix links and headers
* feat(config): Internal.Libp2pForceReachability
This switches to a config that supports setting and reading the
Internal.Libp2pForceReachability OptionalString flag.
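A minimal sketch of how such a flag can be translated into go-libp2p options; the config field access, the empty-string default, and the accepted values are assumptions for illustration, while `ForceReachabilityPublic`/`ForceReachabilityPrivate` are existing go-libp2p options:

```go
package reachability

import (
	config "github.com/ipfs/go-ipfs-config"
	"github.com/libp2p/go-libp2p"
)

// forcedReachabilityOpts maps the Internal.Libp2pForceReachability
// OptionalString onto libp2p constructor options. The "public"/"private"
// values and the empty default are assumptions in this sketch.
func forcedReachabilityOpts(cfg *config.Config) []libp2p.Option {
	var opts []libp2p.Option
	switch cfg.Internal.Libp2pForceReachability.WithDefault("") {
	case "public":
		opts = append(opts, libp2p.ForceReachabilityPublic())
	case "private":
		opts = append(opts, libp2p.ForceReachabilityPrivate())
	}
	return opts
}
```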
* use configuration option for static relays
* chore: go-ipfs-config v0.18.0
https://github.com/ipfs/go-ipfs-config/releases/tag/v0.18.0
* feat: circuit v1 migration prompt when Swarm.EnableRelayHop is set (#8559)
* exit when Swarm.EnableRelayHop is set
* docs: Experimental.ShardingEnabled migration
This ensures existing users of the global sharding experiment get notified
that the flag no longer works and that autosharding now happens automatically.
For people who NEED to keep the old behavior (e.g. because there is no time to
migrate today) there is a note about restoring it with
`UnixFSShardingSizeThreshold`.
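For illustration, a sketch of how that threshold might be read from the config and converted to bytes; the `Internal.UnixFSShardingSizeThreshold` field, the "256KiB" default, and the parsing helper are assumptions in this sketch, not the actual go-ipfs code:

```go
package sharding

import (
	humanize "github.com/dustin/go-humanize"
	config "github.com/ipfs/go-ipfs-config"
)

// shardingThreshold reads the UnixFS sharding cutoff (a human-readable size
// such as "256KiB" or "1G") and returns it in bytes. Setting a huge value
// effectively disables automatic sharding; a tiny value mimics the old
// always-shard experiment.
func shardingThreshold(cfg *config.Config) (uint64, error) {
	raw := cfg.Internal.UnixFSShardingSizeThreshold.WithDefault("256KiB") // assumed default
	return humanize.ParseBytes(raw)
}
```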
* chore: add dag-jose code to the cid command output
* add support for setting automatic unixfs sharding threshold from the config
* test: have tests use low cutoff for sharding to mimic old behavior
* test: change error message to match the current error
* test: Add automatic sharding/unsharding tests (#8547)
* test: refactored naming in the sharding sharness tests to make more sense
* ci: set interop test executor to convenience image for Go1.16 + Node
* ci: use interop master
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Co-authored-by: Marten Seemann <martenseemann@gmail.com>
Co-authored-by: Gus Eggert <gus@gus.dev>
Co-authored-by: Lucas Molas <schomatis@gmail.com>
* feat: extract Bitswap fx initialization to its own file
* chore: bump go-bitswap dependency
* feat: bump the go-ipfs-config dependency and use the new Internal.Bitswap configuration options. Add documentation for the new OptionalInteger config type as well as the Internal.Bitswap options (see the sketch below).
* docs(docs/config.md): move the table of contents towards the top of the document and update it
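A sketch of how an OptionalInteger behaves when unset; the `Internal.Bitswap.EngineTaskWorkerCount` field and the default of 8 are assumptions used only to illustrate the fallback semantics:

```go
package bitswapcfg

import config "github.com/ipfs/go-ipfs-config"

// engineWorkers returns the configured engine worker count, falling back to
// a built-in default when the OptionalInteger is absent from the config file.
// The field name and default value are assumptions in this sketch.
func engineWorkers(cfg *config.Config) int64 {
	if cfg.Internal.Bitswap == nil {
		return 8
	}
	return cfg.Internal.Bitswap.EngineTaskWorkerCount.WithDefault(8)
}
```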
Co-authored-by: Petar Maymounkov <petarm@gmail.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Co-authored-by: Gus Eggert <877588+guseggert@users.noreply.github.com>
* feat: switch to using go-ipld-prime for codecs, path resolution, and the `dag put/get` commands
* fix: `dag put/get` not roundtripping due to an extra newline being added (https://github.com/ipfs/go-ipfs/issues/3503)
More detailed information is in the CHANGELOG.md file. At a very high level:
* IPLD codecs (and their plugins) must use go-ipld-prime
* Added support for the dag-json codec
* `dag get/put` use IPLD codec names from the multicodec table
* `dag get` defaults to dag-json output instead of json, but other codecs can be requested
* Data model pathing can be achieved using the /ipld prefix. For example, you can use `/ipld/QmFoo/Links/0/Hash` to traverse a DagPB node
* With `dag get/put`, the DagPB field names have been changed to match the ones in the protobuf listed in the specification (see the go-ipld-prime sketch below)
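As a rough illustration of the go-ipld-prime APIs these commands now build on (a sketch only, not the actual `dag get/put` code), decoding a dag-json document and doing a data-model lookup looks like this:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	// Decode a dag-json document into a go-ipld-prime data model node.
	nb := basicnode.Prototype.Any.NewBuilder()
	input := []byte(`{"Data":"example","Links":[]}`)
	if err := dagjson.Decode(nb, bytes.NewReader(input)); err != nil {
		panic(err)
	}
	n := nb.Build()

	// Traverse the node with data-model lookups, the same kind of steps an
	// /ipld/... path such as Links/0/Hash resolves through.
	links, err := n.LookupByString("Links")
	if err != nil {
		panic(err)
	}
	fmt.Println("number of links:", links.Length())
}
```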
Co-authored-by: hannahhoward <hannah@hannahhoward.net>
Co-authored-by: Daniel Martí <mvdan@mvdan.cc>
Co-authored-by: acruikshank <acruikshank@example.com>
Co-authored-by: Steven Allen <steven@stebalien.com>
Co-authored-by: Will Scott <will.scott@protocol.ai>
Co-authored-by: Will Scott <will@cypherpunk.email>
Co-authored-by: Rod Vagg <rod@vagg.org>
Co-authored-by: Adin Schmahmann <adin.schmahmann@gmail.com>
Co-authored-by: Eric Myhre <hash@exultant.us>
The batched provider system is enabled when the experimental AcceleratedDHTClient is enabled.
There is also an `ipfs stats provide` command that reports statistics about the providing/reproviding system while the batched provider is in use.
MVP for #6097
This feature will repeatedly reconnect (with a randomized exponential backoff)
to peers in a configured set of "peered" peers; a sketch of such a backoff loop follows the list below.
In the future, this should be extended to:
1. Include a CLI for modifying this list at runtime.
2. Include additional options for peers we want to _protect_ but not connect to.
3. Allow configuring timeouts, backoff, etc.
4. Allow groups? Possibly through Textile Threads.
5. Allow for runtime-only peering rules.
6. Different reconnect policies.
But this MVP should be a significant step forward.
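Purely as an illustration of the randomized exponential backoff described above (not the actual peering implementation; the base delay, cap, jitter, and `connect` callback are assumptions):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// reconnectLoop keeps retrying connect until it succeeds or ctx is done,
// sleeping for an exponentially growing, randomly jittered interval between
// attempts. The 5s base, 10m cap, and +/-50% jitter are illustrative only.
func reconnectLoop(ctx context.Context, connect func(context.Context) error) error {
	backoff := 5 * time.Second
	const maxBackoff = 10 * time.Minute
	for {
		if err := connect(ctx); err == nil {
			return nil
		}
		// Randomize the delay so many nodes don't retry in lockstep.
		delay := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	err := reconnectLoop(ctx, func(context.Context) error {
		return errors.New("peer unreachable") // stand-in for a real dial attempt
	})
	fmt.Println("stopped retrying:", err)
}
```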
1. Enable AutoNATService on _all_ nodes by default. If it's an issue, we can
disable it in RC3, but this will give us the best testing results.
2. Expose options to configure AutoNAT rate limiting.