* feat(libp2p): enable shared TCP listeners
* docs: switch mentions of /ws to /tcp/4001
* feat: AutoTLS.AutoWSS
This adds an AutoTLS.AutoWSS flag that is set to true by default.
It checks whether Addresses.Swarm contains an explicit /ws listener;
if none is found, it appends one for every /tcp listener.
This way existing TCP ports are reused without any extra configuration,
but we don't break users who already have a custom / explicit /ws
listener.
I also moved the logger around, so that Addresses.Swarm inspection
results are included in the `autotls` logger.
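For illustration, a minimal config sketch (addresses and values assumed, not taken from this changelog) where AutoWSS kicks in: there is no explicit /ws listener, so a WebSocket listener is appended for the /tcp/4001 entry, reusing the same port via the shared TCP listener.

```json
{
  "Addresses": {
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic-v1"
    ]
  },
  "AutoTLS": {
    "AutoWSS": true
  }
}
```

A node that already lists an explicit /ws (or /wss) multiaddr in Addresses.Swarm keeps its configuration unchanged.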
* chore: go-libp2p v0.38.1
https://github.com/libp2p/go-libp2p/releases/tag/v0.38.0
https://github.com/libp2p/go-libp2p/releases/tag/v0.38.1
* docs: AutoTLS.AutoWSS and go-libp2p v0.38.x
* chore: p2p-forge/client v0.2.0
https://github.com/ipshipyard/p2p-forge/releases/tag/v0.2.0
* fix: disable libp2p.ShareTCPListener() in PNET
* chore(ci): timeout sharness after 15m
average successful run is <9 minutes, no need to wait for 20
https://github.com/ipfs/kubo/actions/workflows/sharness.yml?query=is%3Asuccess
---------
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
* feat: expose BlockKeyCacheSize and enable WriteThrough when bloom filter disabled
* import/config: add BatchMaxSize and BatchMaxNodes
* config: make BlockKeyCacheSize an OptionalInteger
* config: add and wire datastore.WriteThrough option
* config: omitempty on BlockKeyCacheSize
* changelog: rewrite entry about new options for the datastore
* config: add docs for BatchMaxNodes and BatchMaxSize
* config: make WriteThrough an optional Flag
* changelog: improve description of new datastore/import options
* refactor: DefaultWriteThrough as bool
* chore: boxo v0.26.0
* docs: config and changelog fixes
* chore: update to boxo without goprocess
* Use boxo fix for registering metrics
* chore: switch to boxo main with PR 723
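For illustration, a hedged config sketch combining the new datastore and import options from the commits above (field placement inferred from the commit titles; the values are examples, not the documented defaults):

```json
{
  "Datastore": {
    "BlockKeyCacheSize": 65536,
    "WriteThrough": true
  },
  "Import": {
    "BatchMaxNodes": 128,
    "BatchMaxSize": 20971520
  }
}
```

BlockKeyCacheSize and WriteThrough are optional, matching the OptionalInteger/Flag changes listed above; omitted entries fall back to the defaults.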
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
* fix(autotls): store certificates at the location from the repo path
* docs(autotls): cert storage and other caveats
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
(cherry picked from commit 1ca0ae0af6)
Allow configuring the maximum block size at which the bitswap server replaces a WantHave with a WantBlock, via the Internal.Bitswap.WantHaveReplaceSize config item. This sets the maximum size of a block, in bytes, up to which a want-have is replaced with a want-block. Setting the size to 0 disables this replacement, meaning block sizes are not read for WantHave requests.
See ipfs/boxo#672 for more details
Updated boxo to a version that includes PR 672
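For illustration, a hedged config sketch (the 1024-byte value is an example, not necessarily the default):

```json
{
  "Internal": {
    "Bitswap": {
      "WantHaveReplaceSize": 1024
    }
  }
}
```

Setting the value to 0 turns the replacement off, as described above.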
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
* feat: libp2p.EnableAutoNATv2
Part of https://github.com/ipfs/kubo/issues/10091
We include a flag that allows disabling v2 in case there are issues
with it.
* docs: EnableAutoNATv2
Most of the removed options are many years old. In addition, they've all been removed in past iterations of Kubo. Some options were marked as removed in config.md, but we still had a warning in the code to let users know they had been removed.
I think enough time, and enough Kubo releases, have passed to alert users about all of these options. It is good to keep them in config.md for now so that people can still check, but I think it's time to remove them from the code itself.
Updates: #9396
Closes: #6831
Closes: #6208
Currently the Graphsync server is not widely used due to a lack of compatible software.
Many years have passed, yet we are unable to find any production software making use of the Graphsync server in Kubo.
Some exists in the Filecoin ecosystem, but we are not aware of uses with Kubo.
Even in Filecoin, Graphsync is no longer the only data-transfer solution available, as it might have been in the past.
`go-graphsync` is also developed on many concurrent branches.
The Graphsync specification is less clear than the Trustless Gateway one and lacks a complete conformance test suite that any implementation can run.
It is not easily extensible either, because selectors are too limited for interesting queries without sideloading ADLs, which for now are hardcoded solutions.
Finally, Kubo is consistently one of the fastest projects to update to a new go-libp2p release.
This means the burden of tracking go-libp2p changes in go-graphsync falls on us; otherwise Kubo cannot compile, even though almost no users use this feature.
We are therefore removing the Graphsync server experiment.
For people who want alternatives, please try the Trustless-Gateway-over-Libp2p experiment instead: the protocol is simpler (request-response based) and lets us reuse both clients and servers with minimal injection in the network layer.
If you think this is a mistake and we should bring it back, you should try to answer these points:
- Find a piece of open-source code which uses a Graphsync client to download data from Kubo.
- Why is Trustless-Gateway-over-Libp2p not suitable instead?
- Why is bitswap not suitable instead?
Implementation details such as go-graphsync performance vs boxo/gateway are not very interesting to us in this discussion unless the differences are really huge (in the range of 10x~100x+), because the gateway code is under heavy development and we would be interested in fixing these.
Fixes #8492
This introduces "nopfs" as a preloaded plugin into Kubo
with support for denylists from https://github.com/ipfs/specs/pull/383
It automatically makes Kubo watch *.deny files found in:
- /etc/ipfs/denylists
- $XDG_CONFIG_HOME/ipfs/denylists
- $IPFS_PATH/denylists
* test: Gateway.NoFetch and GatewayOverLibp2p
adds missing tests for the "no fetch" gateways one can expose;
in both cases the offline mode is implemented by passing a custom
blockservice/exchange into the path resolver, which means the
global path resolver with the nopfs intercept is not used,
and content blocking does not happen on these gateways.
* fix: use offline path resolvers where appropriate
this fixes the problem described in
https://github.com/ipfs/kubo/pull/10161#issuecomment-1782175955
by adding explicit offline path resolvers that are backed
by an offline exchange, and using them in NoFetch gateways
instead of the default online ones
---------
Co-authored-by: Henrique Dias <hacdias@gmail.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Mplex does not implement backpressure; our implementation randomly resets streams if buffers overflow, instead of risking deadlocks.
In the past we had a bug where Kubo nodes would prefer mplex over yamux. Turning off mplex makes our connections to those nodes negotiate yamux.
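For context, a hedged sketch of what the Swarm.Transports.Multiplexers config section looks like with mplex gone (the priority value is illustrative); with no Mplex entry, connections negotiate yamux:

```json
{
  "Swarm": {
    "Transports": {
      "Multiplexers": {
        "Yamux": 100
      }
    }
  }
}
```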
Closes #9958