* fix(autotls): store certificates at the location derived from the repo path
* docs(autotls): cert storage and other caveats
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
(cherry picked from commit 1ca0ae0af6)
Allow configuring the maximum block size up to which the bitswap server replaces a WantHave with a WantBlock, using the Internal.Bitswap.WantHaveReplaceSize config option. For blocks up to this size (in bytes), a want-have request is answered with the block itself. Setting the size to 0 disables this replacement, which means block sizes are not read for WantHave requests.
See ipfs/boxo#672 for more details
Updated boxo to a version that includes PR 672
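For example, the threshold could be tuned via the CLI like this (the 1024-byte value is purely illustrative, not a recommendation):

```
# Replace want-have with want-block for blocks up to 1024 bytes
ipfs config --json Internal.Bitswap.WantHaveReplaceSize 1024

# Disable the replacement entirely
ipfs config --json Internal.Bitswap.WantHaveReplaceSize 0
```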
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
* feat: libp2p.EnableAutoNATv2
Part of https://github.com/ipfs/kubo/issues/10091
We include a flag that allows disabling AutoNAT v2 in case there
are issues with it.
* docs: EnableAutoNATv2
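For context, `EnableAutoNATv2` is the underlying go-libp2p option; a minimal sketch of turning it on for a bare libp2p host (not Kubo's actual wiring) looks like this:

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// EnableAutoNATv2 enables the AutoNAT v2 client and service,
	// which probe whether the host's addresses are publicly reachable.
	host, err := libp2p.New(libp2p.EnableAutoNATv2())
	if err != nil {
		panic(err)
	}
	defer host.Close()
	fmt.Println("host with AutoNAT v2 enabled:", host.ID())
}
```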
Most of the removed options are many years old, and they were all removed in past iterations of Kubo. Some options were marked as removed in config.md, but we still had a warning in the code to let users know they had been removed.
I think enough time and enough Kubo releases have passed to alert users about all of these options. It is good to keep them in config.md for now so that people can still check, but I think it's time to remove them from the code itself.
Updates: #9396. Closes: #6831. Closes: #6208.
Currently the Graphsync server is not widely used due to lack of compatible software.
After many years, we have been unable to find any production software that uses the Graphsync server in Kubo.
Some exists in the Filecoin ecosystem, but we are not aware of any that is used with Kubo.
Even in Filecoin, Graphsync is no longer the only data-transfer solution available, as it once was.
`go-graphsync` is also developed on many concurrent branches.
The Graphsync specification is less clear than the trustless gateway one and lacks a complete conformance test suite that any implementation can run.
It is not easily extensible either, because selectors are too limited for interesting queries without sideloading ADLs, which are for now hardcoded solutions.
Finally, Kubo is consistently one of the fastest projects to update to a new go-libp2p release.
This means the burden of tracking go-libp2p changes in go-graphsync falls on us, or else Kubo cannot compile, even though almost no users use this feature.
We are therefore removing the Graphsync server experiment.
For people who want an alternative, we would like you to try the Trustless-Gateway-over-Libp2p experiment instead: the protocol is simpler (request-response based) and lets us reuse both clients and servers with minimal injection in the network layer.
If you think this is a mistake and we should bring it back, please address these points:
- Find a piece of open-source code that uses a Graphsync client to download data from Kubo.
- Why is Trustless-Gateway-over-Libp2p not a suitable replacement?
- Why is bitswap not a suitable replacement?
Implementation details such as go-graphsync performance versus boxo/gateway are not very interesting to us in this discussion unless the difference is really huge (in the range of 10x~100x+), because the gateway code is under heavy development and we would be interested in fixing these issues.
Fixes #8492
This introduces "nopfs" as a preloaded plugin into Kubo
with support for denylists from https://github.com/ipfs/specs/pull/383
It automatically makes Kubo watch *.deny files found in:
- /etc/ipfs/denylists
- $XDG_CONFIG_HOME/ipfs/denylists
- $IPFS_PATH/denylists
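For illustration, a minimal `example.deny` file placed in one of these directories could look like this (the CID and name below are placeholders, not real blocked content):

```
# Block a specific CID (placeholder)
/ipfs/bafybeihfg3d7rdltd43u3tfvncx7n5loqofbsobojcadtmokrljfthuc7y
# Block an IPNS name (placeholder)
/ipns/bad-example.com
```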
* test: Gateway.NoFetch and GatewayOverLibp2p
Adds missing tests for the "no fetch" gateways one can expose.
In both cases, offline mode is implemented by passing a custom
blockservice/exchange into the path resolver, which means the
global path resolver with the nopfs intercept is not used,
and content blocking does not happen on these gateways.
* fix: use offline path resolvers where appropriate
This fixes the problem described in
https://github.com/ipfs/kubo/pull/10161#issuecomment-1782175955
by adding explicit offline path resolvers backed by an offline
exchange, and using them in NoFetch gateways instead of the
default online ones.
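As a reminder, these no-fetch gateways are enabled via existing config flags, roughly like this (assuming the experimental `Experimental.GatewayOverLibp2p` key):

```
ipfs config --json Gateway.NoFetch true
ipfs config --json Experimental.GatewayOverLibp2p true
```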
---------
Co-authored-by: Henrique Dias <hacdias@gmail.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Mplex does not implement backpressure; our implementation will randomly reset streams if buffers overflow, instead of risking deadlocks.
In the past we had a bug where Kubo nodes would prefer mplex over yamux. Turning off mplex makes our connections to those nodes negotiate yamux.
Closes #9958
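Users who still depend on mplex can presumably re-enable it, at their own risk, by giving it a priority in the existing multiplexer config (the priority value below is illustrative; yamux remains preferred):

```
ipfs config --json Swarm.Transports.Multiplexers.Mplex 200
```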
* fix: mark ipns pubsub router DoNotWaitForSearchValue
That means if the DHT has finished searching and no one responded over pubsub *yet*, we will not spend 1 minute searching for no reason.
This also includes other error-handling bug fixes inside `go-libp2p-routing-helpers`.
Fixes: #9927
* routing: bring back the old IPNS behaviour
Stop making this configurable; let everything race like it used to.
This adds the ability to enable "optimistic provide" to the default
DHT client, which enables faster provides and reprovides.
For more information about optimistic provide, see:
https://protocollabs.notion.site/Optimistic-Provide-2c79745820fa45649d48de038516b814
Note that this feature only works when using non-custom router
types. For now it cannot be enabled on custom routers, to minimize
the footprint of this experimental feature. We intend to keep
testing it and improving the UX, which may or may not involve
adding configuration for it to custom routers. We also plan to
refactor/redesign custom routers more broadly, so we don't want
this to add more effort for maintainers and confusion for users.
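Assuming the experimental flag follows the feature's name, opting in would look roughly like this:

```
ipfs config --json Experimental.OptimisticProvide true
```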
To make it possible to easily override the path resolvers (e.g. via
plugins), this creates the resolvers as part of the Node rather than
creating them ad hoc.