* feat(bootstrap): save connected peers as backup temporary bootstrap ones
* fix: do not add duplicate oldSavedPeers; do not use tags; reuse randomizeList
* test: add regression test
* chore: add changelog
---------
Co-authored-by: Henrique Dias <hacdias@gmail.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
GitHub Actions recently changed its Docker build implementation, and the
new output format differs from the old one, causing the tests that parse
that output to fail.
This switches the test to stop parsing the Docker build output. The
parsing existed only to extract the image ID while still showing the
build logs. A better way to show the logs and still know the image ID is
to tag the image at build time, which is what the test now does.
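The actual change lives in a sharness shell script; purely as an
illustration of the idea in Go, tagging at build time looks roughly like
the sketch below (the tag name and commands are made up, not what the
test uses):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical tag; the real test picks its own name.
	tag := "kubo-sharness:latest"

	// Build and tag the image. The build logs stream straight through, and
	// we never need to scrape them for an image ID because the tag already
	// identifies the image.
	build := exec.Command("docker", "build", "-t", tag, ".")
	build.Stdout = os.Stdout
	build.Stderr = os.Stderr
	if err := build.Run(); err != nil {
		log.Fatalf("docker build failed: %v", err)
	}

	// Later steps refer to the image by its tag instead of a parsed ID.
	inspect := exec.Command("docker", "image", "inspect", tag)
	if err := inspect.Run(); err != nil {
		log.Fatalf("image %s not found: %v", tag, err)
	}
}
```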
This also renames the Docker tests so that they run earlier, which takes
better advantage of the fact that the sharness tests run in parallel. The
Docker tests are quite long, and since they currently sit at the end of
the list, the test runner has nothing left to run in parallel while they
execute.
The multinode test is effectively the same as the twonode test, and it
has problems of its own: it *looks* like it tests the WebSocket transport
via the "listentype,ws" IPTB attribute, but that attribute doesn't
actually exist in ipfs/iptb-plugins, so it does nothing and the test just
runs the same scenario twice (with Yamux disabled). Furthermore, that is
the same scenario as the mplex twonode test. So this removes the
redundant multinode test entirely.
Also, this removes the part of the twonode test that checks the amount of
data transferred over Bitswap. That number is an implementation detail of
Bitswap: it depends on algorithmic details of how Bitswap works and has
nothing to do with transports, so it is not appropriate to assert on it
in an end-to-end test. It would be better covered by a perf or benchmark
test of Bitswap.
This also moves equivalent functionality from jbenet/go-random-files
into the testutils package. This just copies the code and modifies it
slightly for better ergonomics.
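As a rough sketch of the kind of helper this provides (the name,
signature, and behavior below are assumptions for illustration, not the
actual testutils API):

```go
package testutils

import (
	"crypto/rand"
	"fmt"
	"os"
	"path/filepath"
)

// WriteRandomFiles writes n files of the given size, filled with random
// bytes, into dir and returns their paths.
func WriteRandomFiles(dir string, n, size int) ([]string, error) {
	paths := make([]string, 0, n)
	for i := 0; i < n; i++ {
		buf := make([]byte, size)
		if _, err := rand.Read(buf); err != nil {
			return nil, err
		}
		p := filepath.Join(dir, fmt.Sprintf("rand-%d.bin", i))
		if err := os.WriteFile(p, buf, 0o644); err != nil {
			return nil, err
		}
		paths = append(paths, p)
	}
	return paths, nil
}
```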
This adds the ability to enable "optimistic provide" on the default DHT
client, which makes provides and reprovides faster.
For more information about optimistic provide, see:
https://protocollabs.notion.site/Optimistic-Provide-2c79745820fa45649d48de038516b814
Note that this feature only works with non-custom router types. For now,
it cannot be enabled on custom routers, to keep the footprint of this
experimental feature small. We intend to keep testing it and improving
the UX, which may or may not involve adding configuration for it to
custom routers. We also plan to refactor/redesign custom routers more
broadly, so we don't want this change to add more work for maintainers
and confusion for users.
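Under the hood this amounts to passing an extra option when constructing
the default DHT client. A minimal, non-authoritative sketch follows; the
EnableOptimisticProvide option name is assumed here, so check the
go-libp2p-kad-dht options for the exact API:

```go
package main

import (
	"context"
	"log"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx := context.Background()

	// A bare libp2p host to attach the DHT client to.
	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	// Construct the DHT with optimistic provide enabled (option name
	// assumed), so provide operations can complete once enough close peers
	// have been found rather than waiting for the full lookup to terminate.
	d, err := dht.New(ctx, h, dht.EnableOptimisticProvide())
	if err != nil {
		log.Fatal(err)
	}
	defer d.Close()
}
```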
This also means that rb-pinning-service-api is no longer required for
running remote pinning tests. That alone saves at least 3 minutes of CI
test runtime, because we no longer need to check out that repo, build its
Docker image, run it, etc.
Instead, this implements a simple pinning service in Go that the test
runs in-process, with a callback that can be used to control the pinning
service's async behavior (e.g. simulating work happening asynchronously,
like transitioning from "queued" -> "pinning" -> "pinned").
This also adds an environment variable to Kubo to control the MFS
remote pin polling interval, so that we don't have to wait 30 seconds
in the test for MFS changes to be repinned. This is purely for tests
so I don't think we should document this.
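Conceptually the override could look like the sketch below; the variable
name MFS_PIN_POLL_INTERVAL is hypothetical, used for illustration only,
not necessarily what Kubo reads:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// pollInterval returns the MFS remote-pin polling interval: 30s by
// default, but tests can shrink it via an environment variable.
// MFS_PIN_POLL_INTERVAL is a hypothetical name used for illustration.
func pollInterval() time.Duration {
	if v := os.Getenv("MFS_PIN_POLL_INTERVAL"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return 30 * time.Second
}

func main() {
	fmt.Println("polling MFS for repinning every", pollInterval())
}
```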
This entire test suite runs in around 2.5 sec on my laptop, compared to
the existing 3+ minutes in CI.
The test trims all whitespace from the output of 'ipfs cat', but if the
random bytes happen to end in a whitespace character, that character gets
trimmed too, resulting in intermittent test failures.
Instead, this updates the test harness to trim only a single trailing
newline, so it doesn't chomp legitimate output.
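The difference is easy to see in a few lines of Go (illustrative only;
the harness itself is shell, and the helper name below is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// trimTrailingNewline removes at most one trailing '\n', unlike
// strings.TrimSpace, which strips every leading/trailing whitespace byte.
func trimTrailingNewline(s string) string {
	return strings.TrimSuffix(s, "\n")
}

func main() {
	// Random payload that happens to end in a space, followed by the
	// newline added by the shell pipeline.
	out := "random bytes ending in a space \n"

	fmt.Printf("%q\n", strings.TrimSpace(out))   // payload's trailing space is lost
	fmt.Printf("%q\n", trimTrailingNewline(out)) // payload is preserved intact
}
```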
This is the slowest test in the sharness test suite because it has very
long sleeps; it usually takes 2+ minutes to run.
The new implementation runs all peering tests in about 20 seconds, since
it polls for conditions instead of sleeping and runs the tests in
parallel.
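A minimal sketch of the poll-with-deadline idea (names are illustrative,
not the actual harness API):

```go
package testutil

import (
	"errors"
	"time"
)

// WaitFor polls cond every interval until it returns true or the timeout
// elapses. Tests that use it finish as soon as the condition is met and
// only pay the full timeout in the failure case, unlike a fixed sleep.
func WaitFor(cond func() bool, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if cond() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("condition not met before timeout")
		}
		time.Sleep(interval)
	}
}
```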
This also adds a test case for a peer that was never online and then
connects.