Once the server is asked to shut down, we stop accepting new
connections, but the 'manners' graceful shutdown will wait for
all existing connections to close before finishing.
For keep-alive connections this will never happen unless the
client detects that the server is shutting down through the
ipfs API itself, and closes the connection in response.
This is a problem e.g. with the webui's connections visualization,
which polls the swarm/peers endpoint once a second, and never
detects that the API server was shut down.
We can mitigate this by telling the server to disable keep-alive,
which will add a 'Connection: close' header to the next HTTP
response on the connection. A well-behaved client should then
respond by closing the connection.
Unfortunately this doesn't happen immediately in all cases,
presumably depending on the keep-alive timeout of the browser
that set up the connection, but it's at least a step in the
right direction.
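For reference, the standard library exposes the same switch as
http.Server.SetKeepAlivesEnabled. A minimal sketch of flipping it at
shutdown time (the address and the sleep-based trigger are placeholders,
not the daemon's actual wiring):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:5001"} // placeholder address

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// Stand-in for the real shutdown trigger: after this, responses on
	// existing keep-alive connections carry "Connection: close", nudging
	// well-behaved clients to drop the connection.
	go func() {
		time.Sleep(5 * time.Second)
		srv.SetKeepAlivesEnabled(false)
	}()

	log.Fatal(srv.ListenAndServe())
}
```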
When closing a node, the node itself only takes care of tearing down
its own children. As corehttp sets up a server based on a node, it
also needs to ensure that the server is accounted for when determining
if the node has been fully closed.
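A minimal sketch of that accounting under assumed names (node, serveHTTP,
and the WaitGroup-based child tracking are hypothetical stand-ins for the
real core/corehttp wiring):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"sync"
)

// node stands in for the IPFS node: Close must not report the node as
// fully closed while children, including the API server, still run.
type node struct {
	children sync.WaitGroup
}

func (n *node) Close() {
	n.children.Wait()
	fmt.Println("node fully closed")
}

// serveHTTP registers the server as a child of the node, so the node
// only counts as closed once the server has finished serving.
func serveHTTP(n *node, ln net.Listener, h http.Handler) {
	n.children.Add(1)
	go func() {
		defer n.children.Done()
		http.Serve(ln, h) // returns once the listener is closed
	}()
}

func main() {
	n := &node{}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	serveHTTP(n, ln, http.NotFoundHandler())

	ln.Close() // shutting down: stop accepting, let the server return
	n.Close()  // blocks until the server goroutine is done
}
```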
The server may stay alive for quite a while due to waiting on
open connections to close before shutting down. We should
find ways to terminate these connections in a more controlled
manner, but in the meantime it's helpful to be able to see
why a shutdown of the ipfs daemon is taking so long.
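One way to see what is holding a shutdown up is the standard
http.Server ConnState hook. A sketch that logs the connections still
open (the connTracker type and the report interval are made up for
illustration):

```go
package main

import (
	"log"
	"net"
	"net/http"
	"sync"
	"time"
)

// connTracker records open API connections so we can see what is
// keeping the server alive while shutdown waits for them to close.
type connTracker struct {
	mu    sync.Mutex
	conns map[string]http.ConnState
}

func (t *connTracker) hook(c net.Conn, s http.ConnState) {
	t.mu.Lock()
	defer t.mu.Unlock()
	switch s {
	case http.StateClosed, http.StateHijacked:
		delete(t.conns, c.RemoteAddr().String())
	default:
		t.conns[c.RemoteAddr().String()] = s
	}
}

func (t *connTracker) report() {
	t.mu.Lock()
	defer t.mu.Unlock()
	log.Printf("%d connections still open", len(t.conns))
	for addr, st := range t.conns {
		log.Printf("  %s (%s)", addr, st)
	}
}

func main() {
	t := &connTracker{conns: make(map[string]http.ConnState)}
	srv := &http.Server{Addr: "127.0.0.1:5001", ConnState: t.hook}

	// While a slow shutdown drags on, report what is still connected.
	go func() {
		for range time.Tick(5 * time.Second) {
			t.report()
		}
	}()

	log.Fatal(srv.ListenAndServe())
}
```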
This changes .go-ipfs to .ipfs everywhere.
It also defines a DefaultPathName const for this name.
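Roughly, the constant looks like this; the surrounding package and the
home-directory resolution below are illustrative, not the actual config
code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// DefaultPathName is the repo directory name under the user's home
// directory, renamed from ".go-ipfs". Where the const actually lives
// in the tree is not shown here.
const DefaultPathName = ".ipfs"

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	fmt.Println(filepath.Join(home, DefaultPathName)) // e.g. /home/user/.ipfs
}
```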
License: MIT
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
We now consider debugerrors harmful: we've run into cases where
debugerror.Wrap() hid valuable error information (err == io.EOF?).
I've removed them from the main code, but left them in some tests.
Go errors are lacking, but unfortunately, this isn't the solution.
It is possible that debugerror.New or debugerror.Errorf should
still remain (i.e. only remove debugerror.Wrap), but we don't use
these errors often enough to keep them.
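A small example of the kind of information loss meant here: once a
sentinel like io.EOF is wrapped, direct equality checks stop working.
The wrap function below is only a stand-in for debugerror.Wrap (and
this predates errors.Is):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// wrap stands in for debugerror.Wrap: it adds context but replaces the
// original error value.
func wrap(err error) error {
	return fmt.Errorf("read failed: %s", err)
}

func main() {
	r := strings.NewReader("")
	_, err := r.Read(make([]byte, 1)) // an empty reader returns io.EOF

	fmt.Println(err == io.EOF)       // true: callers can detect end-of-stream
	fmt.Println(wrap(err) == io.EOF) // false: the sentinel check is lost
}
```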
This commit adds a new set of sharness tests for pinning, and addresses
bugs that were pointed out by said tests.
test/sharness: added more pinning tests
Pinning is currently broken. See issue #1051. This commit introduces
a few more pinning tests. These are by no means exhaustive, but
definitely surface the problems currently present. I believe these
tests are correct, but I'm not certain. Pushing them as failing so that
pinning is fixed in this PR.
make pinning and merkledag.Get take contexts
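The shape of the change, sketched with hypothetical types (the real
merkledag.Get and pinner signatures differ): threading a context lets a
blocked fetch be cancelled or timed out instead of hanging forever.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// get stands in for a context-aware merkledag.Get: instead of blocking
// forever on a block that never arrives, it returns when the caller's
// context is cancelled or times out. The channel type is hypothetical.
func get(ctx context.Context, fetched <-chan string) (string, error) {
	select {
	case nd := <-fetched:
		return nd, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// No block ever arrives, so the call ends with the deadline instead
	// of hanging the pinning operation.
	_, err := get(ctx, make(chan string))
	fmt.Println(err) // context deadline exceeded
}
```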
improve the 'add' command's usage of pinning
FIXUP: fix 'pin lists look good'
ipfs-pin-stat simple script to help check pinning
This is a simple shell script to help check pinning.
We ought to strive towards making adding commands this easy.
The HTTP API is great and powerful, but our setup right now
gets in the way. Perhaps we can clean up that area.
updated t0081-repo-pinning
- fixed a couple bugs with the tests
- made it a bit clearer (still a lot going on)
- the remaining tests are correct and highlight a problem with
pinning. Namely, that recursive pinning is buggy. At least:
towards the end of the test, $HASH_DIR4 and $HASH_FILE4 should
be pinned indirectly, but they're not, and thus get gc-ed out.
There may be other problems too.
cc @whyrusleeping
fix grep params for context deadline check
fix bugs in pin and pin tests
check that the block is local before checking recursive pins
humanize bandwidth output
instrument conn.Conn for bandwidth metrics
add poll command for continuous bandwidth reporting
move bandwidth tracking onto multiaddr net connections
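A rough sketch of the instrumentation and humanized output described
above. The meteredConn type and its counters are hypothetical; the
humanize.Bytes call comes from github.com/dustin/go-humanize and is one
way to get readable byte counts:

```go
package main

import (
	"fmt"
	"net"
	"sync/atomic"

	"github.com/dustin/go-humanize"
)

// meteredConn wraps a net.Conn and counts bytes in and out. It is a
// stand-in for the instrumented conn.Conn; the real metrics plumbing
// lives at the multiaddr-net connection layer.
type meteredConn struct {
	net.Conn
	in, out uint64
}

func (c *meteredConn) Read(b []byte) (int, error) {
	n, err := c.Conn.Read(b)
	atomic.AddUint64(&c.in, uint64(n))
	return n, err
}

func (c *meteredConn) Write(b []byte) (int, error) {
	n, err := c.Conn.Write(b)
	atomic.AddUint64(&c.out, uint64(n))
	return n, err
}

func main() {
	a, b := net.Pipe()
	mc := &meteredConn{Conn: a}

	go func() {
		b.Write([]byte("hello, bandwidth"))
		b.Close()
	}()

	buf := make([]byte, 64)
	mc.Read(buf)

	// Humanize the totals for display, as the bandwidth output does.
	fmt.Println("in: ", humanize.Bytes(atomic.LoadUint64(&mc.in)))
	fmt.Println("out:", humanize.Bytes(atomic.LoadUint64(&mc.out)))
}
```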
another mild refactor of recording locations
address concerns from PR
lower the number of mock nodes in the race test due to increased goroutines per connection
Not exactly elegant, but it fixes #871 in the short term. In the mid term, unless more `repo` commands show up, I suggest simply replacing `repo gc` with `gc`.