commands/object: remove objectData() and objectLinks() helpers
resolver: added context parameters
sharness: $HASH carried the \r from the http protocol with it
sharness: write curl output to individual files
http gw: break PUT handler until PR#1191
Currently garbage collection is triggered manually and there are no
age restrictions on the removal. I expect we'll eventually follow Git
and auto-launch garbage collection when we hit some threshold of disk
consumption (gc.auto). I expect we'll also follow Git and keep
unpinned or unreachable objects around for a grace period before
pruning them (gc.pruneExpire, etc.). But we don't seem to do either of
those yet.
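For illustration only, a minimal sketch of what a gc.auto-style
threshold check could look like; maybeAutoGC and every name in it are
hypothetical, not existing go-ipfs APIs:

    package main

    import "fmt"

    // maybeAutoGC launches GC only once repo disk usage crosses a
    // configured threshold, mirroring Git's gc.auto behavior.
    func maybeAutoGC(repoSizeBytes, thresholdBytes int64, runGC func() error) error {
        if repoSizeBytes < thresholdBytes {
            return nil // below threshold, nothing to do
        }
        return runGC()
    }

    func main() {
        err := maybeAutoGC(2<<30, 1<<30, func() error {
            fmt.Println("running garbage collection")
            return nil
        })
        if err != nil {
            fmt.Println("gc failed:", err)
        }
    }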
I'm not entirely clear on the role that this package is filling, but
this description seems like a reasonable guess based on a quick skim
through its exported API.
The last references to CastToReaders were commented out in 6faeee83
(cmds2/add: temp fix for -r. horrible hack, 2014-11-11) and then
removed completely in 032e9c29 (core/commands2: Updated 'add' command
for new file API, 2014-11-16).
The last reference to CastToStrings was removed in a0bd29d5
(core/commands2: Fixed swarm command for new arguments API,
2014-11-18).
The change to an array of readers comes from e096060b
(refactor(core/commands2/add) split loop, 2014-11-06), where it's used
to set up readers for each path in the argument list. However, since
6faeee83 (cmds2/add: temp fix for -r. horrible hack, 2014-11-11) the
argument looping moved outside of add() and into Run(), so we can drop
the multiple-reader support from add().
Adding a file can create multiple nodes (e.g. the splitter can chunk
the file into several blocks), but:
1. we were only appending a single node per reader to our returned
list, and
2. we were only using the final node in that returned list,
so this commit also adjusts add() to return a single node reference
instead of an array of nodes.
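A minimal sketch of the resulting shape, with stand-in types (Node,
buildDAG, and add here are illustrative, not the real go-ipfs API):
add() handles exactly one reader and returns the single root node the
importer pipeline produces for it, while Run() owns the argument loop.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // Node and buildDAG are stand-ins for the merkledag node type and
    // the splitter/importer pipeline; both are illustrative only.
    type Node struct{ Data []byte }

    func buildDAG(r io.Reader) (*Node, error) {
        data, err := io.ReadAll(r)
        if err != nil {
            return nil, err
        }
        return &Node{Data: data}, nil
    }

    // add takes a single reader and returns a single root node; the
    // caller (Run) is responsible for looping over the argument list.
    func add(r io.Reader) (*Node, error) {
        return buildDAG(r)
    }

    func main() {
        root, err := add(strings.NewReader("hello"))
        if err != nil {
            panic(err)
        }
        fmt.Printf("root node carries %d bytes\n", len(root.Data))
    }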
the random permutation for bootstrap peers was not working as
intended, always returning the first four bootstrap peers.
this commit fixes it to select a random subset.
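A minimal sketch of the fixed selection, assuming a rand.Perm-based
approach (randomSubsetOfPeers is an illustrative name here, and string
stands in for the peer type):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // randomSubsetOfPeers picks min(count, len(in)) peers uniformly at
    // random by walking a random permutation of the indices, instead
    // of always taking the first entries.
    func randomSubsetOfPeers(in []string, count int) []string {
        if count > len(in) {
            count = len(in)
        }
        out := make([]string, 0, count)
        for _, i := range rand.Perm(len(in))[:count] {
            out = append(out, in[i])
        }
        return out
    }

    func main() {
        peers := []string{"peerA", "peerB", "peerC", "peerD", "peerE"}
        fmt.Println(randomSubsetOfPeers(peers, 4))
    }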
Once the server is asked to shut down, we stop accepting new
connections, but the 'manners' graceful shutdown will wait for
all existing connections to close before finishing.
For keep-alive connections this will never happen unless the
client detects that the server is shutting down through the
ipfs API itself, and closes the connection in response.
This is a problem e.g. with the webui's connections visualization,
which polls the swarm/peers endpoint once a second, and never
detects that the API server was shut down.
We can mitigate this by telling the server to disable keep-alive,
which will add a 'Connection: close' header to the next HTTP
response on the connection. A well-behaved client should then
respond by closing the connection.
Unfortunately this doesn't happen immediately in all cases,
presumably depending on the keep-alive timeout of the browser
that set up the connection, but it's at least a step in the
right direction.
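A minimal standard-library sketch of the mitigation (the real code
goes through the 'manners' wrapper, so the exact call site differs):
once shutdown begins, disable keep-alives so the next response on each
connection carries a 'Connection: close' header.

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":5001", Handler: http.DefaultServeMux}

        go func() {
            // Stand-in for the shutdown trigger: after this point,
            // every response gets 'Connection: close', nudging
            // well-behaved clients to drop their keep-alive
            // connections.
            time.Sleep(10 * time.Second)
            srv.SetKeepAlivesEnabled(false)
        }()

        _ = srv.ListenAndServe()
    }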
When closing a node, the node itself only takes care of tearing down
its own children. As corehttp sets up a server based on a node, it
needs to also ensure that the server is accounted for when determining
if the node has been fully closed.
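A purely illustrative sketch of that accounting (none of these names
are the real go-ipfs types): the node keeps the server's teardown
alongside its children's, so Close() only returns once the server is
done as well.

    package main

    import (
        "fmt"
        "sync"
    )

    // Node tracks teardown functions for its children plus anything
    // layered on top of it, such as the corehttp server.
    type Node struct {
        mu      sync.Mutex
        closers []func() error
    }

    func (n *Node) AddCloser(c func() error) {
        n.mu.Lock()
        defer n.mu.Unlock()
        n.closers = append(n.closers, c)
    }

    // Close runs every registered teardown and reports the first
    // error, so the node only counts as fully closed once the server
    // has stopped too.
    func (n *Node) Close() error {
        n.mu.Lock()
        defer n.mu.Unlock()
        var firstErr error
        for _, c := range n.closers {
            if err := c(); err != nil && firstErr == nil {
                firstErr = err
            }
        }
        return firstErr
    }

    func main() {
        n := &Node{}
        n.AddCloser(func() error { fmt.Println("server stopped"); return nil })
        _ = n.Close()
    }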
The server may stay alive for quite a while due to waiting on
open connections to close before shutting down. We should
find ways to terminate these connections in a more controlled
manner, but in the meantime it's helpful to be able to see
why a shutdown of the ipfs daemon is taking so long.
This changes .go-ipfs to .ipfs everywhere.
It also defines a DefaultPathName const
for this name.
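A minimal sketch of the constant (the package placement here is an
assumption):

    package config

    // DefaultPathName is the name of the default repo directory
    // under the user's home directory.
    const DefaultPathName = ".ipfs"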
License: MIT
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>