- First stab at integrating @krl's new index page
File icons are embedded inside icons.css (QmXB7PLRWH6bCiwrGh2MrBBjNkLv3mY3JdYXCikYZSwLED contains both the icons and bootstrap)
- Fix back links (..) (fixes #1365)
Thanks @JasonWoof for the insight. The back links now stop at the root hash and work for links that do and don't end with a slash.
License: MIT
Signed-off-by: Henry <cryptix@riseup.net>
Add ErrNoComponents to ParsePath validation & remove redundant path
validation.
Any callers using core.Resolve & Resolver.ResolvePath will have their
paths validated.
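A rough sketch of the shape this validation could take (package layout and
identifiers assumed here, not copied from the codebase):
    package path

    import (
        "errors"
        "strings"
    )

    type Path string

    // ErrNoComponents is returned for paths such as "/ipfs/" that carry
    // nothing after the namespace prefix.
    var ErrNoComponents = errors.New("path must contain at least one component")

    // ParsePath validates up front, so callers going through core.Resolve
    // or Resolver.ResolvePath never see an empty component list.
    func ParsePath(s string) (Path, error) {
        parts := strings.Split(strings.Trim(s, "/"), "/")
        if len(parts) < 2 || parts[1] == "" {
            return "", ErrNoComponents
        }
        return Path(s), nil
    }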
License: MIT
Signed-off-by: rht <rhtbot@gmail.com>
Currently `ipfs get -C <hash>` returns an error even if <hash> is a file.
This PR handles the case when the compress flag is enabled: use the
dagreader directly and pipe it through a gzip processor.
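The piping itself is essentially an io.Copy into compress/gzip; a minimal
sketch, with the dagreader passed in as a plain io.Reader and the
surrounding command wiring assumed:
    package getcmd

    import (
        "compress/gzip"
        "io"
    )

    // compressTo streams the file's dagreader straight into a gzip
    // writer, so a single-file <hash> no longer needs the tar path.
    func compressTo(w io.Writer, dagReader io.Reader) error {
        gz := gzip.NewWriter(w)
        if _, err := io.Copy(gz, dagReader); err != nil {
            gz.Close()
            return err
        }
        return gz.Close()
    }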
License: MIT
Signed-off-by: rht <rhtbot@gmail.com>
We don't want to prefix these results with the argument. If there was
only one argument, the unprefixed results are still explicit.
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
Discussion with Juan on IRC ([1] through [2]) led to this adjusted
JSON output. Benefits over the old output include:
* deduplication (we only check the children of a given Merkle node
once, even if multiple arguments resolve to that hash)
* alphabetized output (like POSIX's ls). As a side-effect of this
change, I'm also matching GNU Coreutils' ls output (maybe in POSIX?)
by printing an alphabetized list of non-directories (one per line)
first, with alphabetized directory lists afterwards.
[1]: https://botbot.me/freenode/ipfs/2015-06-12/?msg=41725570&page=5
[2]: https://botbot.me/freenode/ipfs/2015-06-12/?msg=41726547&page=5
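A minimal sketch of both properties (deduplication and alphabetization),
with assumed types that only stand in for the real output marshaller:
    package ls

    import "sort"

    // LsLink is a minimal stand-in for one entry of the JSON output.
    type LsLink struct {
        Name, Hash string
        Size       uint64
    }

    // childrenOnce lists a Merkle node's children at most once and
    // alphabetizes them, even when several arguments resolve to the
    // same hash.
    func childrenOnce(hash string, links []LsLink, seen map[string]bool) []LsLink {
        if seen[hash] {
            return nil // this node's children were already listed
        }
        seen[hash] = true
        sort.Slice(links, func(i, j int) bool { return links[i].Name < links[j].Name })
        return links
    }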
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
This doesn't affect the text output, which was already using a
stringified name. The earlier stringification does change the JSON
output from an enumeration integer (e.g. 2) to the string form
(e.g. "File"). If/when we transition to Merkle-object types named by
their hash, we will probably want to revisit this and pass both the
type hash and human-readable-but-collision-prone name on to clients.
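The mapping is roughly of this shape; the constant names and values follow
the unixfs protobuf enumeration as I understand it, and are assumptions
rather than quotes from the code:
    package unixfs

    // NodeType mirrors the unixfs enumeration; only the values relevant
    // to the example output are shown.
    type NodeType int32

    const (
        TRaw       NodeType = 0
        TDirectory NodeType = 1
        TFile      NodeType = 2
    )

    // String produces the form that now appears in the JSON output,
    // e.g. 2 -> "File".
    func (t NodeType) String() string {
        switch t {
        case TRaw:
            return "Raw"
        case TDirectory:
            return "Directory"
        case TFile:
            return "File"
        default:
            return "Unknown"
        }
    }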
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
Change the approach to the directory-header control so we can set the
Argument value in the JSON response.
Stripping the trailing newline from the JSON output is annoying, but
looking over [1] I saw no easy way to add a newline to the JSON
output. And with the general framework that commands/ attempts to be,
it feels a bit funny to customize the JSON output for a command-line
program. Perhaps a workable solution is to have the command-line
client append newlines to any output that otherwise lacks them? But
that seems like a change best left to a separate series.
[1]: http://golang.org/pkg/encoding/json/
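For illustration, a client-side helper along these lines (hypothetical, not
what this commit adds) would cover any marshaller that omits the newline:
    package cli

    // withTrailingNewline lets the command-line client append a newline
    // to any marshalled output that lacks one.
    func withTrailingNewline(out []byte) []byte {
        if len(out) == 0 || out[len(out)-1] != '\n' {
            out = append(out, '\n')
        }
        return out
    }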
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
Instead of raising "keychains not yet implemented" whenever we have an
explicit node ID, only raise the error when the given node ID isn't
the local node. This allows folks to use the more-general
explicit-node-ID form in scripts and such now, as long as they use the
local node name when calling those scripts.
Also add a test for this case, and update the comment for the
one-argument case to match the current syntax for extracting a
multihash name string.
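A sketch of the adjusted check, with hypothetical identifiers standing in
for the real command code:
    package namecmd

    import "errors"

    // ErrKeychainsNotImplemented is the error that used to fire for
    // every explicit node ID.
    var ErrKeychainsNotImplemented = errors.New("keychains not yet implemented")

    // checkNodeID only rejects an explicit node ID when it does not name
    // the local node, so scripts may always pass the ID explicitly.
    func checkNodeID(given, local string) error {
        if given != local {
            return ErrKeychainsNotImplemented
        }
        return nil
    }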
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
ipfs-shell [1] accesses the Command objects directly to construct
requests for an external IPFS daemon API. This isn't a terribly
robust approach, because it doesn't handle version differences between
the version of go-ipfs used to build the daemon and the version used
to build the ipfs-shell-consuming application. But for cases where
you can get those APIs to match, it works well. Making these two
commands public allows us to write ipfs-shell wrappers for them.
Until we figure out how to get ipfs-shell working without access to
core/commands, I think the best approach is to make future command
objects and their returned structures public, and to go back and
expose existing commands/structures on an as-needed basis.
In this case, I need the public PublishCmd for the Docker-registry
storage driver, and I made the IpnsCmd public at the same time to stay
consistent for both 'ipfs name ...' sub-commands.
[1]: https://github.com/whyrusleeping/ipfs-shell
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
There has been a regression such that ./t0030-mount.sh fails on the
test 'ipfs mount fails when there is no mount dir'.
The issue was a change in how fuse errors are reported to the client
process. We have introduced an optimistic categorization that hides
the obscure fusermount error and replaces it with something a bit
more helpful.
License: MIT
Signed-off-by: Juan Batiz-Benet <juan@benet.ai>
Except when there is an explicit os.Exit(1) after the Critical line,
then replace with Fatal{,f}.
golang's log and logrus already call os.Exit(1) by default with Fatal.
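For illustration only, using the standard library's log package as a
stand-in for the project logger:
    package main

    import (
        "errors"
        "log"
    )

    func main() {
        err := errors.New("cannot open repo")

        // Before (with a Critical-style logger call):
        //
        //     log.Critical(err)
        //     os.Exit(1)
        //
        // After: Fatal logs the message and then calls os.Exit(1) itself.
        log.Fatal(err)
    }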
License: MIT
Signed-off-by: rht <rhtbot@gmail.com>
Folks operating at the Unix-filesystem level shouldn't care about that
level of Merkle-DAG detail. Before this commit we had:
$ ipfs unixfs ls /ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox
/ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox:
... several lines of empty-string names ...
And with this commit we have:
$ ipfs unixfs ls /ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox
/ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox
I also reworked the argument-prefixing (object.Argument) in the output
marshaller to avoid redundancies like:
$ ipfs unixfs ls /ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox
/ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox:
/ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox
As a side-effect of this rework, we no longer have the trailing blank
line that we used to have after the final directory listing.
The new ErrImplementation is like Python's NotImplementedError, and is
mostly a way to guard against external changes that would need
associated updates in this code. For example, once we see something
that's neither a file nor a directory, we'll have to update the switch
statement to handle those objects.
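A sketch of the guard ErrImplementation provides; the helper and the type
constants here are illustrative assumptions, not the command's actual code:
    package unixfscmd

    import "errors"

    // ErrImplementation plays the role of Python's NotImplementedError:
    // it flags unixfs changes that this command does not handle yet.
    var ErrImplementation = errors.New("internal error: unrecognised unixfs node type")

    func classify(nodeType int32) (string, error) {
        switch nodeType {
        case 1: // Directory
            return "directory", nil
        case 2: // File
            return "file", nil
        default:
            // a new unixfs type lands here and forces an update
            return "", ErrImplementation
        }
    }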
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
This is similar to 'ipfs ls ...', but it:
* Lists file sizes that match the content size:
$ ipfs --encoding=json unixfs ls /ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4
{
  "Objects": [
    {
      "Argument": "/ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4",
      "Links": [
        {
          "Name": "busybox",
          "Hash": "QmPbjmmci73roXf9VijpyQGgRJZthiQfnEetaMRGoGYV5a",
          "Size": 1947624,
          "Type": 2
        }
      ]
    }
  ]
}
$ ipfs cat /ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4/busybox | wc -c
1947624
'ipfs ls ...', on the other hand, is using the Merkle-descendant
size, which also includes fanout links and the typing information
unixfs objects store in their Data:
$ ipfs --encoding=json ls /ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4
{
  "Objects": [
    {
      "Hash": "/ipfs/QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4",
      "Links": [
        {
          "Name": "busybox",
          "Hash": "QmPbjmmci73roXf9VijpyQGgRJZthiQfnEetaMRGoGYV5a",
          "Size": 1948128,
          "Type": 2
        }
      ]
    }
  ]
}
* Has a simpler text output corresponding to POSIX ls [1]:
$ ipfs unixfs ls /ipfs/QmV2FrBtvue5ve7vxbAzKz3mTdWq8wfMNPwYd8d9KHksCF/gentoo/stage3/amd64/2015-04-02
bin
dev
etc
proc
run
sys
$ ipfs ls /ipfs/QmV2FrBtvue5ve7vxbAzKz3mTdWq8wfMNPwYd8d9KHksCF/gentoo/stage3/amd64/2015-04-02
QmSRCHG21Sbqm3EJG9aEBo4vS7Fqu86pAjqf99MyCdNxZ4 1948183 bin/
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn 4 dev/
QmUz1Z5jnQEjwr78fiMk5babwjJBDmhN5sx5HvPiTGGGjM 1207 etc/
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn 4 proc/
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn 4 run/
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn 4 sys/
The minimal output allows us to start off with POSIX compliance and
then add options (which may or may not be POSIX compatible) to
adjust the output format as we get a better feel for what we need
([2] through [3]).
[1]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ls.html
[2]: https://botbot.me/freenode/ipfs/2015-06-12/?msg=41724727&page=5
[3]: https://botbot.me/freenode/ipfs/2015-06-12/?msg=41725146&page=5
License: MIT
Signed-off-by: W. Trevor King <wking@tremily.us>
WIP: object creator command
better docs
move patch command into object namespace
don't ignore cancel funcs
addressing comment from CR
add two new subcommands to object patch and clean up main Run func
cancel contexts in early returns
switch to util.Key
If no path after `/ipfs/` or `/ipns/` is given, then the daemon will
panic with a slice bounds out of range error. This checks to see if we
have anything after `ipfs` or `ipns`.
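A sketch of the kind of guard involved (handler shape and names assumed):
    package corehttp

    import (
        "fmt"
        "strings"
    )

    // splitGatewayPath refuses bare "/ipfs/" or "/ipns/" requests instead
    // of slicing past the end of the path and panicking.
    func splitGatewayPath(urlPath string) ([]string, error) {
        segs := strings.Split(strings.Trim(urlPath, "/"), "/")
        if len(segs) < 2 || segs[1] == "" {
            return nil, fmt.Errorf("no path given after /%s/", segs[0])
        }
        return segs, nil
    }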
Previously we had a confusing situation, with:
* single-arg doc: published name <name> to <value>
* double-arg doc: published name <value> to <name>
* implementation: Published name <name> to <value>
Now we have the uniform:
Published to <name>: <value>
With the following goals:
1. It's clear that we're writing <value> to <name>'s IPNS slot in the
DHT.
2. We preserve the order of arguments from the command-line
invocation:
$ ipfs name publish <name> <value>
Published to <name>: <value>
This lets users resolve (recursively or not) DNS links without pulling
in the other protocols. That makes an easier, more isolated target
for alternative implementations, since they don't need to understand
IPNS, proquint, etc. to handle these resolutions.
For explicitly enabling recursive behaviour (it was previously always
enabled). That allows folks who are interested in understanding
layered indirection to step through the chain one link at a time.
This allows direct access to the earlier protocol-specific Resolve
implementations. The guts of each protocol-specific resolver are in
the internal resolveOnce method, and we've added a new:
ResolveN(ctx, name, depth)
method to the public interface. There's also:
Resolve(ctx, name)
which wraps ResolveN using DefaultDepthLimit. The extra API endpoint
is intended to reduce the likelihood of clients accidentally calling
the more dangerous ResolveN with a nonsensically high or infinite
depth. On IRC on 2015-05-17, Juan said:
15:34 <jbenet> If 90% of uses is the reduced API with no chance to
screw it up, that's a huge win.
15:34 <wking> Why would those 90% not just set depth=0 or depth=1,
depending on which they need?
15:34 <jbenet> Because people will start writing `r.Resolve(ctx, name,
d)` where d is a variable.
15:35 <wking> And then accidentally set that variable to some huge
number?
15:35 <jbenet> From experience, i've seen this happen _dozens_ of
times. people screw trivial things up.
15:35 <wking> Why won't those same people be using ResolveN?
15:36 <jbenet> Because almost every example they see will tell them to
use Resolve(), and they will mostly stay away from ResolveN.
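A sketch of the two-entry-point pattern described above; the interface
shape and the depth value are placeholders, not quotes from the code:
    package namesys

    import "context"

    // Path stands in for the resolver's result type.
    type Path string

    // DefaultDepthLimit caps recursion for the plain Resolve entry point;
    // the concrete value here is only a placeholder.
    const DefaultDepthLimit = 32

    type Resolver interface {
        // Resolve recurses at most DefaultDepthLimit times.
        Resolve(ctx context.Context, name string) (Path, error)
        // ResolveN lets callers pick the depth explicitly.
        ResolveN(ctx context.Context, name string, depth int) (Path, error)
    }

    // resolve shows how Resolve can simply wrap ResolveN, so the common
    // case never handles a depth variable at all.
    func resolve(ctx context.Context, r Resolver, name string) (Path, error) {
        return r.ResolveN(ctx, name, DefaultDepthLimit)
    }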
The per-protocol versions also resolve recursively within their
protocol. For example:
DNSResolver.Resolve(ctx, "ipfs.io", 0)
will recursively resolve DNS links until the referenced value is no
longer a DNS link.
I also renamed the multi-protocol ipfs NameSystem (defined in
namesys/namesys.go) to 'mpns' (for Multi-Protocol Name System),
because I wasn't clear on whether IPNS applied to the whole system or
just to the DHT-based system. The new name is unambiguously
multi-protocol, which is good. It would be nice to have a distinct
name for the DHT-based link system.
Now that resolver output is always prefixed with a namespace and
unprefixed mpns resolver input is interpreted as /ipfs/,
core/corehttp/ipns_hostname.go can dispense with its old manual
/ipfs/ injection.
Now that the Resolver interface handles recursion, we don't need the
resolveRecurse helper in core/pathresolver.go. The pathresolver
cleanup also called for an adjustment to FromSegments to more easily
get slash-prefixed paths.
Now that recursive resolution with the namesys/namesys.go composite
resolver always gets you to an /ipfs/... path, there's no need for the
/ipns/ special case in fuse/ipns/ipns_unix.go.
Now that DNS links can be things other than /ipfs/ or DHT-link
references (e.g. they could be /ipns/<domain-name> references) I've
also loosened the ParsePath logic to only attempt multihash validation
on IPFS paths. It checks to ensure that other paths have a
known-protocol prefix, but otherwise leaves them alone.
I also changed some key-stringification from .Pretty() to .String()
following the potential deprecation mentioned in util/key.go.
commands/object: remove objectData() and objectLinks() helpers
resolver: added context parameters
sharness: $HASH carried the \r from the http protocol with it
sharness: write curl output to individual files
http gw: break PUT handler until PR#1191
Currently garbage collection is triggered manually and there are no
age-restrictions on the removal. I expect we'll eventually follow Git
and auto-launch garbage collection when we hit some threshold of disk
consumption (gc.auto). I expect we'll also follow Git and keep
unpinned or unreachable objects (gc.pruneexpire, etc.). But we don't
seem to do either of those yet.
I'm not entirely clear on the role that this package is filling, but
this description seems like a reasonable guess based on a quick skim
through its exported API.
The last references to CastToReaders were commented out in 6faeee83
(cmds2/add: temp fix for -r. horrible hack, 2014-11-11) and then
removed completely in 032e9c29 (core/commands2: Updated 'add' command
for new file API, 2014-11-16).
The last references to CastToStrings was removed in a0bd29d5
(core/commands2: Fixed swarm command for new arguments API,
2014-11-18).
The change to an array of readers comes from e096060b
(refactor(core/commands2/add) split loop, 2014-11-06), where it's used
to set up readers for each path in the argument list. However, since
6faeee83 (cmds2/add: temp fix for -r. horrible hack, 2014-11-11) the
argument looping moved outside of add() and into Run(), so we can drop
the multiple-reader support from add().
Adding a file can create multiple nodes (e.g. the splitter can chunk
the file into several blocks), but:
1. we were only appending a single node per reader to our returned
list, and
2. we are only using the final node in that returned list,
so this commit also adjusts add() to return a single node reference
instead of an array of nodes.
the random permutation for bootstrap peers was not working as
intended, always returning the first four bootstrap peers.
this commit fixes it to return a random subset.
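A minimal sketch of drawing such a random subset (the peer type is
simplified to strings here):
    package bootstrap

    import "math/rand"

    // randomSubset draws up to n peers uniformly at random instead of
    // always returning the first n entries of the list.
    func randomSubset(peers []string, n int) []string {
        if n > len(peers) {
            n = len(peers)
        }
        out := make([]string, 0, n)
        for _, i := range rand.Perm(len(peers))[:n] {
            out = append(out, peers[i])
        }
        return out
    }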
Once the server is asked to shut down, we stop accepting new
connections, but the 'manners' graceful shutdown will wait for
all existing connections to close before finishing.
For keep-alive connections this will never happen unless the
client detects that the server is shutting down through the
ipfs API itself, and closes the connection in response.
This is a problem e.g. with the webui's connections visualization,
which polls the swarm/peers endpoint once a second, and never
detects that the API server was shut down.
We can mitigate this by telling the server to disable keep-alive,
which will add a 'Connection: close' header to the next HTTP
response on the connection. A well-behaved client should then
treat that correspondingly by closing the connection.
Unfortunately this doesn't happen immediately in all cases,
presumably depending on the keep-alive timeout of the browser
that set up the connection, but it's at least a step in the
right direction.
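The mitigation boils down to a single net/http call; a sketch, with the
surrounding shutdown wiring assumed:
    package corehttp

    import "net/http"

    // disableKeepAlives makes the server add "Connection: close" to the
    // next response on each open connection, nudging well-behaved clients
    // (like the webui poller) to drop their keep-alive connections so the
    // graceful shutdown can finish.
    func disableKeepAlives(srv *http.Server) {
        srv.SetKeepAlivesEnabled(false)
    }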
When closing a node, the node itself only takes care of tearing down
its own children. As corehttp sets up a server based on a node, it
needs to also ensure that the server is accounted for when determining
if the node has been fully closed.
The server may stay alive for quite a while due to waiting on
open connections to close before shutting down. We should
find ways to terminate these connections in a more controlled
manner, but in the meantime it's helpful to be able to see
why a shutdown of the ipfs daemon is taking so long.
This changes .go-ipfs to .ipfs everywhere.
It also defines a DefaultPathName const
for this name.
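Roughly, the new constant looks like this (exact package placement assumed):
    package config

    // DefaultPathName is the repo directory name under $HOME.
    const DefaultPathName = ".ipfs" // was ".go-ipfs"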
License: MIT
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
We now consider debugerrors harmful: we've run into cases where
debugerror.Wrap() hid valuable error information (err == io.EOF?).
I've removed them from the main code, but left them in some tests.
Go errors are lacking, but unfortunately, this isn't the solution.
It is possible that debugerrors.New or debugerrors.Errorf should
remain (i.e. only remove debugerrors.Wrap), but we don't use
these errors often enough to keep them.
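A runnable illustration of the problem: once wrapped, an error no longer
compares equal to the io.EOF sentinel, which is exactly the information
Wrap() was hiding.
    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    func main() {
        _, err := strings.NewReader("").Read(make([]byte, 1))
        wrapped := fmt.Errorf("read failed: %s", err)

        fmt.Println(err == io.EOF)     // true
        fmt.Println(wrapped == io.EOF) // false: the EOF signal is gone
    }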
This commit adds a new set of sharness tests for pinning, and addresses
bugs that were pointed out by said tests.
test/sharness: added more pinning tests
Pinning is currently broken. See issue #1051. This commit introduces
a few more pinning tests. These are by no means exhaustive, but
definitely surface the problems currently present. I believe these
tests are correct, but am not sure. Pushing them as failing so that
pinning is fixed in this PR.
make pinning and merkledag.Get take contexts
improve 'add' commands usage of pinning
FIXUP: fix 'pin lists look good'
ipfs-pin-stat simple script to help check pinning
This is a simple shell script to help check pinning.
We ought to strive towards making adding commands this easy.
The http api is great and powerful, but our setup right now
gets in the way. Perhaps we can clean up that area.
updated t0081-repo-pinning
- fixed a couple bugs with the tests
- made it a bit clearer (still a lot going on)
- the remaining tests are correct and highlight a problem with
pinning. Namely, that recursive pinning is buggy. At least:
towards the end of the test, $HASH_DIR4 and $HASH_FILE4 should
be pinned indirectly, but they're not. And thus get gc-ed out.
There may be other problems too.
cc @whyrusleeping
fix grep params for context deadline check
fix bugs in pin and pin tests
check for block local before checking recursive pin
humanize bandwidth output
instrument conn.Conn for bandwidth metrics
add poll command for continuous bandwidth reporting
move bandwidth tracking onto multiaddr net connections
another mild refactor of recording locations
address concerns from PR
lower mock nodes in race test due to increased goroutines per connection