Ipfs Release 0.4.3

-----BEGIN PGP SIGNATURE-----

iQIcBAABCAAGBQJX4QQCAAoJEIfjTf5y3Ctrb3gP/0wteBwqh+wex0f1UkkXZ2CF
Lqrp69YuRtmbpvzo7a1G8gUkCGy9NaHmUOM1w5vuzwkNt8NxJ8tFY20nM2XVHC6N
IRMHaloxVTTbRMY3iHecJW/t/YO5M9geFSgaoq9pJ8uU6lYCWTRYs097mXfe/UOF
6+vO0po/qiVNimd24FVfIr/QUNOKGfVQXu9MqZcrcMVmP+sIIT1DHhmF5TaTxISN
6qIVYAn5h+xXldCAHgWU7WPiIUdd7lW4CPACqOK4eehDdyX5kPHPecB0gEjKLe6P
GyhAbzufFjuOUXCnAXA3gCmJwlhyWC3KdTfl0kKxBp8OeIbra+hvNkkBk1koR/Ls
lvRul1njt+M/FYVsgTi7IE0fW/K6AYZudQ4uxXTCwDScZoKljD+Qbwsr4LbBgBRC
pAYBQBfezU4jotJc4FfXGFhIOF7jUHLsqHrZjESjeCRhTGRA3Lh7LSBHdmMI1J3K
S9zu8QVit0yM5zv1DaTqhysozbEOpt8uOdRktz8jvxPP1AqS3zteALICn+uV6FVV
OupgHLDQS8jZEAoS1qXiUtFgdc9yIn1qiEQdv4wq6MDm8+tyIMDZPwOK7x5QG3K6
/CbhRM6e5lrSLBzu+J2OGNhS+yB9eUk5ZmcHPa78kgcbxqehS/KxJDyBzRpa2z0r
UNwoiKxv4a2vdPFUJzJN
=6QT5
-----END PGP SIGNATURE-----

Merge tag 'v0.4.3' into release
This commit is contained in:

commit 567f409da8

@ -1 +0,0 @@
LICENSE

@ -1,6 +1,7 @@
.git/
!.git/HEAD
!.git/refs/
!.git/packed-refs
cmd/ipfs/ipfs
vendor/gx/
test/

@ -1,4 +0,0 @@
---
triggers:
- github.com/ipfs/go-ipfs/cmd/ipfs
no_go_fmt: true

@ -1,5 +1,8 @@
# dist: trusty # KVM Setup

notifications:
  email: false

os:
- linux
- osx
@ -7,7 +10,7 @@ os:
language: go

go:
- 1.5.2
- 1.7

env:
- TEST_NO_FUSE=1 TEST_VERBOSE=1 TEST_SUITE=test_go_expensive
174 CHANGELOG.md

@ -1,5 +1,171 @@
# go-ipfs changelog

### 0.4.3-rc4 - 2016-09-09

This release candidate fixes issues in Bitswap and the `ipfs add` command, and improves testing.
We plan for this to be the last release candidate before the release of go-ipfs v0.4.3.

With this release candidate, we're also moving go-ipfs to Go 1.7, which we expect will yield improvements in runtime performance, memory usage, build time, and size of the release binaries.

- Require Go 1.7. (@whyrusleeping, @Kubuxu, @lgierth, [ipfs/go-ipfs#3163](https://github.com/ipfs/go-ipfs/pull/3163))
  - For this purpose, switch the Docker image from Alpine 3.4 to Alpine Edge.
- Fix cancellation of Bitswap `wantlist` entries. (@whyrusleeping, [ipfs/go-ipfs#3182](https://github.com/ipfs/go-ipfs/pull/3182))
- Fix clearing of the `active` state of Bitswap provider queries. (@whyrusleeping, [ipfs/go-ipfs#3169](https://github.com/ipfs/go-ipfs/pull/3169))
- Fix a panic in the DHT code. (@Kubuxu, [ipfs/go-ipfs#3200](https://github.com/ipfs/go-ipfs/pull/3200))
- Improve handling of the `Identity` field in the `ipfs config` command. (@Kubuxu, @whyrusleeping, [ipfs/go-ipfs#3141](https://github.com/ipfs/go-ipfs/pull/3141))
- Fix explicit adding of symlinked files and directories. (@kevina, [ipfs/go-ipfs#3135](https://github.com/ipfs/go-ipfs/pull/3135))
- Fix bash auto-completion of the `ipfs daemon --unrestricted-api` option. (@lgierth, [ipfs/go-ipfs#3159](https://github.com/ipfs/go-ipfs/pull/3159))
- Introduce a new timeout tool for tests to avoid licensing issues. (@Kubuxu, [ipfs/go-ipfs#3152](https://github.com/ipfs/go-ipfs/pull/3152))
- Improve output for migrations of the fs-repo. (@lgierth, [ipfs/go-ipfs#3158](https://github.com/ipfs/go-ipfs/pull/3158))
- Fix the info notice for commands taking input from stdin. (@Kubuxu, [ipfs/go-ipfs#3134](https://github.com/ipfs/go-ipfs/pull/3134))
- Bring back a few tests for stdin handling in `ipfs cat` and `ipfs add`. (@Kubuxu, [ipfs/go-ipfs#3144](https://github.com/ipfs/go-ipfs/pull/3144))
- Improve sharness tests for the `ipfs repo verify` command. (@whyrusleeping, [ipfs/go-ipfs#3148](https://github.com/ipfs/go-ipfs/pull/3148))
- Improve sharness tests for CORS headers on the gateway. (@Kubuxu, [ipfs/go-ipfs#3142](https://github.com/ipfs/go-ipfs/pull/3142))
- Improve tests for pinning within `ipfs files`. (@kevina, [ipfs/go-ipfs#3151](https://github.com/ipfs/go-ipfs/pull/3151))
- Improve tests for the automatic raising of file descriptor limits. (@whyrusleeping, [ipfs/go-ipfs#3149](https://github.com/ipfs/go-ipfs/pull/3149))

### 0.4.3-rc3 - 2016-08-09

This release candidate fixes a panic that occurred when input from stdin was
expected, but none was given: [ipfs/go-ipfs#3050](https://github.com/ipfs/go-ipfs/pull/3050)

### 0.4.3-rc2 - 2016-07-23

This release includes bugfixes and fixes for regressions that were introduced
between 0.4.2 and 0.4.3-rc1.

- Regressions
  - Fix daemon panic when there is no multipart input provided over the HTTP API.
    (@whyrusleeping, [ipfs/go-ipfs#2989](https://github.com/ipfs/go-ipfs/pull/2989))
  - Fix `ipfs refs --edges` not printing edges.
    (@Kubuxu, [ipfs/go-ipfs#3007](https://github.com/ipfs/go-ipfs/pull/3007))
  - Fix the progress option for `ipfs add` defaulting to true on the HTTP API.
    (@whyrusleeping, [ipfs/go-ipfs#3025](https://github.com/ipfs/go-ipfs/pull/3025))
  - Fix erroneous printing of the stdin reading message.
    (@whyrusleeping, [ipfs/go-ipfs#3033](https://github.com/ipfs/go-ipfs/pull/3033))
  - Fix panic caused by passing the `--mount` and `--offline` flags to `ipfs daemon`.
    (@Kubuxu, [ipfs/go-ipfs#3022](https://github.com/ipfs/go-ipfs/pull/3022))
  - Fix symlink path resolution on Windows.
    (@Kubuxu, [ipfs/go-ipfs#3023](https://github.com/ipfs/go-ipfs/pull/3023))
  - Add code to prevent issue 3032 from crashing the daemon.
    (@whyrusleeping, [ipfs/go-ipfs#3037](https://github.com/ipfs/go-ipfs/pull/3037))

### 0.4.3-rc1 - 2016-07-23

This is a maintenance release which comes with a couple of nice enhancements and improves the performance of storage, Bitswap, and Content and Peer Routing. It also introduces a handful of new commands and options, and fixes a good bunch of bugs.

This is the first Release Candidate. Unless vulnerabilities or regressions are discovered, the final 0.4.3 release will happen about one week from now.

- Security Vulnerability

  - The `master` branch of go-ipfs suffered from a vulnerability for about 3 weeks. It allowed an attacker to use an iframe to request malicious HTML and JS from the API of a local go-ipfs node. The attacker could then gain unrestricted access to the node's API, and e.g. extract the private key. We fixed this issue by reintroducing restrictions on which particular objects can be loaded through the API (@lgierth, [ipfs/go-ipfs#2949](https://github.com/ipfs/go-ipfs/pull/2949)), and by completely excluding the private key from the API (@Kubuxu, [ipfs/go-ipfs#2957](https://github.com/ipfs/go-ipfs/pull/2957)). We will also work on further hardening of the API in the next release.
  - **The previous release 0.4.2 is not vulnerable. That means if you're using official binaries from [dist.ipfs.io](https://dist.ipfs.io) you're not affected.** If you're running go-ipfs built from the `master` branch between June 17th ([ipfs/go-ipfs@1afebc21](https://github.com/ipfs/go-ipfs/commit/1afebc21f324982141ca8a29710da0d6f83ca804)) and July 7th ([ipfs/go-ipfs@39bef0d5](https://github.com/ipfs/go-ipfs/commit/39bef0d5b01f70abf679fca2c4d078a2d55620e2)), please update to v0.4.3-rc1 immediately.
  - We are grateful to the group of independent researchers who made us aware of this vulnerability. We want to use this opportunity to reiterate that we're very happy about any additional review of pull requests and releases. You can contact us any time at security@ipfs.io (GPG [4B9665FB 92636D17 7C7A86D3 50AAE8A9 59B13AF3](https://pgp.mit.edu/pks/lookup?op=get&search=0x50AAE8A959B13AF3)).

- Notable changes

  - Improve Bitswap performance. (@whyrusleeping, [ipfs/go-ipfs#2727](https://github.com/ipfs/go-ipfs/pull/2727), [ipfs/go-ipfs#2798](https://github.com/ipfs/go-ipfs/pull/2798))
  - Improve Content Routing and Peer Routing performance. (@whyrusleeping, [ipfs/go-ipfs#2817](https://github.com/ipfs/go-ipfs/pull/2817), [ipfs/go-ipfs#2841](https://github.com/ipfs/go-ipfs/pull/2841))
  - Improve datastore, blockstore, and dagstore performance. (@kevina, @Kubuxu, @whyrusleeping, [ipfs/go-datastore#43](https://github.com/ipfs/go-datastore/pull/43), [ipfs/go-ipfs#2885](https://github.com/ipfs/go-ipfs/pull/2885), [ipfs/go-ipfs#2961](https://github.com/ipfs/go-ipfs/pull/2961), [ipfs/go-ipfs#2953](https://github.com/ipfs/go-ipfs/pull/2953), [ipfs/go-ipfs#2960](https://github.com/ipfs/go-ipfs/pull/2960))
  - Content Providers are now stored on disk to reduce process memory usage. (@whyrusleeping, [ipfs/go-ipfs#2804](https://github.com/ipfs/go-ipfs/pull/2804), [ipfs/go-ipfs#2860](https://github.com/ipfs/go-ipfs/pull/2860))
  - Migrations of the fs-repo (usually stored at `~/.ipfs`) now run automatically. If there's a TTY available, you'll get prompted when running `ipfs daemon`, and in addition you can use the `--migrate=true` or `--migrate=false` options to avoid the prompt. (@whyrusleeping, @lgierth, [ipfs/go-ipfs#2939](https://github.com/ipfs/go-ipfs/pull/2939))
  - The internal naming of blocks in the blockstore has changed, which requires a migration of the fs-repo, from version 3 to 4. (@whyrusleeping, [ipfs/go-ipfs#2903](https://github.com/ipfs/go-ipfs/pull/2903))
  - We now automatically raise the file descriptor limit to 1024 if necessary. (@whyrusleeping, [ipfs/go-ipfs#2884](https://github.com/ipfs/go-ipfs/pull/2884), [ipfs/go-ipfs#2891](https://github.com/ipfs/go-ipfs/pull/2891))
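The idea behind such an automatic raise can be sketched in a few lines of Go. This is an illustrative snippet, not the go-ipfs implementation; `maybeRaiseFdLimit` is a made-up helper, and it assumes a Unix-like platform where `syscall.Getrlimit`/`Setrlimit` are available:

```go
package main

import (
	"fmt"
	"syscall"
)

// maybeRaiseFdLimit raises the soft RLIMIT_NOFILE to target if it is
// currently lower, capped at the hard limit (an unprivileged process
// cannot exceed it). It returns the resulting soft limit.
func maybeRaiseFdLimit(target uint64) (uint64, error) {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return 0, err
	}
	if lim.Cur >= target {
		return lim.Cur, nil // already high enough, nothing to do
	}
	want := target
	if want > lim.Max {
		want = lim.Max // clamp to the hard limit
	}
	lim.Cur = want
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return 0, err
	}
	return want, nil
}

func main() {
	n, err := maybeRaiseFdLimit(1024)
	if err != nil {
		fmt.Println("raising fd limit failed:", err)
		return
	}
	fmt.Println("soft fd limit is now at least:", n)
}
```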
  - After a long struggle with deadlocks and hanging connections, we've decided to disable the uTP transport by default for now. (@whyrusleeping, [ipfs/go-ipfs#2840](https://github.com/ipfs/go-ipfs/pull/2840), [ipfs/go-libp2p-transport@88244000](https://github.com/ipfs/go-libp2p-transport/commit/88244000f0ce8851ffcfbac746ebc0794b71d2a4))
  - There is now documentation for the configuration options in `docs/config.md`. (@whyrusleeping, [ipfs/go-ipfs#2974](https://github.com/ipfs/go-ipfs/pull/2974))
  - All commands now sanely handle the combination of stdin and optional flags in certain edge cases. (@lgierth, [ipfs/go-ipfs#2952](https://github.com/ipfs/go-ipfs/pull/2952))

- New Features

  - Add `--offline` option to the `ipfs daemon` command, which disables all swarm networking. (@Kubuxu, [ipfs/go-ipfs#2696](https://github.com/ipfs/go-ipfs/pull/2696), [ipfs/go-ipfs#2867](https://github.com/ipfs/go-ipfs/pull/2867))
  - Add `Datastore.HashOnRead` option for verifying block hashes on read access. (@Kubuxu, [ipfs/go-ipfs#2904](https://github.com/ipfs/go-ipfs/pull/2904))
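Verify-on-read amounts to re-hashing a block when it is fetched and rejecting it if the digest no longer matches its key. A minimal sketch in Go, assuming a toy in-memory store keyed by plain sha256 (go-ipfs keys blocks by multihash; the `blockstore` type and field names here are hypothetical):

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// ErrHashMismatch is returned when stored bytes no longer match their key.
var ErrHashMismatch = errors.New("block hash does not match its key")

// blockstore is a toy content-addressed store keyed by sha256 digests.
type blockstore struct {
	blocks     map[[32]byte][]byte
	hashOnRead bool // plays the role of the Datastore.HashOnRead setting
}

func (bs *blockstore) Put(data []byte) [32]byte {
	k := sha256.Sum256(data)
	bs.blocks[k] = data
	return k
}

func (bs *blockstore) Get(k [32]byte) ([]byte, error) {
	data, ok := bs.blocks[k]
	if !ok {
		return nil, errors.New("block not found")
	}
	if bs.hashOnRead {
		// Re-hash on every read so corruption is caught before the
		// bad bytes are handed to a caller.
		if sha256.Sum256(data) != k {
			return nil, ErrHashMismatch
		}
	}
	return data, nil
}

func main() {
	bs := &blockstore{blocks: map[[32]byte][]byte{}, hashOnRead: true}
	k := bs.Put([]byte("hello"))
	bs.blocks[k] = []byte("corrupted") // simulate bit rot
	if _, err := bs.Get(k); err != nil {
		fmt.Println(err) // the mismatch is detected on read
	}
}
```

The trade-off is one extra hash computation per read, which is why the real option is off by default and opt-in.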
  - Add `Datastore.BloomFilterSize` option for tuning the blockstore's new lookup bloom filter. (@Kubuxu, [ipfs/go-ipfs#2973](https://github.com/ipfs/go-ipfs/pull/2973))
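A bloom filter in front of the blockstore lets "definitely not stored" lookups skip the disk entirely, at the cost of occasional false positives that fall through to the real datastore. The sketch below shows the mechanism only; the sizes, hash choice, and `newBloom` API are made up for illustration, and how `BloomFilterSize` is wired up (and its unit) is specified in the linked PR, not here:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a minimal bloom filter: a bit array plus k derived hash
// positions per key.
type bloom struct {
	bits []byte
	k    int
}

func newBloom(sizeBits, k int) *bloom {
	return &bloom{bits: make([]byte, (sizeBits+7)/8), k: k}
}

// idx derives the i-th bit position for key.
func (b *bloom) idx(key string, i int) uint32 {
	h := fnv.New32a()
	fmt.Fprintf(h, "%d:%s", i, key)
	return h.Sum32() % uint32(len(b.bits)*8)
}

func (b *bloom) Add(key string) {
	for i := 0; i < b.k; i++ {
		n := b.idx(key, i)
		b.bits[n/8] |= 1 << (n % 8)
	}
}

func (b *bloom) MayHave(key string) bool {
	for i := 0; i < b.k; i++ {
		n := b.idx(key, i)
		if b.bits[n/8]&(1<<(n%8)) == 0 {
			return false // definitely absent: skip the disk lookup
		}
	}
	return true // maybe present: fall through to the datastore
}

func main() {
	f := newBloom(1<<16, 4)
	f.Add("QmSomeBlockKey")
	fmt.Println(f.MayHave("QmSomeBlockKey")) // always true for added keys
}
```

A larger filter lowers the false-positive rate but costs more memory, which is exactly the knob such a size option exposes.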

- Bugfixes

  - Fix publishing of local IPNS entries, and more. (@whyrusleeping, [ipfs/go-ipfs#2943](https://github.com/ipfs/go-ipfs/pull/2943))
  - Fix progress bars in `ipfs add` and `ipfs get`. (@whyrusleeping, [ipfs/go-ipfs#2893](https://github.com/ipfs/go-ipfs/pull/2893), [ipfs/go-ipfs#2948](https://github.com/ipfs/go-ipfs/pull/2948))
  - Make sure files added through `ipfs files` are pinned and don't get GC'd. (@kevina, [ipfs/go-ipfs#2872](https://github.com/ipfs/go-ipfs/pull/2872))
  - Fix copying into a directory using `ipfs files cp`. (@whyrusleeping, [ipfs/go-ipfs#2977](https://github.com/ipfs/go-ipfs/pull/2977))
  - Fix `ipfs version --commit` with Docker containers. (@lgierth, [ipfs/go-ipfs#2734](https://github.com/ipfs/go-ipfs/pull/2734))
  - Run `ipfs diag` commands in the daemon instead of the CLI. (@Kubuxu, [ipfs/go-ipfs#2761](https://github.com/ipfs/go-ipfs/pull/2761))
  - Fix protobuf encoding on the API and in commands. (@stebalien, [ipfs/go-ipfs#2516](https://github.com/ipfs/go-ipfs/pull/2516))
  - Fix goroutine leak in the `/ipfs/ping` protocol handler. (@whyrusleeping, [ipfs/go-libp2p#58](https://github.com/ipfs/go-libp2p/pull/58))
  - Fix the `--flags` option on `ipfs commands`. (@Kubuxu, [ipfs/go-ipfs#2773](https://github.com/ipfs/go-ipfs/pull/2773))
  - Fix the error channels in `namesys`. (@whyrusleeping, [ipfs/go-ipfs#2788](https://github.com/ipfs/go-ipfs/pull/2788))
  - Fix consumption of observed swarm addresses. (@whyrusleeping, [ipfs/go-libp2p#63](https://github.com/ipfs/go-libp2p/pull/63), [ipfs/go-ipfs#2771](https://github.com/ipfs/go-ipfs/issues/2771))
  - Fix a rare DHT panic. (@whyrusleeping, [ipfs/go-ipfs#2856](https://github.com/ipfs/go-ipfs/pull/2856))
  - Fix go-ipfs/js-ipfs interoperability issues in SPDY. (@whyrusleeping, [whyrusleeping/go-smux-spdystream@fae17783](https://github.com/whyrusleeping/go-smux-spdystream/commit/fae1778302a9e029bb308cf71cf33f857f2d89e8))
  - Fix a logging race condition during shutdown. (@Kubuxu, [ipfs/go-log#3](https://github.com/ipfs/go-log/pull/3))
  - Prevent DHT connection hangs. (@whyrusleeping, [ipfs/go-ipfs#2826](https://github.com/ipfs/go-ipfs/pull/2826), [ipfs/go-ipfs#2863](https://github.com/ipfs/go-ipfs/pull/2863))
  - Fix NDJSON output of `ipfs refs local`. (@Kubuxu, [ipfs/go-ipfs#2812](https://github.com/ipfs/go-ipfs/pull/2812))
  - Fix race condition in NAT detection. (@whyrusleeping, [ipfs/go-libp2p#69](https://github.com/ipfs/go-libp2p/pull/69))
  - Fix error messages. (@whyrusleeping, @Kubuxu, [ipfs/go-ipfs#2905](https://github.com/ipfs/go-ipfs/pull/2905), [ipfs/go-ipfs#2928](https://github.com/ipfs/go-ipfs/pull/2928))

- Enhancements

  - Increase maximum object size on `ipfs put` from 1 MiB to 2 MiB. The maximum object size on the wire including all framing is 4 MiB. (@kpcyrd, [ipfs/go-ipfs#2980](https://github.com/ipfs/go-ipfs/pull/2980))
  - Add CORS headers to the Gateway's default config. (@Kubuxu, [ipfs/go-ipfs#2778](https://github.com/ipfs/go-ipfs/pull/2778))
  - Clear the dial backoff for a peer when using `ipfs swarm connect`. (@whyrusleeping, [ipfs/go-ipfs#2941](https://github.com/ipfs/go-ipfs/pull/2941))
  - Allow passing options to the daemon in the Docker container. (@lgierth, [ipfs/go-ipfs#2955](https://github.com/ipfs/go-ipfs/pull/2955))
  - Add `-v/--verbose` to the `ipfs swarm peers` command. (@csasarak, [ipfs/go-ipfs#2713](https://github.com/ipfs/go-ipfs/pull/2713))
  - Add `--format`, `--hash`, and `--size` options to the `ipfs files stat` command. (@Kubuxu, [ipfs/go-ipfs#2706](https://github.com/ipfs/go-ipfs/pull/2706))
  - Add `--all` option to the `ipfs version` command. (@Kubuxu, [ipfs/go-ipfs#2790](https://github.com/ipfs/go-ipfs/pull/2790))
  - Add `ipfs repo version` command. (@pfista, [ipfs/go-ipfs#2598](https://github.com/ipfs/go-ipfs/pull/2598))
  - Add `ipfs repo verify` command. (@whyrusleeping, [ipfs/go-ipfs#2924](https://github.com/ipfs/go-ipfs/pull/2924), [ipfs/go-ipfs#2951](https://github.com/ipfs/go-ipfs/pull/2951))
  - Add `ipfs stats repo` and `ipfs stats bitswap` command aliases. (@pfista, [ipfs/go-ipfs#2810](https://github.com/ipfs/go-ipfs/pull/2810))
  - Add success indication to responses of the `ipfs ping` command. (@Kubuxu, [ipfs/go-ipfs#2813](https://github.com/ipfs/go-ipfs/pull/2813))
  - Save changes made via `ipfs swarm filter` to the config file. (@yuvallanger, [ipfs/go-ipfs#2880](https://github.com/ipfs/go-ipfs/pull/2880))
  - Expand the `ipfs_p2p_peers` metric to include the libp2p transport. (@lgierth, [ipfs/go-ipfs#2728](https://github.com/ipfs/go-ipfs/pull/2728))
  - Rework `ipfs files add` internals to avoid caching and prevent memory leaks. (@whyrusleeping, [ipfs/go-ipfs#2795](https://github.com/ipfs/go-ipfs/pull/2795))
  - Support `GOPATH` with multiple path components. (@karalabe, @lgierth, @djdv, [ipfs/go-ipfs#2808](https://github.com/ipfs/go-ipfs/pull/2808), [ipfs/go-ipfs#2862](https://github.com/ipfs/go-ipfs/pull/2862), [ipfs/go-ipfs#2975](https://github.com/ipfs/go-ipfs/pull/2975))

- General Codebase

  - Take steps towards the `filestore` datastore. (@kevina, [ipfs/go-ipfs#2792](https://github.com/ipfs/go-ipfs/pull/2792), [ipfs/go-ipfs#2634](https://github.com/ipfs/go-ipfs/pull/2634))
  - Update the recommended Golang version to 1.6.2. (@Kubuxu, [ipfs/go-ipfs#2724](https://github.com/ipfs/go-ipfs/pull/2724))
  - Update to Gx 0.8.0 and Gx-Go 1.2.1, which are faster and less noisy. (@whyrusleeping, [ipfs/go-ipfs#2979](https://github.com/ipfs/go-ipfs/pull/2979))
  - Use `go4.org/lock` instead of `camlistore/lock` for locking. (@whyrusleeping, [ipfs/go-ipfs#2887](https://github.com/ipfs/go-ipfs/pull/2887))
  - Manage the `go.uuid`, `hamming`, `backoff`, `proquint`, `pb`, `go-context`, `cors`, and `go-datastore` packages with Gx. (@Kubuxu, [ipfs/go-ipfs#2733](https://github.com/ipfs/go-ipfs/pull/2733), [ipfs/go-ipfs#2736](https://github.com/ipfs/go-ipfs/pull/2736), [ipfs/go-ipfs#2757](https://github.com/ipfs/go-ipfs/pull/2757), [ipfs/go-ipfs#2825](https://github.com/ipfs/go-ipfs/pull/2825), [ipfs/go-ipfs#2838](https://github.com/ipfs/go-ipfs/pull/2838))
  - Clean up the gateway's surface. (@lgierth, [ipfs/go-ipfs#2874](https://github.com/ipfs/go-ipfs/pull/2874))
  - Simplify the API gateway's access restrictions. (@lgierth, [ipfs/go-ipfs#2949](https://github.com/ipfs/go-ipfs/pull/2949), [ipfs/go-ipfs#2956](https://github.com/ipfs/go-ipfs/pull/2956))
  - Update the Docker image to Alpine Linux 3.4 and remove the Go version constraint. (@lgierth, [ipfs/go-ipfs#2901](https://github.com/ipfs/go-ipfs/pull/2901), [ipfs/go-ipfs#2929](https://github.com/ipfs/go-ipfs/pull/2929))
  - Clarify `Dockerfile` and `Dockerfile.fast`. (@lgierth, [ipfs/go-ipfs#2796](https://github.com/ipfs/go-ipfs/pull/2796))
  - Simplify resolution of Git commit refs in Dockerfiles. (@lgierth, [ipfs/go-ipfs#2754](https://github.com/ipfs/go-ipfs/pull/2754))
  - Consolidate the `--verbose` description across commands. (@Kubuxu, [ipfs/go-ipfs#2746](https://github.com/ipfs/go-ipfs/pull/2746))
  - Allow setting the position of default values in command option descriptions. (@Kubuxu, [ipfs/go-ipfs#2744](https://github.com/ipfs/go-ipfs/pull/2744))
  - Set explicit default values for boolean command options. (@RichardLitt, [ipfs/go-ipfs#2657](https://github.com/ipfs/go-ipfs/pull/2657))
  - Autogenerate command synopses. (@Kubuxu, [ipfs/go-ipfs#2785](https://github.com/ipfs/go-ipfs/pull/2785))
  - Fix and improve lots of documentation. (@RichardLitt, [ipfs/go-ipfs#2741](https://github.com/ipfs/go-ipfs/pull/2741), [ipfs/go-ipfs#2781](https://github.com/ipfs/go-ipfs/pull/2781))
  - Improve command descriptions to fit a width of 78 characters. (@RichardLitt, [ipfs/go-ipfs#2779](https://github.com/ipfs/go-ipfs/pull/2779), [ipfs/go-ipfs#2780](https://github.com/ipfs/go-ipfs/pull/2780), [ipfs/go-ipfs#2782](https://github.com/ipfs/go-ipfs/pull/2782))
  - Fix a filename conflict in the debugging guide. (@Kubuxu, [ipfs/go-ipfs#2752](https://github.com/ipfs/go-ipfs/pull/2752))
  - Decapitalize log messages, according to Golang style guides. (@RichardLitt, [ipfs/go-ipfs#2853](https://github.com/ipfs/go-ipfs/pull/2853))
  - Add a Github Issues HowTo guide. (@RichardLitt, @chriscool, [ipfs/go-ipfs#2889](https://github.com/ipfs/go-ipfs/pull/2889), [ipfs/go-ipfs#2895](https://github.com/ipfs/go-ipfs/pull/2895))
  - Add a Github Issue template. (@chriscool, [ipfs/go-ipfs#2786](https://github.com/ipfs/go-ipfs/pull/2786))
  - Apply standard-readme to the README file. (@RichardLitt, [ipfs/go-ipfs#2883](https://github.com/ipfs/go-ipfs/pull/2883))
  - Fix issues pointed out by `govet`. (@Kubuxu, [ipfs/go-ipfs#2854](https://github.com/ipfs/go-ipfs/pull/2854))
  - Clarify the `ipfs get` error message. (@whyrusleeping, [ipfs/go-ipfs#2886](https://github.com/ipfs/go-ipfs/pull/2886))
  - Remove dead code. (@whyrusleeping, [ipfs/go-ipfs#2819](https://github.com/ipfs/go-ipfs/pull/2819))
  - Add a changelog for v0.4.3. (@lgierth, [ipfs/go-ipfs#2984](https://github.com/ipfs/go-ipfs/pull/2984))

- Tests & CI

  - Fix the flaky `ipfs mount` sharness test by using the `iptb` tool. (@noffle, [ipfs/go-ipfs#2707](https://github.com/ipfs/go-ipfs/pull/2707))
  - Fix flaky IP port selection in tests. (@Kubuxu, [ipfs/go-ipfs#2855](https://github.com/ipfs/go-ipfs/pull/2855))
  - Fix CLI tests on OSX by resolving the /tmp symlink. (@Kubuxu, [ipfs/go-ipfs#2926](https://github.com/ipfs/go-ipfs/pull/2926))
  - Fix a flaky GC test by running the daemon in offline mode. (@Kubuxu, [ipfs/go-ipfs#2908](https://github.com/ipfs/go-ipfs/pull/2908))
  - Add tests for `ipfs add` with hidden files. (@Kubuxu, [ipfs/go-ipfs#2756](https://github.com/ipfs/go-ipfs/pull/2756))
  - Add a test to make sure the body of HEAD responses is empty. (@Kubuxu, [ipfs/go-ipfs#2775](https://github.com/ipfs/go-ipfs/pull/2775))
  - Add a test to catch misdials. (@Kubuxu, [ipfs/go-ipfs#2831](https://github.com/ipfs/go-ipfs/pull/2831))
  - Mark flaky tests for `ipfs dht query` as known failures. (@noffle, [ipfs/go-ipfs#2720](https://github.com/ipfs/go-ipfs/pull/2720))
  - Remove the failing blockstore-without-context test. (@Kubuxu, [ipfs/go-ipfs#2857](https://github.com/ipfs/go-ipfs/pull/2857))
  - Fix `--version` tests for versions with a suffix like `-dev` or `-rc1`. (@lgierth, [ipfs/go-ipfs#2937](https://github.com/ipfs/go-ipfs/pull/2937))
  - Make sharness tests work in cases where go-ipfs is symlinked into GOPATH. (@lgierth, [ipfs/go-ipfs#2937](https://github.com/ipfs/go-ipfs/pull/2937))
  - Add variable delays to blockstore mocks. (@rikonor, [ipfs/go-ipfs#2871](https://github.com/ipfs/go-ipfs/pull/2871))
  - Disable Travis CI email notifications. (@Kubuxu, [ipfs/go-ipfs#2896](https://github.com/ipfs/go-ipfs/pull/2896))

### 0.4.2 - 2016-05-17

This is a patch release which fixes performance and networking bugs in go-libp2p,
@ -27,14 +193,14 @@ There are also a few other nice improvements.
  * Add a debug-guidelines document. (@richardlitt)
  * Update the contribute document. (@richardlitt)
  * Fix documentation of many `ipfs` commands. (@richardlitt)
-  * Fall back to ShortDesc if LongDesc is missing. (@kubuxu)
+  * Fall back to ShortDesc if LongDesc is missing. (@Kubuxu)

* Removals
  * Remove -f option from `ipfs init` command. (@whyrusleeping)

* Bugfixes
  * Fix `ipfs object patch` argument handling and validation. (@jbenet)
-  * Fix `ipfs config edit` command by running it client-side. (@kubuxu)
+  * Fix `ipfs config edit` command by running it client-side. (@Kubuxu)
  * Set default value for `ipfs refs` arguments. (@richardlitt)
  * Fix parsing of incorrect command and argument permutations. (@thomas-gardner)
  * Update Dockerfile to latest go1.5.4-r0. (@chriscool)
@ -58,7 +224,7 @@ There are also a few other nice improvements.

* CI
  * Fix t0170-dht sharness test. (@chriscool)
-  * Increase timeout in t0060-daemon sharness test. (@kubuxu)
+  * Increase timeout in t0060-daemon sharness test. (@Kubuxu)
  * Have CircleCI use `make deps` instead of `gx` directly. (@whyrusleeping)

@ -89,7 +255,7 @@ hang bugfix that was shipped in the 0.4.0 release.
* Bugfixes
  * fixed ipfs name resolve --local multihash error (@pfista)
  * ipfs patch commands won't return null links field anymore (@whyrusleeping)
-  * Make non recursive resolve print the result (@kubuxu)
+  * Make non recursive resolve print the result (@Kubuxu)
  * Output dirs on ipfs add -rn (@noffle)
  * update libp2p dep to fix hanging listeners problem (@whyrusleeping)
  * Fix Swarm.AddrFilters config setting with regard to `/ip6` addresses (@lgierth)
15 Dockerfile

@ -1,7 +1,7 @@
-FROM alpine:3.3
+FROM alpine:edge
MAINTAINER Lars Gierth <lgierth@ipfs.io>

-# There is a copy of this Dockerfile in test/sharness,
+# There is a copy of this Dockerfile called Dockerfile.fast,
# which is optimized for build time, instead of image size.
#
# Please keep these two Dockerfiles in sync.
@ -29,7 +29,6 @@ ENV IPFS_PATH /data/ipfs
# The default logging level
ENV IPFS_LOGGING ""
# Golang stuff
-ENV GO_VERSION 1.5.4-r0
ENV GOPATH /go
ENV PATH /go/bin:$PATH
ENV SRC_PATH /go/src/github.com/ipfs/go-ipfs
@ -37,7 +36,7 @@ ENV SRC_PATH /go/src/github.com/ipfs/go-ipfs
# Get the go-ipfs sourcecode
COPY . $SRC_PATH

-RUN apk add --update musl go=$GO_VERSION git bash wget ca-certificates \
+RUN apk add --update musl-dev gcc go git bash wget ca-certificates \
# Setup user and fs-repo directory
&& mkdir -p $IPFS_PATH \
&& adduser -D -h $IPFS_PATH -u 1000 ipfs \
@ -50,11 +49,7 @@ RUN apk add --update musl go=$GO_VERSION git bash wget ca-certificates \
# Invoke gx
&& cd $SRC_PATH \
&& gx --verbose install --global \
-# We get the current commit using this hack,
-# so that we don't have to copy all of .git/ into the build context.
-# This saves us quite a bit of image size.
-&& ref="$(cat .git/HEAD | cut -d' ' -f2)" \
-&& commit="$(cat .git/$ref | head -c 7)" \
+&& mkdir .git/objects && commit=$(git rev-parse --short HEAD) \
&& echo "ldflags=-X github.com/ipfs/go-ipfs/repo/config.CurrentCommit=$commit" \
# Build and install IPFS and entrypoint script
&& cd $SRC_PATH/cmd/ipfs \
@ -63,7 +58,7 @@ RUN apk add --update musl go=$GO_VERSION git bash wget ca-certificates \
&& cp $SRC_PATH/bin/container_daemon /usr/local/bin/start_ipfs \
&& chmod 755 /usr/local/bin/start_ipfs \
# Remove all build-time dependencies
-&& apk del --purge musl go git && rm -rf $GOPATH && rm -vf $IPFS_PATH/api
+&& apk del --purge musl-dev gcc go git && rm -rf $GOPATH && rm -vf $IPFS_PATH/api

# Call uid 1000 "ipfs"
USER ipfs

@ -1,12 +1,10 @@
-FROM alpine:3.3
+FROM alpine:edge
MAINTAINER Lars Gierth <lgierth@ipfs.io>

-# This is a copy of the root Dockerfile,
+# This is a copy of /Dockerfile,
# except that we optimize for build time, instead of image size.
#
# Please keep these two Dockerfiles in sync.
#
# Only sections different from the root Dockerfile are commented.

EXPOSE 4001
@ -17,7 +15,6 @@ EXPOSE 8080
ENV GX_IPFS ""
ENV IPFS_PATH /data/ipfs
ENV IPFS_LOGGING ""
-ENV GO_VERSION 1.5.4-r0
ENV GOPATH /go
ENV PATH /go/bin:$PATH
ENV SRC_PATH /go/src/github.com/ipfs/go-ipfs
@ -31,7 +28,7 @@ ENV SRC_PATH /go/src/github.com/ipfs/go-ipfs
# and trigger a re-run of all following commands.
COPY ./package.json $SRC_PATH/package.json

-RUN apk add --update musl go=$GO_VERSION git bash wget ca-certificates \
+RUN apk add --update musl-dev gcc go git bash wget ca-certificates \
&& mkdir -p $IPFS_PATH \
&& adduser -D -h $IPFS_PATH -u 1000 ipfs \
&& chown ipfs:ipfs $IPFS_PATH && chmod 755 $IPFS_PATH \
@ -44,15 +41,14 @@ RUN apk add --update musl go=$GO_VERSION git bash wget ca-certificates \
COPY . $SRC_PATH

RUN cd $SRC_PATH \
-&& ref="$(cat .git/HEAD | cut -d' ' -f2)" \
-&& commit="$(cat .git/$ref | head -c 7)" \
+&& mkdir .git/objects && commit=$(git rev-parse --short HEAD) \
&& echo "ldflags=-X github.com/ipfs/go-ipfs/repo/config.CurrentCommit=$commit" \
&& cd $SRC_PATH/cmd/ipfs \
&& go build -ldflags "-X github.com/ipfs/go-ipfs/repo/config.CurrentCommit=$commit" \
&& cp ipfs /usr/local/bin/ipfs \
&& cp $SRC_PATH/bin/container_daemon /usr/local/bin/start_ipfs \
&& chmod 755 /usr/local/bin/start_ipfs \
-&& apk del --purge musl go git && rm -rf $GOPATH && rm -vf $IPFS_PATH/api
+&& apk del --purge musl-dev gcc go git && rm -rf $GOPATH && rm -vf $IPFS_PATH/api

USER ipfs
VOLUME $IPFS_PATH
157 Godeps/Godeps.json (generated)

@ -9,15 +9,6 @@
    "ImportPath": "bazil.org/fuse",
    "Rev": "e4fcc9a2c7567d1c42861deebeb483315d222262"
  },
  {
    "ImportPath": "bitbucket.org/ww/goautoneg",
    "Comment": "null-5",
    "Rev": "75cd24fc2f2c2a2088577d12123ddee5f54e0675"
  },
  {
    "ImportPath": "github.com/bren2010/proquint",
    "Rev": "5958552242606512f714d2e93513b380f43f9991"
  },
  {
    "ImportPath": "github.com/briantigerchow/pubsub",
    "Rev": "39ce5f556423a4c7223b370fa17a3bbd75b2d197"
@ -26,14 +17,6 @@
    "ImportPath": "github.com/camlistore/lock",
    "Rev": "ae27720f340952636b826119b58130b9c1a847a0"
  },
  {
    "ImportPath": "github.com/cenkalti/backoff",
    "Rev": "9831e1e25c874e0a0601b6dc43641071414eec7a"
  },
  {
    "ImportPath": "github.com/cheggaaa/pb",
    "Rev": "d7729fd7ec1372c15b83db39834bf842bf2d69fb"
  },
  {
    "ImportPath": "github.com/codahale/hdrhistogram",
    "Rev": "5fd85ec0b4e2dd5d4158d257d943f2e586d86b62"
@ -42,92 +25,22 @@
    "ImportPath": "github.com/codahale/metrics",
    "Rev": "7c37910bc765e705301b159683480bdd44555c91"
  },
  {
    "ImportPath": "github.com/cryptix/mdns",
    "Rev": "04ff72a32679d57d009c0ac0fc5c4cda10350bad"
  },
  {
    "ImportPath": "github.com/docker/spdystream",
    "Rev": "e372247595b2edd26f6d022288e97eed793d70a2"
  },
  {
    "ImportPath": "github.com/dustin/go-humanize",
    "Rev": "00897f070f09f194c26d65afae734ba4c32404e8"
  },
  {
    "ImportPath": "github.com/dustin/randbo",
    "Rev": "7f1b564ca7242d22bcc6e2128beb90d9fa38b9f0"
  },
  {
    "ImportPath": "github.com/facebookgo/atomicfile",
    "Rev": "6f117f2e7f224fb03eb5e5fba370eade6e2b90c8"
  },
  {
    "ImportPath": "github.com/gogo/protobuf/io",
    "Rev": "0ac967c269268f1af7d9bcc7927ccc9a589b2b36"
  },
  {
    "ImportPath": "github.com/gogo/protobuf/proto",
    "Rev": "0ac967c269268f1af7d9bcc7927ccc9a589b2b36"
  },
  {
    "ImportPath": "github.com/hashicorp/golang-lru",
    "Rev": "253b2dc1ca8bae42c3b5b6e53dd2eab1a7551116"
  },
  {
    "ImportPath": "github.com/ipfs/go-datastore",
    "Rev": "e63957b6da369d986ef3e7a3f249779ba3f56c7e"
  },
  {
    "ImportPath": "github.com/jbenet/go-base58",
    "Rev": "6237cf65f3a6f7111cd8a42be3590df99a66bc7d"
  },
  {
    "ImportPath": "github.com/jbenet/go-context/frac",
    "Rev": "d14ea06fba99483203c19d92cfcd13ebe73135f4"
  },
  {
    "ImportPath": "github.com/jbenet/go-context/io",
    "Rev": "d14ea06fba99483203c19d92cfcd13ebe73135f4"
  },
  {
    "ImportPath": "github.com/jbenet/go-detect-race",
    "Rev": "3463798d9574bd0b7eca275dccc530804ff5216f"
  },
  {
    "ImportPath": "github.com/jbenet/go-fuse-version",
    "Rev": "b733dfc0597e1f6780510ee7afad8b6e3c7af3eb"
  },
  {
    "ImportPath": "github.com/jbenet/go-is-domain",
    "Rev": "93b717f2ae17838a265e30277275ee99ee7198d6"
  },
  {
    "ImportPath": "github.com/jbenet/go-msgio",
    "Rev": "9399b44f6bf265b30bedaf2af8c0604bbc8d5275"
  },
  {
    "ImportPath": "github.com/jbenet/go-multiaddr",
    "Comment": "0.1.2-38-gc13f11b",
    "Rev": "c13f11bbfe6439771f4df7bfb330f686826144e8"
  },
  {
    "ImportPath": "github.com/jbenet/go-multiaddr-net",
    "Rev": "4a8bd8f8baf45afcf2bb385bbc17e5208d5d4c71"
  },
  {
    "ImportPath": "github.com/jbenet/go-multihash",
    "Comment": "0.1.0-39-ge8d2374",
    "Rev": "e8d2374934f16a971d1e94a864514a21ac74bf7f"
  },
  {
    "ImportPath": "github.com/jbenet/go-os-rename",
    "Rev": "3ac97f61ef67a6b87b95c1282f6c317ed0e693c2"
  },
  {
    "ImportPath": "github.com/jbenet/go-peerstream",
    "Rev": "f3ab20739a88aa79306dc039c1b5a39e7afa45d6"
  },
  {
    "ImportPath": "github.com/jbenet/go-random",
    "Rev": "cd535bd25356746b9b1e824871dda7da932460e2"
@ -136,10 +49,6 @@
    "ImportPath": "github.com/jbenet/go-random-files",
    "Rev": "737479700b40b4b50e914e963ce8d9d44603e3c8"
  },
  {
    "ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
    "Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
  },
  {
    "ImportPath": "github.com/mitchellh/go-homedir",
    "Rev": "1f6da4a72e57d4e7edd4a7295a585e0a3999a2d4"
@ -148,27 +57,6 @@
    "ImportPath": "github.com/mtchavez/jenkins",
    "Rev": "5a816af6ef21ef401bff5e4b7dd255d63400f497"
  },
  {
    "ImportPath": "github.com/olekukonko/ts",
    "Rev": "ecf753e7c962639ab5a1fb46f7da627d4c0a04b8"
  },
  {
    "ImportPath": "github.com/rs/cors",
    "Rev": "5e4ce6bc0ecd3472f6f943666d84876691be2ced"
  },
  {
    "ImportPath": "github.com/satori/go.uuid",
    "Rev": "7c7f2020c4c9491594b85767967f4619c2fa75f9"
  },
  {
    "ImportPath": "github.com/steakknife/hamming",
    "Comment": "0.0.10",
    "Rev": "8bad99011016569c05320e51be39c648679c5b73"
|
||||
},
|
||||
{
|
||||
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
|
||||
"Rev": "4875955338b0a434238a31165cb87255ab6e9e4a"
|
||||
},
|
||||
{
|
||||
"ImportPath": "github.com/syndtr/gosnappy/snappy",
|
||||
"Rev": "156a073208e131d7d2e212cb749feae7c339e846"
|
||||
@ -180,51 +68,6 @@
|
||||
{
|
||||
"ImportPath": "github.com/whyrusleeping/chunker",
|
||||
"Rev": "537e901819164627ca4bb5ce4e3faa8ce7956564"
|
||||
},
|
||||
{
|
||||
"ImportPath": "github.com/whyrusleeping/go-logging",
|
||||
"Rev": "128b9855511a4ea3ccbcf712695baf2bab72e134"
|
||||
},
|
||||
{
|
||||
"ImportPath": "github.com/whyrusleeping/go-metrics",
|
||||
"Rev": "1cd8009604ec2238b5a71305a0ecd974066e0e16"
|
||||
},
|
||||
{
|
||||
"ImportPath": "github.com/whyrusleeping/go-sysinfo",
|
||||
"Rev": "769b7c0b50e8030895abc74ba8107ac715e3162a"
|
||||
},
|
||||
{
|
||||
"ImportPath": "github.com/whyrusleeping/multiaddr-filter",
|
||||
"Rev": "9e26222151125ecd3fc1fd190179b6bdd55f5608"
|
||||
},
|
||||
{
|
||||
"ImportPath": "golang.org/x/crypto/blowfish",
|
||||
"Rev": "c84e1f8e3a7e322d497cd16c0e8a13c7e127baf3"
|
||||
},
|
||||
{
|
||||
"ImportPath": "golang.org/x/crypto/sha3",
|
||||
"Rev": "c84e1f8e3a7e322d497cd16c0e8a13c7e127baf3"
|
||||
},
|
||||
{
|
||||
"ImportPath": "golang.org/x/net/context",
|
||||
"Rev": "ff8eb9a34a5cbb9941ffc6f84a19a8014c2646ad"
|
||||
},
|
||||
{
|
||||
"ImportPath": "golang.org/x/net/internal/iana",
|
||||
"Rev": "ff8eb9a34a5cbb9941ffc6f84a19a8014c2646ad"
|
||||
},
|
||||
{
|
||||
"ImportPath": "golang.org/x/net/ipv4",
|
||||
"Rev": "ff8eb9a34a5cbb9941ffc6f84a19a8014c2646ad"
|
||||
},
|
||||
{
|
||||
"ImportPath": "golang.org/x/net/ipv6",
|
||||
"Rev": "ff8eb9a34a5cbb9941ffc6f84a19a8014c2646ad"
|
||||
},
|
||||
{
|
||||
"ImportPath": "gopkg.in/fsnotify.v1",
|
||||
"Comment": "v1.2.0",
|
||||
"Rev": "96c060f6a6b7e0d6f75fddd10efeaca3e5d1bcb0"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
2
Godeps/_workspace/src/bazil.org/fuse/syscallx/syscallx_std.go
generated
vendored
2
Godeps/_workspace/src/bazil.org/fuse/syscallx/syscallx_std.go
generated
vendored
@ -6,7 +6,7 @@ package syscallx
|
||||
// the right stuff in golang.org/x/sys/unix.
|
||||
|
||||
import (
|
||||
"golang.org/x/sys/unix"
|
||||
"gx/ipfs/QmXPKMT5cT8ajqamSD1YaeEpfeaHvs9AU4MQzte4Bkr6V4/sys/unix"
|
||||
)
|
||||
|
||||
func Getxattr(path string, attr string, dest []byte) (sz int, err error) {
|
||||
|
||||
6
Godeps/_workspace/src/github.com/bren2010/proquint/README.md
generated
vendored
6
Godeps/_workspace/src/github.com/bren2010/proquint/README.md
generated
vendored
@ -1,6 +0,0 @@
|
||||
Proquint
|
||||
-------
|
||||
|
||||
Golang implementation of [Proquint Pronounceable Identifiers](https://github.com/deoxxa/proquint).
|
||||
|
||||
|
||||
123
Godeps/_workspace/src/github.com/bren2010/proquint/proquint.go
generated
vendored
123
Godeps/_workspace/src/github.com/bren2010/proquint/proquint.go
generated
vendored
@ -1,123 +0,0 @@
|
||||
/*
|
||||
Copyright (c) 2014 Brendan McMillion
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
*/
|
||||
|
||||
package proquint
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"strings"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
var (
|
||||
conse = [...]byte{'b', 'd', 'f', 'g', 'h', 'j', 'k', 'l', 'm', 'n',
|
||||
'p', 'r', 's', 't', 'v', 'z'}
|
||||
vowse = [...]byte{'a', 'i', 'o', 'u'}
|
||||
|
||||
consd = map[byte] uint16 {
|
||||
'b' : 0, 'd' : 1, 'f' : 2, 'g' : 3,
|
||||
'h' : 4, 'j' : 5, 'k' : 6, 'l' : 7,
|
||||
'm' : 8, 'n' : 9, 'p' : 10, 'r' : 11,
|
||||
's' : 12, 't' : 13, 'v' : 14, 'z' : 15,
|
||||
}
|
||||
|
||||
vowsd = map[byte] uint16 {
|
||||
'a' : 0, 'i' : 1, 'o' : 2, 'u' : 3,
|
||||
}
|
||||
)
|
||||
|
||||
/**
|
||||
* Tests if a given string is a Proquint identifier
|
||||
*
|
||||
* @param {string} str The candidate string.
|
||||
*
|
||||
* @return {bool} Whether or not it qualifies.
|
||||
* @return {error} Error
|
||||
*/
|
||||
func IsProquint(str string) (bool, error) {
|
||||
exp := "^([abdfghijklmnoprstuvz]{5}-)*[abdfghijklmnoprstuvz]{5}$"
|
||||
ok, err := regexp.MatchString(exp, str)
|
||||
|
||||
return ok, err
|
||||
}
|
||||
|
||||
/**
|
||||
* Encodes an arbitrary byte slice into an identifier.
|
||||
*
|
||||
* @param {[]byte} buf Slice of bytes to encode.
|
||||
*
|
||||
* @return {string} The given byte slice as an identifier.
|
||||
*/
|
||||
func Encode(buf []byte) string {
|
||||
var out bytes.Buffer
|
||||
|
||||
for i := 0; i < len(buf); i = i + 2 {
|
||||
var n uint16 = (uint16(buf[i]) * 256) + uint16(buf[i + 1])
|
||||
|
||||
var (
|
||||
c1 = n & 0x0f
|
||||
v1 = (n >> 4) & 0x03
|
||||
c2 = (n >> 6) & 0x0f
|
||||
v2 = (n >> 10) & 0x03
|
||||
c3 = (n >> 12) & 0x0f
|
||||
)
|
||||
|
||||
out.WriteByte(conse[c1])
|
||||
out.WriteByte(vowse[v1])
|
||||
out.WriteByte(conse[c2])
|
||||
out.WriteByte(vowse[v2])
|
||||
out.WriteByte(conse[c3])
|
||||
|
||||
if (i + 2) < len(buf) {
|
||||
out.WriteByte('-')
|
||||
}
|
||||
}
|
||||
|
||||
return out.String()
|
||||
}
|
||||
|
||||
/**
|
||||
* Decodes an identifier into its corresponding byte slice.
|
||||
*
|
||||
* @param {string} str Identifier to convert.
|
||||
*
|
||||
* @return {[]byte} The identifier as a byte slice.
|
||||
*/
|
||||
func Decode(str string) []byte {
|
||||
var (
|
||||
out bytes.Buffer
|
||||
bits []string = strings.Split(str, "-")
|
||||
)
|
||||
|
||||
for i := 0; i < len(bits); i++ {
|
||||
var x uint16 = consd[bits[i][0]] +
|
||||
(vowsd[bits[i][1]] << 4) +
|
||||
(consd[bits[i][2]] << 6) +
|
||||
(vowsd[bits[i][3]] << 10) +
|
||||
(consd[bits[i][4]] << 12)
|
||||
|
||||
out.WriteByte(byte(x >> 8))
|
||||
out.WriteByte(byte(x))
|
||||
}
|
||||
|
||||
return out.Bytes()
|
||||
}
|
||||
1
Godeps/_workspace/src/github.com/camlistore/lock/.gitignore
generated
vendored
1
Godeps/_workspace/src/github.com/camlistore/lock/.gitignore
generated
vendored
@ -1 +0,0 @@
|
||||
*~
|
||||
202
Godeps/_workspace/src/github.com/camlistore/lock/COPYING
generated
vendored
202
Godeps/_workspace/src/github.com/camlistore/lock/COPYING
generated
vendored
@ -1,202 +0,0 @@
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
3
Godeps/_workspace/src/github.com/camlistore/lock/README.txt
generated
vendored
3
Godeps/_workspace/src/github.com/camlistore/lock/README.txt
generated
vendored
@ -1,3 +0,0 @@
|
||||
File locking library.
|
||||
|
||||
See http://godoc.org/github.com/camlistore/lock
|
||||
158
Godeps/_workspace/src/github.com/camlistore/lock/lock.go
generated
vendored
158
Godeps/_workspace/src/github.com/camlistore/lock/lock.go
generated
vendored
@ -1,158 +0,0 @@
|
||||
/*
|
||||
Copyright 2013 The Go Authors
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package lock
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// Lock locks the given file, creating the file if necessary. If the
|
||||
// file already exists, it must have zero size or an error is returned.
|
||||
// The lock is an exclusive lock (a write lock), but locked files
|
||||
// should neither be read from nor written to. Such files should have
|
||||
// zero size and only exist to co-ordinate ownership across processes.
|
||||
//
|
||||
// A nil Closer is returned if an error occurred. Otherwise, close that
|
||||
// Closer to release the lock.
|
||||
//
|
||||
// On Linux, FreeBSD and OSX, a lock has the same semantics as fcntl(2)'s
|
||||
// advisory locks. In particular, closing any other file descriptor for the
|
||||
// same file will release the lock prematurely.
|
||||
//
|
||||
// Attempting to lock a file that is already locked by the current process
|
||||
// has undefined behavior.
|
||||
//
|
||||
// On other operating systems, lock will fallback to using the presence and
|
||||
// content of a file named name + '.lock' to implement locking behavior.
|
||||
func Lock(name string) (io.Closer, error) {
|
||||
return lockFn(name)
|
||||
}
|
||||
|
||||
var lockFn = lockPortable
|
||||
|
||||
// Portable version not using fcntl. Doesn't handle crashes as gracefully,
|
||||
// since it can leave stale lock files.
|
||||
// TODO: write pid of owner to lock file and on race see if pid is
|
||||
// still alive?
|
||||
func lockPortable(name string) (io.Closer, error) {
|
||||
absName, err := filepath.Abs(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("can't Lock file %q: can't find abs path: %v", name, err)
|
||||
}
|
||||
fi, err := os.Stat(absName)
|
||||
if err == nil && fi.Size() > 0 {
|
||||
if isStaleLock(absName) {
|
||||
os.Remove(absName)
|
||||
} else {
|
||||
return nil, fmt.Errorf("can't Lock file %q: has non-zero size", name)
|
||||
}
|
||||
}
|
||||
f, err := os.OpenFile(absName, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_EXCL, 0666)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create lock file %s %v", absName, err)
|
||||
}
|
||||
if err := json.NewEncoder(f).Encode(&pidLockMeta{OwnerPID: os.Getpid()}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &lockCloser{f: f, abs: absName}, nil
|
||||
}
|
||||
|
||||
type pidLockMeta struct {
|
||||
OwnerPID int
|
||||
}
|
||||
|
||||
func isStaleLock(path string) bool {
|
||||
f, err := os.Open(path)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
defer f.Close()
|
||||
var meta pidLockMeta
|
||||
if json.NewDecoder(f).Decode(&meta) != nil {
|
||||
return false
|
||||
}
|
||||
if meta.OwnerPID == 0 {
|
||||
return false
|
||||
}
|
||||
p, err := os.FindProcess(meta.OwnerPID)
|
||||
if err != nil {
|
||||
// e.g. on Windows
|
||||
return true
|
||||
}
|
||||
// On unix, os.FindProcess always is true, so we have to send
|
||||
// it a signal to see if it's alive.
|
||||
if signalZero != nil {
|
||||
if p.Signal(signalZero) != nil {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
var signalZero os.Signal // nil or set by lock_sigzero.go
|
||||
|
||||
type lockCloser struct {
|
||||
f *os.File
|
||||
abs string
|
||||
once sync.Once
|
||||
err error
|
||||
}
|
||||
|
||||
func (lc *lockCloser) Close() error {
|
||||
lc.once.Do(lc.close)
|
||||
return lc.err
|
||||
}
|
||||
|
||||
func (lc *lockCloser) close() {
|
||||
if err := lc.f.Close(); err != nil {
|
||||
lc.err = err
|
||||
}
|
||||
if err := os.Remove(lc.abs); err != nil {
|
||||
lc.err = err
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
lockmu sync.Mutex
|
||||
locked = map[string]bool{} // abs path -> true
|
||||
)
|
||||
|
||||
// unlocker is used by the darwin and linux implementations with fcntl
|
||||
// advisory locks.
|
||||
type unlocker struct {
|
||||
f *os.File
|
||||
abs string
|
||||
}
|
||||
|
||||
func (u *unlocker) Close() error {
|
||||
lockmu.Lock()
|
||||
// Remove is not necessary but it's nice for us to clean up.
|
||||
// If we do do this, though, it needs to be before the
|
||||
// u.f.Close below.
|
||||
os.Remove(u.abs)
|
||||
if err := u.f.Close(); err != nil {
|
||||
return err
|
||||
}
|
||||
delete(locked, u.abs)
|
||||
lockmu.Unlock()
|
||||
return nil
|
||||
}
|
||||
32
Godeps/_workspace/src/github.com/camlistore/lock/lock_appengine.go
generated
vendored
32
Godeps/_workspace/src/github.com/camlistore/lock/lock_appengine.go
generated
vendored
@ -1,32 +0,0 @@
|
||||
// +build appengine
|
||||
|
||||
/*
|
||||
Copyright 2013 The Go Authors
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package lock
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
func init() {
|
||||
lockFn = lockAppEngine
|
||||
}
|
||||
|
||||
func lockAppEngine(name string) (io.Closer, error) {
|
||||
return nil, errors.New("Lock not available on App Engine")
|
||||
}
|
||||
80
Godeps/_workspace/src/github.com/camlistore/lock/lock_darwin_amd64.go
generated
vendored
80
Godeps/_workspace/src/github.com/camlistore/lock/lock_darwin_amd64.go
generated
vendored
@ -1,80 +0,0 @@
|
||||
// +build darwin,amd64
|
||||
// +build !appengine
|
||||
|
||||
/*
|
||||
Copyright 2013 The Go Authors
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package lock
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
func init() {
|
||||
lockFn = lockFcntl
|
||||
}
|
||||
|
||||
func lockFcntl(name string) (io.Closer, error) {
|
||||
abs, err := filepath.Abs(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
lockmu.Lock()
|
||||
if locked[abs] {
|
||||
lockmu.Unlock()
|
||||
return nil, fmt.Errorf("file %q already locked", abs)
|
||||
}
|
||||
locked[abs] = true
|
||||
lockmu.Unlock()
|
||||
|
||||
fi, err := os.Stat(name)
|
||||
if err == nil && fi.Size() > 0 {
|
||||
return nil, fmt.Errorf("can't Lock file %q: has non-zero size", name)
|
||||
}
|
||||
|
||||
f, err := os.Create(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Lock Create of %s (abs: %s) failed: %v", name, abs, err)
|
||||
}
|
||||
|
||||
// This type matches C's "struct flock" defined in /usr/include/sys/fcntl.h.
|
||||
// TODO: move this into the standard syscall package.
|
||||
k := struct {
|
||||
Start uint64 // sizeof(off_t): 8
|
||||
Len uint64 // sizeof(off_t): 8
|
||||
Pid uint32 // sizeof(pid_t): 4
|
||||
Type uint16 // sizeof(short): 2
|
||||
Whence uint16 // sizeof(short): 2
|
||||
}{
|
||||
Type: syscall.F_WRLCK,
|
||||
Whence: uint16(os.SEEK_SET),
|
||||
Start: 0,
|
||||
Len: 0, // 0 means to lock the entire file.
|
||||
Pid: uint32(os.Getpid()),
|
||||
}
|
||||
|
||||
_, _, errno := syscall.Syscall(syscall.SYS_FCNTL, f.Fd(), uintptr(syscall.F_SETLK), uintptr(unsafe.Pointer(&k)))
|
||||
if errno != 0 {
|
||||
f.Close()
|
||||
return nil, errno
|
||||
}
|
||||
return &unlocker{f, abs}, nil
|
||||
}
|
||||
79
Godeps/_workspace/src/github.com/camlistore/lock/lock_freebsd.go
generated
vendored
79
Godeps/_workspace/src/github.com/camlistore/lock/lock_freebsd.go
generated
vendored
@ -1,79 +0,0 @@
|
||||
/*
Copyright 2013 The Go Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package lock

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"syscall"
	"unsafe"
)

func init() {
	lockFn = lockFcntl
}

func lockFcntl(name string) (io.Closer, error) {
	abs, err := filepath.Abs(name)
	if err != nil {
		return nil, err
	}
	lockmu.Lock()
	if locked[abs] {
		lockmu.Unlock()
		return nil, fmt.Errorf("file %q already locked", abs)
	}
	locked[abs] = true
	lockmu.Unlock()

	fi, err := os.Stat(name)
	if err == nil && fi.Size() > 0 {
		return nil, fmt.Errorf("can't Lock file %q: has non-zero size", name)
	}

	f, err := os.Create(name)
	if err != nil {
		return nil, err
	}

	// This type matches C's "struct flock" defined in /usr/include/fcntl.h.
	// TODO: move this into the standard syscall package.
	k := struct {
		Start  int64 /* off_t starting offset */
		Len    int64 /* off_t len = 0 means until end of file */
		Pid    int32 /* pid_t lock owner */
		Type   int16 /* short lock type: read/write, etc. */
		Whence int16 /* short type of l_start */
		Sysid  int32 /* int remote system id or zero for local */
	}{
		Start:  0,
		Len:    0, // 0 means to lock the entire file.
		Pid:    int32(os.Getpid()),
		Type:   syscall.F_WRLCK,
		Whence: int16(os.SEEK_SET),
		Sysid:  0,
	}

	_, _, errno := syscall.Syscall(syscall.SYS_FCNTL, f.Fd(), uintptr(syscall.F_SETLK), uintptr(unsafe.Pointer(&k)))
	if errno != 0 {
		f.Close()
		return nil, errno
	}
	return &unlocker{f, abs}, nil
}
80	Godeps/_workspace/src/github.com/camlistore/lock/lock_linux_amd64.go generated vendored
@@ -1,80 +0,0 @@
// +build linux,amd64
// +build !appengine

/*
Copyright 2013 The Go Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package lock

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"syscall"
	"unsafe"
)

func init() {
	lockFn = lockFcntl
}

func lockFcntl(name string) (io.Closer, error) {
	abs, err := filepath.Abs(name)
	if err != nil {
		return nil, err
	}
	lockmu.Lock()
	if locked[abs] {
		lockmu.Unlock()
		return nil, fmt.Errorf("file %q already locked", abs)
	}
	locked[abs] = true
	lockmu.Unlock()

	fi, err := os.Stat(name)
	if err == nil && fi.Size() > 0 {
		return nil, fmt.Errorf("can't Lock file %q: has non-zero size", name)
	}

	f, err := os.Create(name)
	if err != nil {
		return nil, err
	}

	// This type matches C's "struct flock" defined in /usr/include/bits/fcntl.h.
	// TODO: move this into the standard syscall package.
	k := struct {
		Type   uint32
		Whence uint32
		Start  uint64
		Len    uint64
		Pid    uint32
	}{
		Type:   syscall.F_WRLCK,
		Whence: uint32(os.SEEK_SET),
		Start:  0,
		Len:    0, // 0 means to lock the entire file.
		Pid:    uint32(os.Getpid()),
	}

	_, _, errno := syscall.Syscall(syscall.SYS_FCNTL, f.Fd(), uintptr(syscall.F_SETLK), uintptr(unsafe.Pointer(&k)))
	if errno != 0 {
		f.Close()
		return nil, errno
	}
	return &unlocker{f, abs}, nil
}
81	Godeps/_workspace/src/github.com/camlistore/lock/lock_linux_arm.go generated vendored
@@ -1,81 +0,0 @@
// +build linux,arm
// +build !appengine

/*
Copyright 2013 The Go Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package lock

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"syscall"
	"unsafe"
)

func init() {
	lockFn = lockFcntl
}

func lockFcntl(name string) (io.Closer, error) {
	abs, err := filepath.Abs(name)
	if err != nil {
		return nil, err
	}
	lockmu.Lock()
	if locked[abs] {
		lockmu.Unlock()
		return nil, fmt.Errorf("file %q already locked", abs)
	}
	locked[abs] = true
	lockmu.Unlock()

	fi, err := os.Stat(name)
	if err == nil && fi.Size() > 0 {
		return nil, fmt.Errorf("can't Lock file %q: has non-zero size", name)
	}

	f, err := os.Create(name)
	if err != nil {
		return nil, err
	}

	// This type matches C's "struct flock" defined in /usr/include/bits/fcntl.h.
	// TODO: move this into the standard syscall package.
	k := struct {
		Type   uint16
		Whence uint16
		Start  uint32
		Len    uint32
		Pid    uint32
	}{
		Type:   syscall.F_WRLCK,
		Whence: uint16(os.SEEK_SET),
		Start:  0,
		Len:    0, // 0 means to lock the entire file.
		Pid:    uint32(os.Getpid()),
	}

	const F_SETLK = 6 // actual value. syscall package is wrong: golang.org/issue/7059
	_, _, errno := syscall.Syscall(syscall.SYS_FCNTL, f.Fd(), uintptr(F_SETLK), uintptr(unsafe.Pointer(&k)))
	if errno != 0 {
		f.Close()
		return nil, errno
	}
	return &unlocker{f, abs}, nil
}
55	Godeps/_workspace/src/github.com/camlistore/lock/lock_plan9.go generated vendored
@@ -1,55 +0,0 @@
/*
Copyright 2013 The Go Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package lock

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func init() {
	lockFn = lockPlan9
}

func lockPlan9(name string) (io.Closer, error) {
	var f *os.File
	abs, err := filepath.Abs(name)
	if err != nil {
		return nil, err
	}
	lockmu.Lock()
	if locked[abs] {
		lockmu.Unlock()
		return nil, fmt.Errorf("file %q already locked", abs)
	}
	locked[abs] = true
	lockmu.Unlock()

	fi, err := os.Stat(name)
	if err == nil && fi.Size() > 0 {
		return nil, fmt.Errorf("can't Lock file %q: has non-zero size", name)
	}

	f, err = os.OpenFile(name, os.O_RDWR|os.O_CREATE, os.ModeExclusive|0644)
	if err != nil {
		return nil, fmt.Errorf("Lock Create of %s (abs: %s) failed: %v", name, abs, err)
	}

	return &unlocker{f, abs}, nil
}
26	Godeps/_workspace/src/github.com/camlistore/lock/lock_sigzero.go generated vendored
@@ -1,26 +0,0 @@
// +build !appengine
// +build linux darwin freebsd openbsd netbsd dragonfly

/*
Copyright 2013 The Go Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package lock

import "syscall"

func init() {
	signalZero = syscall.Signal(0)
}
131	Godeps/_workspace/src/github.com/camlistore/lock/lock_test.go generated vendored
@@ -1,131 +0,0 @@
/*
Copyright 2013 The Go Authors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package lock

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"testing"
)

func TestLock(t *testing.T) {
	testLock(t, false)
}

func TestLockPortable(t *testing.T) {
	testLock(t, true)
}

func TestLockInChild(t *testing.T) {
	f := os.Getenv("TEST_LOCK_FILE")
	if f == "" {
		// not child
		return
	}
	lock := Lock
	if v, _ := strconv.ParseBool(os.Getenv("TEST_LOCK_PORTABLE")); v {
		lock = lockPortable
	}

	lk, err := lock(f)
	if err != nil {
		log.Fatalf("Lock failed: %v", err)
	}

	if v, _ := strconv.ParseBool(os.Getenv("TEST_LOCK_CRASH")); v {
		// Simulate a crash, or at least not unlocking the
		// lock. We still exit 0 just to simplify the parent
		// process exec code.
		os.Exit(0)
	}
	lk.Close()
}

func testLock(t *testing.T, portable bool) {
	lock := Lock
	if portable {
		lock = lockPortable
	}

	td, err := ioutil.TempDir("", "")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(td)

	path := filepath.Join(td, "foo.lock")

	childLock := func(crash bool) error {
		cmd := exec.Command(os.Args[0], "-test.run=LockInChild$")
		cmd.Env = []string{"TEST_LOCK_FILE=" + path}
		if portable {
			cmd.Env = append(cmd.Env, "TEST_LOCK_PORTABLE=1")
		}
		if crash {
			cmd.Env = append(cmd.Env, "TEST_LOCK_CRASH=1")
		}
		out, err := cmd.CombinedOutput()
		t.Logf("Child output: %q (err %v)", out, err)
		if err != nil {
			return fmt.Errorf("Child Process lock of %s failed: %v %s", path, err, out)
		}
		return nil
	}

	t.Logf("Locking in crashing child...")
	if err := childLock(true); err != nil {
		t.Fatalf("first lock in child process: %v", err)
	}

	t.Logf("Locking+unlocking in child...")
	if err := childLock(false); err != nil {
		t.Fatalf("lock in child process after crashing child: %v", err)
	}

	t.Logf("Locking in parent...")
	lk1, err := lock(path)
	if err != nil {
		t.Fatal(err)
	}

	t.Logf("Again in parent...")
	_, err = lock(path)
	if err == nil {
		t.Fatal("expected second lock to fail")
	}

	t.Logf("Locking in child...")
	if childLock(false) == nil {
		t.Fatalf("expected lock in child process to fail")
	}

	t.Logf("Unlocking lock in parent")
	if err := lk1.Close(); err != nil {
		t.Fatal(err)
	}

	lk3, err := lock(path)
	if err != nil {
		t.Fatal(err)
	}
	lk3.Close()
}
22	Godeps/_workspace/src/github.com/cenkalti/backoff/.gitignore generated vendored
@@ -1,22 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
_obj
_test

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
2	Godeps/_workspace/src/github.com/cenkalti/backoff/.travis.yml generated vendored
@@ -1,2 +0,0 @@
language: go
go: 1.3.3
20	Godeps/_workspace/src/github.com/cenkalti/backoff/LICENSE generated vendored
@@ -1,20 +0,0 @@
The MIT License (MIT)

Copyright (c) 2014 Cenk Altı

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
69	Godeps/_workspace/src/github.com/cenkalti/backoff/README.md generated vendored
@@ -1,69 +0,0 @@
# backoff

[](https://godoc.org/github.com/cenkalti/backoff)
[](https://travis-ci.org/cenkalti/backoff)

This is a Go port of the exponential backoff algorithm from
[google-http-java-client](https://code.google.com/p/google-http-java-client/wiki/ExponentialBackoff).

[Exponential backoff](http://en.wikipedia.org/wiki/Exponential_backoff)
is an algorithm that uses feedback to multiplicatively decrease the rate of some process,
in order to gradually find an acceptable rate.
The retries exponentially increase and stop increasing when a certain threshold is met.

## Install

```bash
go get github.com/cenkalti/backoff
```

## Example

Simple retry helper that uses exponential back-off algorithm:

```go
operation := func() error {
	// An operation that might fail
}

err := backoff.Retry(operation, backoff.NewExponentialBackOff())
if err != nil {
	// handle error
}

// operation is successfull
```

Ticker example:

```go
operation := func() error {
	// An operation that may fail
}

b := backoff.NewExponentialBackOff()
ticker := backoff.NewTicker(b)

var err error

// Ticks will continue to arrive when the previous operation is still running,
// so operations that take a while to fail could run in quick succession.
for t = range ticker.C {
	if err = operation(); err != nil {
		log.Println(err, "will retry...")
		continue
	}

	ticker.Stop()
	break
}

if err != nil {
	// Operation has failed.
}

// Operation is successfull.
```
56	Godeps/_workspace/src/github.com/cenkalti/backoff/backoff.go generated vendored
@@ -1,56 +0,0 @@
// Package backoff implements backoff algorithms for retrying operations.
//
// Also has a Retry() helper for retrying operations that may fail.
package backoff

import "time"

// Back-off policy when retrying an operation.
type BackOff interface {
	// Gets the duration to wait before retrying the operation or
	// backoff.Stop to indicate that no retries should be made.
	//
	// Example usage:
	//
	// 	duration := backoff.NextBackOff();
	// 	if (duration == backoff.Stop) {
	// 		// do not retry operation
	// 	} else {
	// 		// sleep for duration and retry operation
	// 	}
	//
	NextBackOff() time.Duration

	// Reset to initial state.
	Reset()
}

// Indicates that no more retries should be made for use in NextBackOff().
const Stop time.Duration = -1

// ZeroBackOff is a fixed back-off policy whose back-off time is always zero,
// meaning that the operation is retried immediately without waiting.
type ZeroBackOff struct{}

func (b *ZeroBackOff) Reset() {}

func (b *ZeroBackOff) NextBackOff() time.Duration { return 0 }

// StopBackOff is a fixed back-off policy that always returns backoff.Stop for
// NextBackOff(), meaning that the operation should not be retried.
type StopBackOff struct{}

func (b *StopBackOff) Reset() {}

func (b *StopBackOff) NextBackOff() time.Duration { return Stop }

type ConstantBackOff struct {
	Interval time.Duration
}

func (b *ConstantBackOff) Reset()                     {}
func (b *ConstantBackOff) NextBackOff() time.Duration { return b.Interval }

func NewConstantBackOff(d time.Duration) *ConstantBackOff {
	return &ConstantBackOff{Interval: d}
}
28	Godeps/_workspace/src/github.com/cenkalti/backoff/backoff_test.go generated vendored
@@ -1,28 +0,0 @@
package backoff

import (
	"time"

	"testing"
)

func TestNextBackOffMillis(t *testing.T) {
	subtestNextBackOff(t, 0, new(ZeroBackOff))
	subtestNextBackOff(t, Stop, new(StopBackOff))
}

func subtestNextBackOff(t *testing.T, expectedValue time.Duration, backOffPolicy BackOff) {
	for i := 0; i < 10; i++ {
		next := backOffPolicy.NextBackOff()
		if next != expectedValue {
			t.Errorf("got: %d expected: %d", next, expectedValue)
		}
	}
}

func TestConstantBackOff(t *testing.T) {
	backoff := NewConstantBackOff(time.Second)
	if backoff.NextBackOff() != time.Second {
		t.Error("invalid interval")
	}
}
141	Godeps/_workspace/src/github.com/cenkalti/backoff/exponential.go generated vendored
@@ -1,141 +0,0 @@
package backoff

import (
	"math/rand"
	"time"
)

/*
ExponentialBackOff is an implementation of BackOff that increases the back off
period for each retry attempt using a randomization function that grows exponentially.

NextBackOff() is calculated using the following formula:

	randomized_interval =
	    retry_interval * (random value in range [1 - randomization_factor, 1 + randomization_factor])

In other words NextBackOff() will range between the randomization factor
percentage below and above the retry interval. For example, using 2 seconds as the base retry
interval and 0.5 as the randomization factor, the actual back off period used in the next retry
attempt will be between 1 and 3 seconds.

NOTE: max_interval caps the retry_interval and not the randomized_interval.

If the time elapsed since an ExponentialBackOff instance is created goes past the
max_elapsed_time then the method NextBackOff() starts returning backoff.Stop.
The elapsed time can be reset by calling Reset().

EXAMPLE: The default retry_interval is .5 seconds, default randomization_factor is 0.5, default
multiplier is 1.5 and the default max_interval is 1 minute. For 10 tries the sequence will be
(values in seconds) and assuming we go over the max_elapsed_time on the 10th try:

	request#  retry_interval  randomized_interval

	 1         0.5             [0.25,   0.75]
	 2         0.75            [0.375,  1.125]
	 3         1.125           [0.562,  1.687]
	 4         1.687           [0.8435, 2.53]
	 5         2.53            [1.265,  3.795]
	 6         3.795           [1.897,  5.692]
	 7         5.692           [2.846,  8.538]
	 8         8.538           [4.269, 12.807]
	 9        12.807           [6.403, 19.210]
	10        19.210           backoff.Stop

Implementation is not thread-safe.
*/
type ExponentialBackOff struct {
	InitialInterval     time.Duration
	RandomizationFactor float64
	Multiplier          float64
	MaxInterval         time.Duration
	// After MaxElapsedTime the ExponentialBackOff stops.
	// It never stops if MaxElapsedTime == 0.
	MaxElapsedTime time.Duration
	Clock          Clock

	currentInterval time.Duration
	startTime       time.Time
}

// Clock is an interface that returns current time for BackOff.
type Clock interface {
	Now() time.Time
}

// Default values for ExponentialBackOff.
const (
	DefaultInitialInterval     = 500 * time.Millisecond
	DefaultRandomizationFactor = 0.5
	DefaultMultiplier          = 1.5
	DefaultMaxInterval         = 60 * time.Second
	DefaultMaxElapsedTime      = 15 * time.Minute
)

// NewExponentialBackOff creates an instance of ExponentialBackOff using default values.
func NewExponentialBackOff() *ExponentialBackOff {
	return &ExponentialBackOff{
		InitialInterval:     DefaultInitialInterval,
		RandomizationFactor: DefaultRandomizationFactor,
		Multiplier:          DefaultMultiplier,
		MaxInterval:         DefaultMaxInterval,
		MaxElapsedTime:      DefaultMaxElapsedTime,
		Clock:               SystemClock,
	}
}

type systemClock struct{}

func (t systemClock) Now() time.Time {
	return time.Now()
}

// SystemClock implements Clock interface that uses time.Now().
var SystemClock = systemClock{}

// Reset the interval back to the initial retry interval and restarts the timer.
func (b *ExponentialBackOff) Reset() {
	b.currentInterval = b.InitialInterval
	b.startTime = b.Clock.Now()
}

// NextBackOff calculates the next back off interval using the formula:
// 	randomized_interval = retry_interval +/- (randomization_factor * retry_interval)
func (b *ExponentialBackOff) NextBackOff() time.Duration {
	// Make sure we have not gone over the maximum elapsed time.
	if b.MaxElapsedTime != 0 && b.GetElapsedTime() > b.MaxElapsedTime {
		return Stop
	}
	defer b.incrementCurrentInterval()
	return getRandomValueFromInterval(b.RandomizationFactor, rand.Float64(), b.currentInterval)
}

// GetElapsedTime returns the elapsed time since an ExponentialBackOff instance
// is created and is reset when Reset() is called.
//
// The elapsed time is computed using time.Now().UnixNano().
func (b *ExponentialBackOff) GetElapsedTime() time.Duration {
	return b.Clock.Now().Sub(b.startTime)
}

// Increments the current interval by multiplying it with the multiplier.
func (b *ExponentialBackOff) incrementCurrentInterval() {
	// Check for overflow, if overflow is detected set the current interval to the max interval.
	if float64(b.currentInterval) >= float64(b.MaxInterval)/b.Multiplier {
		b.currentInterval = b.MaxInterval
	} else {
		b.currentInterval = time.Duration(float64(b.currentInterval) * b.Multiplier)
	}
}

// Returns a random value from the interval:
// 	[randomizationFactor * currentInterval, randomizationFactor * currentInterval].
func getRandomValueFromInterval(randomizationFactor, random float64, currentInterval time.Duration) time.Duration {
	var delta = randomizationFactor * float64(currentInterval)
	var minInterval = float64(currentInterval) - delta
	var maxInterval = float64(currentInterval) + delta
	// Get a random value from the range [minInterval, maxInterval].
	// The formula used below has a +1 because if the minInterval is 1 and the maxInterval is 3 then
	// we want a 33% chance for selecting either 1, 2 or 3.
	return time.Duration(minInterval + (random * (maxInterval - minInterval + 1)))
}
111	Godeps/_workspace/src/github.com/cenkalti/backoff/exponential_test.go generated vendored
@@ -1,111 +0,0 @@
package backoff

import (
	"math"
	"testing"
	"time"
)

func TestBackOff(t *testing.T) {
	var (
		testInitialInterval     = 500 * time.Millisecond
		testRandomizationFactor = 0.1
		testMultiplier          = 2.0
		testMaxInterval         = 5 * time.Second
		testMaxElapsedTime      = 15 * time.Minute
	)

	exp := NewExponentialBackOff()
	exp.InitialInterval = testInitialInterval
	exp.RandomizationFactor = testRandomizationFactor
	exp.Multiplier = testMultiplier
	exp.MaxInterval = testMaxInterval
	exp.MaxElapsedTime = testMaxElapsedTime
	exp.Reset()

	var expectedResults = []time.Duration{500, 1000, 2000, 4000, 5000, 5000, 5000, 5000, 5000, 5000}
	for i, d := range expectedResults {
		expectedResults[i] = d * time.Millisecond
	}

	for _, expected := range expectedResults {
		assertEquals(t, expected, exp.currentInterval)
		// Assert that the next back off falls in the expected range.
		var minInterval = expected - time.Duration(testRandomizationFactor*float64(expected))
		var maxInterval = expected + time.Duration(testRandomizationFactor*float64(expected))
		var actualInterval = exp.NextBackOff()
		if !(minInterval <= actualInterval && actualInterval <= maxInterval) {
			t.Error("error")
		}
	}
}

func TestGetRandomizedInterval(t *testing.T) {
	// 33% chance of being 1.
	assertEquals(t, 1, getRandomValueFromInterval(0.5, 0, 2))
	assertEquals(t, 1, getRandomValueFromInterval(0.5, 0.33, 2))
	// 33% chance of being 2.
	assertEquals(t, 2, getRandomValueFromInterval(0.5, 0.34, 2))
	assertEquals(t, 2, getRandomValueFromInterval(0.5, 0.66, 2))
	// 33% chance of being 3.
	assertEquals(t, 3, getRandomValueFromInterval(0.5, 0.67, 2))
	assertEquals(t, 3, getRandomValueFromInterval(0.5, 0.99, 2))
}

type TestClock struct {
	i     time.Duration
	start time.Time
}

func (c *TestClock) Now() time.Time {
	t := c.start.Add(c.i)
	c.i += time.Second
	return t
}

func TestGetElapsedTime(t *testing.T) {
	var exp = NewExponentialBackOff()
	exp.Clock = &TestClock{}
	exp.Reset()

	var elapsedTime = exp.GetElapsedTime()
	if elapsedTime != time.Second {
		t.Errorf("elapsedTime=%d", elapsedTime)
	}
}

func TestMaxElapsedTime(t *testing.T) {
	var exp = NewExponentialBackOff()
	exp.Clock = &TestClock{start: time.Time{}.Add(10000 * time.Second)}
	if exp.NextBackOff() != Stop {
		t.Error("error2")
	}
	// Change the currentElapsedTime to be 0 ensuring that the elapsed time will be greater
	// than the max elapsed time.
	exp.startTime = time.Time{}
	assertEquals(t, Stop, exp.NextBackOff())
}

func TestBackOffOverflow(t *testing.T) {
	var (
		testInitialInterval time.Duration = math.MaxInt64 / 2
		testMaxInterval     time.Duration = math.MaxInt64
		testMultiplier      float64       = 2.1
	)

	exp := NewExponentialBackOff()
	exp.InitialInterval = testInitialInterval
	exp.Multiplier = testMultiplier
	exp.MaxInterval = testMaxInterval
	exp.Reset()

	exp.NextBackOff()
	// Assert that when an overflow is possible the current interval is set to the max interval.
	assertEquals(t, testMaxInterval, exp.currentInterval)
}

func assertEquals(t *testing.T, expected, value time.Duration) {
	if expected != value {
		t.Errorf("got: %d, expected: %d", value, expected)
	}
}
47	Godeps/_workspace/src/github.com/cenkalti/backoff/retry.go generated vendored
@@ -1,47 +0,0 @@
package backoff

import "time"

// Retry the function f until it does not return error or BackOff stops.
// f is guaranteed to be run at least once.
// It is the caller's responsibility to reset b after Retry returns.
//
// Retry sleeps the goroutine for the duration returned by BackOff after a
// failed operation returns.
//
// Usage:
// 	operation := func() error {
// 		// An operation that may fail
// 	}
//
// 	err := backoff.Retry(operation, backoff.NewExponentialBackOff())
// 	if err != nil {
// 		// Operation has failed.
// 	}
//
// 	// Operation is successfull.
//
func Retry(f func() error, b BackOff) error { return RetryNotify(f, b, nil) }

// RetryNotify calls notify function with the error and wait duration for each failed attempt before sleep.
func RetryNotify(f func() error, b BackOff, notify func(err error, wait time.Duration)) error {
	var err error
	var next time.Duration

	b.Reset()
	for {
		if err = f(); err == nil {
			return nil
		}

		if next = b.NextBackOff(); next == Stop {
			return err
		}

		if notify != nil {
			notify(err, next)
		}

		time.Sleep(next)
	}
}
34	Godeps/_workspace/src/github.com/cenkalti/backoff/retry_test.go generated vendored
@@ -1,34 +0,0 @@
package backoff

import (
	"errors"
	"log"
	"testing"
)

func TestRetry(t *testing.T) {
	const successOn = 3
	var i = 0

	// This function is successfull on "successOn" calls.
	f := func() error {
		i++
		log.Printf("function is called %d. time\n", i)

		if i == successOn {
			log.Println("OK")
			return nil
		}

		log.Println("error")
		return errors.New("error")
	}

	err := Retry(f, NewExponentialBackOff())
	if err != nil {
		t.Errorf("unexpected error: %s", err.Error())
	}
	if i != successOn {
		t.Errorf("invalid number of retries: %d", i)
	}
}
105
Godeps/_workspace/src/github.com/cenkalti/backoff/ticker.go
generated
vendored
@@ -1,105 +0,0 @@
package backoff

import (
	"runtime"
	"sync"
	"time"
)

// Ticker holds a channel that delivers `ticks' of a clock at times reported by a BackOff.
//
// Ticks will continue to arrive when the previous operation is still running,
// so operations that take a while to fail could run in quick succession.
//
// Usage:
// 	operation := func() error {
// 		// An operation that may fail
// 	}
//
// 	b := backoff.NewExponentialBackOff()
// 	ticker := backoff.NewTicker(b)
//
// 	var err error
// 	for _ = range ticker.C {
// 		if err = operation(); err != nil {
// 			log.Println(err, "will retry...")
// 			continue
// 		}
//
// 		ticker.Stop()
// 		break
// 	}
//
// 	if err != nil {
// 		// Operation has failed.
// 	}
//
// 	// Operation is successfull.
//
type Ticker struct {
	C        <-chan time.Time
	c        chan time.Time
	b        BackOff
	stop     chan struct{}
	stopOnce sync.Once
}

// NewTicker returns a new Ticker containing a channel that will send the time at times
// specified by the BackOff argument. Ticker is guaranteed to tick at least once.
// The channel is closed when Stop method is called or BackOff stops.
func NewTicker(b BackOff) *Ticker {
	c := make(chan time.Time)
	t := &Ticker{
		C:    c,
		c:    c,
		b:    b,
		stop: make(chan struct{}),
	}
	go t.run()
	runtime.SetFinalizer(t, (*Ticker).Stop)
	return t
}

// Stop turns off a ticker. After Stop, no more ticks will be sent.
func (t *Ticker) Stop() {
	t.stopOnce.Do(func() { close(t.stop) })
}

func (t *Ticker) run() {
	c := t.c
	defer close(c)
	t.b.Reset()

	// Ticker is guaranteed to tick at least once.
	afterC := t.send(time.Now())

	for {
		if afterC == nil {
			return
		}

		select {
		case tick := <-afterC:
			afterC = t.send(tick)
		case <-t.stop:
			t.c = nil // Prevent future ticks from being sent to the channel.
			return
		}
	}
}

func (t *Ticker) send(tick time.Time) <-chan time.Time {
	select {
	case t.c <- tick:
	case <-t.stop:
		return nil
	}

	next := t.b.NextBackOff()
	if next == Stop {
		t.Stop()
		return nil
	}

	return time.After(next)
}
45
Godeps/_workspace/src/github.com/cenkalti/backoff/ticker_test.go
generated
vendored
@@ -1,45 +0,0 @@
package backoff

import (
	"errors"
	"log"
	"testing"
)

func TestTicker(t *testing.T) {
	const successOn = 3
	var i = 0

	// This function is successfull on "successOn" calls.
	f := func() error {
		i++
		log.Printf("function is called %d. time\n", i)

		if i == successOn {
			log.Println("OK")
			return nil
		}

		log.Println("error")
		return errors.New("error")
	}

	b := NewExponentialBackOff()
	ticker := NewTicker(b)

	var err error
	for _ = range ticker.C {
		if err = f(); err != nil {
			t.Log(err)
			continue
		}

		break
	}
	if err != nil {
		t.Errorf("unexpected error: %s", err.Error())
	}
	if i != successOn {
		t.Errorf("invalid number of retries: %d", i)
	}
}
12
Godeps/_workspace/src/github.com/cheggaaa/pb/LICENSE
generated
vendored
@@ -1,12 +0,0 @@
Copyright (c) 2012, Sergey Cherepanov
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
98
Godeps/_workspace/src/github.com/cheggaaa/pb/README.md
generated
vendored
@@ -1,98 +0,0 @@
## Terminal progress bar for Go

Simple progress bar for console programms.


### Installation
```
go get github.com/cheggaaa/pb
```

### Usage
```Go
package main

import (
	"github.com/cheggaaa/pb"
	"time"
)

func main() {
	count := 100000
	bar := pb.StartNew(count)
	for i := 0; i < count; i++ {
		bar.Increment()
		time.Sleep(time.Millisecond)
	}
	bar.FinishPrint("The End!")
}
```
Result will be like this:
```
> go run test.go
37158 / 100000 [================>_______________________________] 37.16% 1m11s
```

More functions?
```Go
// create bar
bar := pb.New(count)

// refresh info every second (default 200ms)
bar.SetRefreshRate(time.Second)

// show percents (by default already true)
bar.ShowPercent = true

// show bar (by default already true)
bar.ShowBar = true

// no need counters
bar.ShowCounters = false

// show "time left"
bar.ShowTimeLeft = true

// show average speed
bar.ShowSpeed = true

// sets the width of the progress bar
bar.SetWidth(80)

// sets the width of the progress bar, but if terminal size smaller will be ignored
bar.SetMaxWidth(80)

// convert output to readable format (like KB, MB)
bar.SetUnits(pb.U_BYTES)

// and start
bar.Start()
```

Want handle progress of io operations?
```Go
// create and start bar
bar := pb.New(myDataLen).SetUnits(pb.U_BYTES)
bar.Start()

// my io.Reader
r := myReader

// my io.Writer
w := myWriter

// create multi writer
writer := io.MultiWriter(w, bar)

// and copy
io.Copy(writer, r)

// show example/copy/copy.go for advanced example

```

Not like the looks?
```Go
bar.Format("<.- >")
```
81
Godeps/_workspace/src/github.com/cheggaaa/pb/example/copy/copy.go
generated
vendored
@@ -1,81 +0,0 @@
package main

import (
	"fmt"
	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/cheggaaa/pb"
	"io"
	"net/http"
	"os"
	"strconv"
	"strings"
	"time"
)

func main() {
	// check args
	if len(os.Args) < 3 {
		printUsage()
		return
	}
	sourceName, destName := os.Args[1], os.Args[2]

	// check source
	var source io.Reader
	var sourceSize int64
	if strings.HasPrefix(sourceName, "http://") {
		// open as url
		resp, err := http.Get(sourceName)
		if err != nil {
			fmt.Printf("Can't get %s: %v\n", sourceName, err)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			fmt.Printf("Server return non-200 status: %v\n", resp.Status)
			return
		}
		i, _ := strconv.Atoi(resp.Header.Get("Content-Length"))
		sourceSize = int64(i)
		source = resp.Body
	} else {
		// open as file
		s, err := os.Open(sourceName)
		if err != nil {
			fmt.Printf("Can't open %s: %v\n", sourceName, err)
			return
		}
		defer s.Close()
		// get source size
		sourceStat, err := s.Stat()
		if err != nil {
			fmt.Printf("Can't stat %s: %v\n", sourceName, err)
			return
		}
		sourceSize = sourceStat.Size()
		source = s
	}

	// create dest
	dest, err := os.Create(destName)
	if err != nil {
		fmt.Printf("Can't create %s: %v\n", destName, err)
		return
	}
	defer dest.Close()

	// create bar
	bar := pb.New(int(sourceSize)).SetUnits(pb.U_BYTES).SetRefreshRate(time.Millisecond * 10)
	bar.ShowSpeed = true
	bar.Start()

	// create multi writer
	writer := io.MultiWriter(dest, bar)

	// and copy
	io.Copy(writer, source)
	bar.Finish()
}

func printUsage() {
	fmt.Println("copy [source file or url] [dest file]")
}
30
Godeps/_workspace/src/github.com/cheggaaa/pb/example/pb.go
generated
vendored
@@ -1,30 +0,0 @@
package main

import (
	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/cheggaaa/pb"
	"time"
)

func main() {
	count := 5000
	bar := pb.New(count)

	// show percents (by default already true)
	bar.ShowPercent = true

	// show bar (by default already true)
	bar.ShowPercent = true

	// no need counters
	bar.ShowCounters = true

	bar.ShowTimeLeft = true

	// and start
	bar.Start()
	for i := 0; i < count; i++ {
		bar.Increment()
		time.Sleep(time.Millisecond)
	}
	bar.FinishPrint("The End!")
}
45
Godeps/_workspace/src/github.com/cheggaaa/pb/format.go
generated
vendored
@@ -1,45 +0,0 @@
package pb

import (
	"fmt"
	"strconv"
	"strings"
)

type Units int

const (
	// By default, without type handle
	U_NO Units = iota
	// Handle as b, Kb, Mb, etc
	U_BYTES
)

// Format integer
func Format(i int64, units Units) string {
	switch units {
	case U_BYTES:
		return FormatBytes(i)
	default:
		// by default just convert to string
		return strconv.FormatInt(i, 10)
	}
}

// Convert bytes to human readable string. Like a 2 MB, 64.2 KB, 52 B
func FormatBytes(i int64) (result string) {
	switch {
	case i > (1024 * 1024 * 1024 * 1024):
		result = fmt.Sprintf("%#.02f TB", float64(i)/1024/1024/1024/1024)
	case i > (1024 * 1024 * 1024):
		result = fmt.Sprintf("%#.02f GB", float64(i)/1024/1024/1024)
	case i > (1024 * 1024):
		result = fmt.Sprintf("%#.02f MB", float64(i)/1024/1024)
	case i > 1024:
		result = fmt.Sprintf("%#.02f KB", float64(i)/1024)
	default:
		result = fmt.Sprintf("%d B", i)
	}
	result = strings.Trim(result, " ")
	return
}
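The unit-formatting logic in the deleted `FormatBytes` can be exercised in isolation. Below is our own minimal sketch of it; the `formatBytes` name, the `>=` thresholds (the original uses strict `>`), and the plain `%.2f` verb are illustrative choices, not the library's exact behavior.

```go
package main

import "fmt"

// formatBytes renders a byte count as a human-readable string,
// in the spirit of the deleted pb.FormatBytes (names and thresholds ours).
func formatBytes(i int64) string {
	switch {
	case i >= 1<<40:
		return fmt.Sprintf("%.2f TB", float64(i)/(1<<40))
	case i >= 1<<30:
		return fmt.Sprintf("%.2f GB", float64(i)/(1<<30))
	case i >= 1<<20:
		return fmt.Sprintf("%.2f MB", float64(i)/(1<<20))
	case i >= 1<<10:
		return fmt.Sprintf("%.2f KB", float64(i)/(1<<10))
	default:
		return fmt.Sprintf("%d B", i)
	}
}

func main() {
	fmt.Println(formatBytes(1536))    // 1.50 KB
	fmt.Println(formatBytes(1 << 20)) // 1.00 MB
}
```

Note the edge the original's strict `>` creates: exactly 1048576 bytes formats as "1024.00 KB" there, while this sketch's `>=` yields "1.00 MB".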
37
Godeps/_workspace/src/github.com/cheggaaa/pb/format_test.go
generated
vendored
@@ -1,37 +0,0 @@
package pb

import (
	"fmt"
	"strconv"
	"testing"
)

func Test_DefaultsToInteger(t *testing.T) {
	value := int64(1000)
	expected := strconv.Itoa(int(value))
	actual := Format(value, -1)

	if actual != expected {
		t.Error(fmt.Sprintf("Expected {%s} was {%s}", expected, actual))
	}
}

func Test_CanFormatAsInteger(t *testing.T) {
	value := int64(1000)
	expected := strconv.Itoa(int(value))
	actual := Format(value, U_NO)

	if actual != expected {
		t.Error(fmt.Sprintf("Expected {%s} was {%s}", expected, actual))
	}
}

func Test_CanFormatAsBytes(t *testing.T) {
	value := int64(1000)
	expected := "1000 B"
	actual := Format(value, U_BYTES)

	if actual != expected {
		t.Error(fmt.Sprintf("Expected {%s} was {%s}", expected, actual))
	}
}
352
Godeps/_workspace/src/github.com/cheggaaa/pb/pb.go
generated
vendored
@@ -1,352 +0,0 @@
package pb

import (
	"fmt"
	"io"
	"math"
	"strings"
	"sync"
	"sync/atomic"
	"time"
)

const (
	// Default refresh rate - 200ms
	DEFAULT_REFRESH_RATE = time.Millisecond * 200
	FORMAT               = "[=>-]"
)

// DEPRECATED
// variables for backward compatibility, from now do not work
// use pb.Format and pb.SetRefreshRate
var (
	DefaultRefreshRate                          = DEFAULT_REFRESH_RATE
	BarStart, BarEnd, Empty, Current, CurrentN string
)

// Create new progress bar object
func New(total int) *ProgressBar {
	return New64(int64(total))
}

// Create new progress bar object uding int64 as total
func New64(total int64) *ProgressBar {
	pb := &ProgressBar{
		Total:         total,
		RefreshRate:   DEFAULT_REFRESH_RATE,
		ShowPercent:   true,
		ShowCounters:  true,
		ShowBar:       true,
		ShowTimeLeft:  true,
		ShowFinalTime: true,
		Units:         U_NO,
		ManualUpdate:  false,
		isFinish:      make(chan struct{}),
		currentValue:  -1,
	}
	return pb.Format(FORMAT)
}

// Create new object and start
func StartNew(total int) *ProgressBar {
	return New(total).Start()
}

// Callback for custom output
// For example:
// bar.Callback = func(s string) {
//     mySuperPrint(s)
// }
//
type Callback func(out string)

type ProgressBar struct {
	current int64 // current must be first member of struct (https://code.google.com/p/go/issues/detail?id=5278)

	Total                            int64
	RefreshRate                      time.Duration
	ShowPercent, ShowCounters        bool
	ShowSpeed, ShowTimeLeft, ShowBar bool
	ShowFinalTime                    bool
	Output                           io.Writer
	Callback                         Callback
	NotPrint                         bool
	Units                            Units
	Width                            int
	ForceWidth                       bool
	ManualUpdate                     bool

	finishOnce sync.Once //Guards isFinish
	isFinish   chan struct{}

	startTime    time.Time
	startValue   int64
	currentValue int64

	prefix, postfix string

	BarStart string
	BarEnd   string
	Empty    string
	Current  string
	CurrentN string
}

// Start print
func (pb *ProgressBar) Start() *ProgressBar {
	pb.startTime = time.Now()
	pb.startValue = pb.current
	if pb.Total == 0 {
		pb.ShowBar = false
		pb.ShowTimeLeft = false
		pb.ShowPercent = false
	}
	if !pb.ManualUpdate {
		go pb.writer()
	}
	return pb
}

// Increment current value
func (pb *ProgressBar) Increment() int {
	return pb.Add(1)
}

// Set current value
func (pb *ProgressBar) Set(current int) *ProgressBar {
	return pb.Set64(int64(current))
}

// Set64 sets the current value as int64
func (pb *ProgressBar) Set64(current int64) *ProgressBar {
	atomic.StoreInt64(&pb.current, current)
	return pb
}

// Add to current value
func (pb *ProgressBar) Add(add int) int {
	return int(pb.Add64(int64(add)))
}

func (pb *ProgressBar) Add64(add int64) int64 {
	return atomic.AddInt64(&pb.current, add)
}

// Set prefix string
func (pb *ProgressBar) Prefix(prefix string) *ProgressBar {
	pb.prefix = prefix
	return pb
}

// Set postfix string
func (pb *ProgressBar) Postfix(postfix string) *ProgressBar {
	pb.postfix = postfix
	return pb
}

// Set custom format for bar
// EXAMPLE: bar.Format("[=>_]")
func (pb *ProgressBar) Format(format string) *ProgressBar {
	formatEntries := strings.Split(format, "")
	if len(formatEntries) == 5 {
		pb.BarStart = formatEntries[0]
		pb.BarEnd = formatEntries[4]
		pb.Empty = formatEntries[3]
		pb.Current = formatEntries[1]
		pb.CurrentN = formatEntries[2]
	}
	return pb
}

// Set bar refresh rate
func (pb *ProgressBar) SetRefreshRate(rate time.Duration) *ProgressBar {
	pb.RefreshRate = rate
	return pb
}

// Set units
// bar.SetUnits(U_NO) - by default
// bar.SetUnits(U_BYTES) - for Mb, Kb, etc
func (pb *ProgressBar) SetUnits(units Units) *ProgressBar {
	pb.Units = units
	return pb
}

// Set max width, if width is bigger than terminal width, will be ignored
func (pb *ProgressBar) SetMaxWidth(width int) *ProgressBar {
	pb.Width = width
	pb.ForceWidth = false
	return pb
}

// Set bar width
func (pb *ProgressBar) SetWidth(width int) *ProgressBar {
	pb.Width = width
	pb.ForceWidth = true
	return pb
}

// End print
func (pb *ProgressBar) Finish() {
	//Protect multiple calls
	pb.finishOnce.Do(func() {
		close(pb.isFinish)
		pb.write(atomic.LoadInt64(&pb.current))
		if !pb.NotPrint {
			fmt.Println()
		}
	})
}

// End print and write string 'str'
func (pb *ProgressBar) FinishPrint(str string) {
	pb.Finish()
	fmt.Println(str)
}

// implement io.Writer
func (pb *ProgressBar) Write(p []byte) (n int, err error) {
	n = len(p)
	pb.Add(n)
	return
}

// implement io.Reader
func (pb *ProgressBar) Read(p []byte) (n int, err error) {
	n = len(p)
	pb.Add(n)
	return
}

// Create new proxy reader over bar
func (pb *ProgressBar) NewProxyReader(r io.Reader) *Reader {
	return &Reader{r, pb}
}

func (pb *ProgressBar) write(current int64) {
	width := pb.getWidth()

	var percentBox, countersBox, timeLeftBox, speedBox, barBox, end, out string

	// percents
	if pb.ShowPercent {
		percent := float64(current) / (float64(pb.Total) / float64(100))
		percentBox = fmt.Sprintf(" %#.02f %% ", percent)
	}

	// counters
	if pb.ShowCounters {
		if pb.Total > 0 {
			countersBox = fmt.Sprintf("%s / %s ", Format(current, pb.Units), Format(pb.Total, pb.Units))
		} else {
			countersBox = Format(current, pb.Units) + " "
		}
	}

	// time left
	fromStart := time.Now().Sub(pb.startTime)
	currentFromStart := current - pb.startValue
	select {
	case <-pb.isFinish:
		if pb.ShowFinalTime {
			left := (fromStart / time.Second) * time.Second
			timeLeftBox = left.String()
		}
	default:
		if pb.ShowTimeLeft && currentFromStart > 0 {
			perEntry := fromStart / time.Duration(currentFromStart)
			left := time.Duration(pb.Total-currentFromStart) * perEntry
			left = (left / time.Second) * time.Second
			timeLeftBox = left.String()
		}
	}

	// speed
	if pb.ShowSpeed && currentFromStart > 0 {
		fromStart := time.Now().Sub(pb.startTime)
		speed := float64(currentFromStart) / (float64(fromStart) / float64(time.Second))
		speedBox = Format(int64(speed), pb.Units) + "/s "
	}

	// bar
	if pb.ShowBar {
		size := width - len(countersBox+pb.BarStart+pb.BarEnd+percentBox+timeLeftBox+speedBox+pb.prefix+pb.postfix)
		if size > 0 && pb.Total > 0 {
			curCount := int(math.Ceil((float64(current) / float64(pb.Total)) * float64(size)))
			emptCount := size - curCount
			barBox = pb.BarStart
			if emptCount < 0 {
				emptCount = 0
			}
			if curCount > size {
				curCount = size
			}
			if emptCount <= 0 {
				barBox += strings.Repeat(pb.Current, curCount)
			} else if curCount > 0 {
				barBox += strings.Repeat(pb.Current, curCount-1) + pb.CurrentN
			}

			barBox += strings.Repeat(pb.Empty, emptCount) + pb.BarEnd
		}
	}

	// check len
	out = pb.prefix + countersBox + barBox + percentBox + speedBox + timeLeftBox + pb.postfix
	if len(out) < width {
		end = strings.Repeat(" ", width-len(out))
	}

	// and print!
	switch {
	case pb.Output != nil:
		fmt.Fprint(pb.Output, "\r"+out+end)
	case pb.Callback != nil:
		pb.Callback(out + end)
	case !pb.NotPrint:
		fmt.Print("\r" + out + end)
	}
}

func (pb *ProgressBar) getWidth() int {
	if pb.ForceWidth {
		return pb.Width
	}

	width := pb.Width
	termWidth, _ := terminalWidth()
	if width == 0 || termWidth <= width {
		width = termWidth
	}

	return width
}

// Write the current state of the progressbar
func (pb *ProgressBar) Update() {
	c := atomic.LoadInt64(&pb.current)
	if c != pb.currentValue {
		pb.write(c)
		pb.currentValue = c
	}
}

// Internal loop for writing progressbar
func (pb *ProgressBar) writer() {
	pb.Update()
	for {
		select {
		case <-pb.isFinish:
			return
		case <-time.After(pb.RefreshRate):
			pb.Update()
		}
	}
}

type window struct {
	Row    uint16
	Col    uint16
	Xpixel uint16
	Ypixel uint16
}
7
Godeps/_workspace/src/github.com/cheggaaa/pb/pb_nix.go
generated
vendored
@@ -1,7 +0,0 @@
// +build linux darwin freebsd netbsd openbsd

package pb

import "syscall"

const sys_ioctl = syscall.SYS_IOCTL
5
Godeps/_workspace/src/github.com/cheggaaa/pb/pb_solaris.go
generated
vendored
@@ -1,5 +0,0 @@
// +build solaris

package pb

const sys_ioctl = 54
37
Godeps/_workspace/src/github.com/cheggaaa/pb/pb_test.go
generated
vendored
@@ -1,37 +0,0 @@
package pb

import (
	"testing"
)

func Test_IncrementAddsOne(t *testing.T) {
	count := 5000
	bar := New(count)
	expected := 1
	actual := bar.Increment()

	if actual != expected {
		t.Errorf("Expected {%d} was {%d}", expected, actual)
	}
}

func Test_Width(t *testing.T) {
	count := 5000
	bar := New(count)
	width := 100
	bar.SetWidth(100).Callback = func(out string) {
		if len(out) != width {
			t.Errorf("Bar width expected {%d} was {%d}", len(out), width)
		}
	}
	bar.Start()
	bar.Increment()
	bar.Finish()
}

func Test_MultipleFinish(t *testing.T) {
	bar := New(5000)
	bar.Add(2000)
	bar.Finish()
	bar.Finish()
}
16
Godeps/_workspace/src/github.com/cheggaaa/pb/pb_win.go
generated
vendored
@@ -1,16 +0,0 @@
// +build windows

package pb

import (
	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/olekukonko/ts"
)

func bold(str string) string {
	return str
}

func terminalWidth() (int, error) {
	size, err := ts.GetSize()
	return size.Col(), err
}
46
Godeps/_workspace/src/github.com/cheggaaa/pb/pb_x.go
generated
vendored
@@ -1,46 +0,0 @@
// +build linux darwin freebsd netbsd openbsd solaris

package pb

import (
	"os"
	"runtime"
	"syscall"
	"unsafe"
)

const (
	TIOCGWINSZ     = 0x5413
	TIOCGWINSZ_OSX = 1074295912
)

var tty *os.File

func init() {
	var err error
	tty, err = os.Open("/dev/tty")
	if err != nil {
		tty = os.Stdin
	}
}

func bold(str string) string {
	return "\033[1m" + str + "\033[0m"
}

func terminalWidth() (int, error) {
	w := new(window)
	tio := syscall.TIOCGWINSZ
	if runtime.GOOS == "darwin" {
		tio = TIOCGWINSZ_OSX
	}
	res, _, err := syscall.Syscall(sys_ioctl,
		tty.Fd(),
		uintptr(tio),
		uintptr(unsafe.Pointer(w)),
	)
	if int(res) == -1 {
		return 0, err
	}
	return int(w.Col), nil
}
17
Godeps/_workspace/src/github.com/cheggaaa/pb/reader.go
generated
vendored
@@ -1,17 +0,0 @@
package pb

import (
	"io"
)

// It's proxy reader, implement io.Reader
type Reader struct {
	io.Reader
	bar *ProgressBar
}

func (r *Reader) Read(p []byte) (n int, err error) {
	n, err = r.Reader.Read(p)
	r.bar.Add(n)
	return
}
9
Godeps/_workspace/src/github.com/codahale/hdrhistogram/.travis.yml
generated
vendored
@@ -1,9 +0,0 @@
language: go
go:
  - 1.3.3
notifications:
  # See http://about.travis-ci.org/docs/user/build-configuration/ to learn more
  # about configuring notification recipients and more.
  email:
    recipients:
      - coda.hale@gmail.com
15
Godeps/_workspace/src/github.com/codahale/hdrhistogram/README.md
generated
vendored
@@ -1,15 +0,0 @@
hdrhistogram
============

[](https://travis-ci.org/codahale/hdrhistogram)

A pure Go implementation of the [HDR Histogram](https://github.com/HdrHistogram/HdrHistogram).

> A Histogram that supports recording and analyzing sampled data value counts
> across a configurable integer value range with configurable value precision
> within the range. Value precision is expressed as the number of significant
> digits in the value recording, and provides control over value quantization
> behavior across the value range and the subsequent value resolution at any
> given level.

For documentation, check [godoc](http://godoc.org/github.com/codahale/hdrhistogram).
513
Godeps/_workspace/src/github.com/codahale/hdrhistogram/hdr.go
generated
vendored
513
Godeps/_workspace/src/github.com/codahale/hdrhistogram/hdr.go
generated
vendored
@ -1,513 +0,0 @@
|
||||
// Package hdrhistogram provides an implementation of Gil Tene's HDR Histogram
|
||||
// data structure. The HDR Histogram allows for fast and accurate analysis of
|
||||
// the extreme ranges of data with non-normal distributions, like latency.
|
||||
package hdrhistogram
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
)
|
||||
|
||||
// A Bracket is a part of a cumulative distribution.
|
||||
type Bracket struct {
|
||||
Quantile float64
|
||||
Count, ValueAt int64
|
||||
}
|
||||
|
||||
// A Snapshot is an exported view of a Histogram, useful for serializing them.
|
||||
// A Histogram can be constructed from it by passing it to Import.
|
||||
type Snapshot struct {
|
||||
	LowestTrackableValue  int64
	HighestTrackableValue int64
	SignificantFigures    int64
	Counts                []int64
}

// A Histogram is a lossy data structure used to record the distribution of
// non-normally distributed data (like latency) with a high degree of accuracy
// and a bounded degree of precision.
type Histogram struct {
	lowestTrackableValue        int64
	highestTrackableValue       int64
	unitMagnitude               int64
	significantFigures          int64
	subBucketHalfCountMagnitude int32
	subBucketHalfCount          int32
	subBucketMask               int64
	subBucketCount              int32
	bucketCount                 int32
	countsLen                   int32
	totalCount                  int64
	counts                      []int64
}

// New returns a new Histogram instance capable of tracking values in the given
// range and with the given amount of precision.
func New(minValue, maxValue int64, sigfigs int) *Histogram {
	if sigfigs < 1 || 5 < sigfigs {
		panic(fmt.Errorf("sigfigs must be [1,5] (was %d)", sigfigs))
	}

	largestValueWithSingleUnitResolution := 2 * math.Pow10(sigfigs)
	subBucketCountMagnitude := int32(math.Ceil(math.Log2(float64(largestValueWithSingleUnitResolution))))

	subBucketHalfCountMagnitude := subBucketCountMagnitude
	if subBucketHalfCountMagnitude < 1 {
		subBucketHalfCountMagnitude = 1
	}
	subBucketHalfCountMagnitude--

	unitMagnitude := int32(math.Floor(math.Log2(float64(minValue))))
	if unitMagnitude < 0 {
		unitMagnitude = 0
	}

	subBucketCount := int32(math.Pow(2, float64(subBucketHalfCountMagnitude)+1))

	subBucketHalfCount := subBucketCount / 2
	subBucketMask := int64(subBucketCount-1) << uint(unitMagnitude)

	// determine exponent range needed to support the trackable value with no
	// overflow:
	smallestUntrackableValue := int64(subBucketCount) << uint(unitMagnitude)
	bucketsNeeded := int32(1)
	for smallestUntrackableValue < maxValue {
		smallestUntrackableValue <<= 1
		bucketsNeeded++
	}

	bucketCount := bucketsNeeded
	countsLen := (bucketCount + 1) * (subBucketCount / 2)

	return &Histogram{
		lowestTrackableValue:        minValue,
		highestTrackableValue:       maxValue,
		unitMagnitude:               int64(unitMagnitude),
		significantFigures:          int64(sigfigs),
		subBucketHalfCountMagnitude: subBucketHalfCountMagnitude,
		subBucketHalfCount:          subBucketHalfCount,
		subBucketMask:               subBucketMask,
		subBucketCount:              subBucketCount,
		bucketCount:                 bucketCount,
		countsLen:                   countsLen,
		totalCount:                  0,
		counts:                      make([]int64, countsLen),
	}
}

// ByteSize returns an estimate of the amount of memory allocated to the
// histogram in bytes.
//
// N.B.: This does not take into account the overhead for slices, which are
// small, constant, and specific to the compiler version.
func (h *Histogram) ByteSize() int {
	return 6*8 + 5*4 + len(h.counts)*8
}

// Merge merges the data stored in the given histogram with the receiver,
// returning the number of recorded values which had to be dropped.
func (h *Histogram) Merge(from *Histogram) (dropped int64) {
	i := from.rIterator()
	for i.next() {
		v := i.valueFromIdx
		c := i.countAtIdx

		if h.RecordValues(v, c) != nil {
			dropped += c
		}
	}

	return
}

// TotalCount returns total number of values recorded.
func (h *Histogram) TotalCount() int64 {
	return h.totalCount
}

// Max returns the approximate maximum recorded value.
func (h *Histogram) Max() int64 {
	var max int64
	i := h.iterator()
	for i.next() {
		if i.countAtIdx != 0 {
			max = i.highestEquivalentValue
		}
	}
	return h.lowestEquivalentValue(max)
}

// Min returns the approximate minimum recorded value.
func (h *Histogram) Min() int64 {
	var min int64
	i := h.iterator()
	for i.next() {
		if i.countAtIdx != 0 && min == 0 {
			min = i.highestEquivalentValue
			break
		}
	}
	return h.lowestEquivalentValue(min)
}

// Mean returns the approximate arithmetic mean of the recorded values.
func (h *Histogram) Mean() float64 {
	var total int64
	i := h.iterator()
	for i.next() {
		if i.countAtIdx != 0 {
			total += i.countAtIdx * h.medianEquivalentValue(i.valueFromIdx)
		}
	}
	return float64(total) / float64(h.totalCount)
}

// StdDev returns the approximate standard deviation of the recorded values.
func (h *Histogram) StdDev() float64 {
	mean := h.Mean()
	geometricDevTotal := 0.0

	i := h.iterator()
	for i.next() {
		if i.countAtIdx != 0 {
			dev := float64(h.medianEquivalentValue(i.valueFromIdx)) - mean
			geometricDevTotal += (dev * dev) * float64(i.countAtIdx)
		}
	}

	return math.Sqrt(geometricDevTotal / float64(h.totalCount))
}

// Reset deletes all recorded values and restores the histogram to its original
// state.
func (h *Histogram) Reset() {
	h.totalCount = 0
	for i := range h.counts {
		h.counts[i] = 0
	}
}

// RecordValue records the given value, returning an error if the value is out
// of range.
func (h *Histogram) RecordValue(v int64) error {
	return h.RecordValues(v, 1)
}

// RecordCorrectedValue records the given value, correcting for stalls in the
// recording process. This only works for processes which are recording values
// at an expected interval (e.g., doing jitter analysis). Processes which are
// recording ad-hoc values (e.g., latency for incoming requests) can't take
// advantage of this.
func (h *Histogram) RecordCorrectedValue(v, expectedInterval int64) error {
	if err := h.RecordValue(v); err != nil {
		return err
	}

	if expectedInterval <= 0 || v <= expectedInterval {
		return nil
	}

	missingValue := v - expectedInterval
	for missingValue >= expectedInterval {
		if err := h.RecordValue(missingValue); err != nil {
			return err
		}
		missingValue -= expectedInterval
	}

	return nil
}

// RecordValues records n occurrences of the given value, returning an error if
// the value is out of range.
func (h *Histogram) RecordValues(v, n int64) error {
	idx := h.countsIndexFor(v)
	if idx < 0 || int(h.countsLen) <= idx {
		return fmt.Errorf("value %d is too large to be recorded", v)
	}
	h.counts[idx] += n
	h.totalCount += n

	return nil
}

// ValueAtQuantile returns the recorded value at the given quantile (0..100).
func (h *Histogram) ValueAtQuantile(q float64) int64 {
	if q > 100 {
		q = 100
	}

	total := int64(0)
	countAtPercentile := int64(((q / 100) * float64(h.totalCount)) + 0.5)

	i := h.iterator()
	for i.next() {
		total += i.countAtIdx
		if total >= countAtPercentile {
			return h.highestEquivalentValue(i.valueFromIdx)
		}
	}

	return 0
}

// CumulativeDistribution returns an ordered list of brackets of the
// distribution of recorded values.
func (h *Histogram) CumulativeDistribution() []Bracket {
	var result []Bracket

	i := h.pIterator(1)
	for i.next() {
		result = append(result, Bracket{
			Quantile: i.percentile,
			Count:    i.countToIdx,
			ValueAt:  i.highestEquivalentValue,
		})
	}

	return result
}

// Equals returns true if the two Histograms are equivalent, false if not.
func (h *Histogram) Equals(other *Histogram) bool {
	switch {
	case
		h.lowestTrackableValue != other.lowestTrackableValue,
		h.highestTrackableValue != other.highestTrackableValue,
		h.unitMagnitude != other.unitMagnitude,
		h.significantFigures != other.significantFigures,
		h.subBucketHalfCountMagnitude != other.subBucketHalfCountMagnitude,
		h.subBucketHalfCount != other.subBucketHalfCount,
		h.subBucketMask != other.subBucketMask,
		h.subBucketCount != other.subBucketCount,
		h.bucketCount != other.bucketCount,
		h.countsLen != other.countsLen,
		h.totalCount != other.totalCount:
		return false
	default:
		for i, c := range h.counts {
			if c != other.counts[i] {
				return false
			}
		}
	}
	return true
}

// Export returns a snapshot view of the Histogram. This can be later passed to
// Import to construct a new Histogram with the same state.
func (h *Histogram) Export() *Snapshot {
	return &Snapshot{
		LowestTrackableValue:  h.lowestTrackableValue,
		HighestTrackableValue: h.highestTrackableValue,
		SignificantFigures:    h.significantFigures,
		Counts:                h.counts,
	}
}

// Import returns a new Histogram populated from the Snapshot data.
func Import(s *Snapshot) *Histogram {
	h := New(s.LowestTrackableValue, s.HighestTrackableValue, int(s.SignificantFigures))
	h.counts = s.Counts
	totalCount := int64(0)
	for i := int32(0); i < h.countsLen; i++ {
		countAtIndex := h.counts[i]
		if countAtIndex > 0 {
			totalCount += countAtIndex
		}
	}
	h.totalCount = totalCount
	return h
}

func (h *Histogram) iterator() *iterator {
	return &iterator{
		h:            h,
		subBucketIdx: -1,
	}
}

func (h *Histogram) rIterator() *rIterator {
	return &rIterator{
		iterator: iterator{
			h:            h,
			subBucketIdx: -1,
		},
	}
}

func (h *Histogram) pIterator(ticksPerHalfDistance int32) *pIterator {
	return &pIterator{
		iterator: iterator{
			h:            h,
			subBucketIdx: -1,
		},
		ticksPerHalfDistance: ticksPerHalfDistance,
	}
}

func (h *Histogram) sizeOfEquivalentValueRange(v int64) int64 {
	bucketIdx := h.getBucketIndex(v)
	subBucketIdx := h.getSubBucketIdx(v, bucketIdx)
	adjustedBucket := bucketIdx
	if subBucketIdx >= h.subBucketCount {
		adjustedBucket++
	}
	return int64(1) << uint(h.unitMagnitude+int64(adjustedBucket))
}

func (h *Histogram) valueFromIndex(bucketIdx, subBucketIdx int32) int64 {
	return int64(subBucketIdx) << uint(int64(bucketIdx)+h.unitMagnitude)
}

func (h *Histogram) lowestEquivalentValue(v int64) int64 {
	bucketIdx := h.getBucketIndex(v)
	subBucketIdx := h.getSubBucketIdx(v, bucketIdx)
	return h.valueFromIndex(bucketIdx, subBucketIdx)
}

func (h *Histogram) nextNonEquivalentValue(v int64) int64 {
	return h.lowestEquivalentValue(v) + h.sizeOfEquivalentValueRange(v)
}

func (h *Histogram) highestEquivalentValue(v int64) int64 {
	return h.nextNonEquivalentValue(v) - 1
}

func (h *Histogram) medianEquivalentValue(v int64) int64 {
	return h.lowestEquivalentValue(v) + (h.sizeOfEquivalentValueRange(v) >> 1)
}

func (h *Histogram) getCountAtIndex(bucketIdx, subBucketIdx int32) int64 {
	return h.counts[h.countsIndex(bucketIdx, subBucketIdx)]
}

func (h *Histogram) countsIndex(bucketIdx, subBucketIdx int32) int32 {
	bucketBaseIdx := (bucketIdx + 1) << uint(h.subBucketHalfCountMagnitude)
	offsetInBucket := subBucketIdx - h.subBucketHalfCount
	return bucketBaseIdx + offsetInBucket
}

func (h *Histogram) getBucketIndex(v int64) int32 {
	pow2Ceiling := bitLen(v | h.subBucketMask)
	return int32(pow2Ceiling - int64(h.unitMagnitude) -
		int64(h.subBucketHalfCountMagnitude+1))
}

func (h *Histogram) getSubBucketIdx(v int64, idx int32) int32 {
	return int32(v >> uint(int64(idx)+int64(h.unitMagnitude)))
}

func (h *Histogram) countsIndexFor(v int64) int {
	bucketIdx := h.getBucketIndex(v)
	subBucketIdx := h.getSubBucketIdx(v, bucketIdx)
	return int(h.countsIndex(bucketIdx, subBucketIdx))
}

type iterator struct {
	h                                    *Histogram
	bucketIdx, subBucketIdx              int32
	countAtIdx, countToIdx, valueFromIdx int64
	highestEquivalentValue               int64
}

func (i *iterator) next() bool {
	if i.countToIdx >= i.h.totalCount {
		return false
	}

	// increment bucket
	i.subBucketIdx++
	if i.subBucketIdx >= i.h.subBucketCount {
		i.subBucketIdx = i.h.subBucketHalfCount
		i.bucketIdx++
	}

	if i.bucketIdx >= i.h.bucketCount {
		return false
	}

	i.countAtIdx = i.h.getCountAtIndex(i.bucketIdx, i.subBucketIdx)
	i.countToIdx += i.countAtIdx
	i.valueFromIdx = i.h.valueFromIndex(i.bucketIdx, i.subBucketIdx)
	i.highestEquivalentValue = i.h.highestEquivalentValue(i.valueFromIdx)

	return true
}

type rIterator struct {
	iterator
	countAddedThisStep int64
}

func (r *rIterator) next() bool {
	for r.iterator.next() {
		if r.countAtIdx != 0 {
			r.countAddedThisStep = r.countAtIdx
			return true
		}
	}
	return false
}

type pIterator struct {
	iterator
	seenLastValue          bool
	ticksPerHalfDistance   int32
	percentileToIteratorTo float64
	percentile             float64
}

func (p *pIterator) next() bool {
	if !(p.countToIdx < p.h.totalCount) {
		if p.seenLastValue {
			return false
		}

		p.seenLastValue = true
		p.percentile = 100

		return true
	}

	if p.subBucketIdx == -1 && !p.iterator.next() {
		return false
	}

	var done = false
	for !done {
		currentPercentile := (100.0 * float64(p.countToIdx)) / float64(p.h.totalCount)
		if p.countAtIdx != 0 && p.percentileToIteratorTo <= currentPercentile {
			p.percentile = p.percentileToIteratorTo
			halfDistance := math.Trunc(math.Pow(2, math.Trunc(math.Log2(100.0/(100.0-p.percentileToIteratorTo)))+1))
			percentileReportingTicks := float64(p.ticksPerHalfDistance) * halfDistance
			p.percentileToIteratorTo += 100.0 / percentileReportingTicks
			return true
		}
		done = !p.iterator.next()
	}

	return true
}

func bitLen(x int64) (n int64) {
	for ; x >= 0x8000; x >>= 16 {
		n += 16
	}
	if x >= 0x80 {
		x >>= 8
		n += 8
	}
	if x >= 0x8 {
		x >>= 4
		n += 4
	}
	if x >= 0x2 {
		x >>= 2
		n += 2
	}
	if x >= 0x1 {
		n++
	}
	return
}
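To make the index arithmetic above concrete, here is a minimal standalone sketch of how a value maps to a (bucket, sub-bucket) pair. It reuses the `bitLen` helper from the file above; the constants are assumptions chosen to match what `New(1, 10000000, 3)` would compute (`unitMagnitude=0`, `subBucketCount=2048`), not values taken from a running histogram:

```go
package main

import "fmt"

// bitLen mirrors the helper in hdr.go: number of bits needed to represent x.
func bitLen(x int64) (n int64) {
	for ; x >= 0x8000; x >>= 16 {
		n += 16
	}
	if x >= 0x80 {
		x >>= 8
		n += 8
	}
	if x >= 0x8 {
		x >>= 4
		n += 4
	}
	if x >= 0x2 {
		x >>= 2
		n += 2
	}
	if x >= 0x1 {
		n++
	}
	return
}

func main() {
	// Assumed parameters, approximating New(1, 10000000, 3):
	const (
		unitMagnitude               = 0
		subBucketHalfCountMagnitude = 10
		subBucketCount              = 2048
	)
	subBucketMask := int64(subBucketCount-1) << unitMagnitude

	for _, v := range []int64{1, 2047, 2048, 4096} {
		// Same computation as getBucketIndex / getSubBucketIdx above.
		bucketIdx := bitLen(v|subBucketMask) - unitMagnitude - (subBucketHalfCountMagnitude + 1)
		subBucketIdx := v >> uint(bucketIdx+unitMagnitude)
		fmt.Printf("v=%d -> bucket=%d subBucket=%d\n", v, bucketIdx, subBucketIdx)
	}
}
```

Values below `subBucketCount` land in bucket 0 at full resolution (`v=1` maps to sub-bucket 1, `v=2047` to sub-bucket 2047); larger values land in the upper half of later buckets (`v=2048` maps to bucket 1, sub-bucket 1024), which is why `countsIndex` only allocates half a sub-bucket range per bucket after the first.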
333 Godeps/_workspace/src/github.com/codahale/hdrhistogram/hdr_test.go (generated, vendored)
@@ -1,333 +0,0 @@
package hdrhistogram_test

import (
	"reflect"
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/hdrhistogram"
)

func TestHighSigFig(t *testing.T) {
	input := []int64{
		459876, 669187, 711612, 816326, 931423, 1033197, 1131895, 2477317,
		3964974, 12718782,
	}

	hist := hdrhistogram.New(459876, 12718782, 5)
	for _, sample := range input {
		hist.RecordValue(sample)
	}

	if v, want := hist.ValueAtQuantile(50), int64(1048575); v != want {
		t.Errorf("Median was %v, but expected %v", v, want)
	}
}

func TestValueAtQuantile(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	data := []struct {
		q float64
		v int64
	}{
		{q: 50, v: 500223},
		{q: 75, v: 750079},
		{q: 90, v: 900095},
		{q: 95, v: 950271},
		{q: 99, v: 990207},
		{q: 99.9, v: 999423},
		{q: 99.99, v: 999935},
	}

	for _, d := range data {
		if v := h.ValueAtQuantile(d.q); v != d.v {
			t.Errorf("P%v was %v, but expected %v", d.q, v, d.v)
		}
	}
}

func TestMean(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	if v, want := h.Mean(), 500000.013312; v != want {
		t.Errorf("Mean was %v, but expected %v", v, want)
	}
}

func TestStdDev(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	if v, want := h.StdDev(), 288675.1403682715; v != want {
		t.Errorf("StdDev was %v, but expected %v", v, want)
	}
}

func TestTotalCount(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
		if v, want := h.TotalCount(), int64(i+1); v != want {
			t.Errorf("TotalCount was %v, but expected %v", v, want)
		}
	}
}

func TestMax(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	if v, want := h.Max(), int64(999936); v != want {
		t.Errorf("Max was %v, but expected %v", v, want)
	}
}

func TestReset(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	h.Reset()

	if v, want := h.Max(), int64(0); v != want {
		t.Errorf("Max was %v, but expected %v", v, want)
	}
}

func TestMerge(t *testing.T) {
	h1 := hdrhistogram.New(1, 1000, 3)
	h2 := hdrhistogram.New(1, 1000, 3)

	for i := 0; i < 100; i++ {
		if err := h1.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	for i := 100; i < 200; i++ {
		if err := h2.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	h1.Merge(h2)

	if v, want := h1.ValueAtQuantile(50), int64(99); v != want {
		t.Errorf("Median was %v, but expected %v", v, want)
	}
}

func TestMin(t *testing.T) {
	h := hdrhistogram.New(1, 10000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	if v, want := h.Min(), int64(0); v != want {
		t.Errorf("Min was %v, but expected %v", v, want)
	}
}

func TestByteSize(t *testing.T) {
	h := hdrhistogram.New(1, 100000, 3)

	if v, want := h.ByteSize(), 65604; v != want {
		t.Errorf("ByteSize was %v, but expected %d", v, want)
	}
}

func TestRecordCorrectedValue(t *testing.T) {
	h := hdrhistogram.New(1, 100000, 3)

	if err := h.RecordCorrectedValue(10, 100); err != nil {
		t.Fatal(err)
	}

	if v, want := h.ValueAtQuantile(75), int64(10); v != want {
		t.Errorf("Corrected value was %v, but expected %v", v, want)
	}
}

func TestRecordCorrectedValueStall(t *testing.T) {
	h := hdrhistogram.New(1, 100000, 3)

	if err := h.RecordCorrectedValue(1000, 100); err != nil {
		t.Fatal(err)
	}

	if v, want := h.ValueAtQuantile(75), int64(800); v != want {
		t.Errorf("Corrected value was %v, but expected %v", v, want)
	}
}

func TestCumulativeDistribution(t *testing.T) {
	h := hdrhistogram.New(1, 100000000, 3)

	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	actual := h.CumulativeDistribution()
	expected := []hdrhistogram.Bracket{
		hdrhistogram.Bracket{Quantile: 0, Count: 1, ValueAt: 0},
		hdrhistogram.Bracket{Quantile: 50, Count: 500224, ValueAt: 500223},
		hdrhistogram.Bracket{Quantile: 75, Count: 750080, ValueAt: 750079},
		hdrhistogram.Bracket{Quantile: 87.5, Count: 875008, ValueAt: 875007},
		hdrhistogram.Bracket{Quantile: 93.75, Count: 937984, ValueAt: 937983},
		hdrhistogram.Bracket{Quantile: 96.875, Count: 969216, ValueAt: 969215},
		hdrhistogram.Bracket{Quantile: 98.4375, Count: 984576, ValueAt: 984575},
		hdrhistogram.Bracket{Quantile: 99.21875, Count: 992256, ValueAt: 992255},
		hdrhistogram.Bracket{Quantile: 99.609375, Count: 996352, ValueAt: 996351},
		hdrhistogram.Bracket{Quantile: 99.8046875, Count: 998400, ValueAt: 998399},
		hdrhistogram.Bracket{Quantile: 99.90234375, Count: 999424, ValueAt: 999423},
		hdrhistogram.Bracket{Quantile: 99.951171875, Count: 999936, ValueAt: 999935},
		hdrhistogram.Bracket{Quantile: 99.9755859375, Count: 999936, ValueAt: 999935},
		hdrhistogram.Bracket{Quantile: 99.98779296875, Count: 999936, ValueAt: 999935},
		hdrhistogram.Bracket{Quantile: 99.993896484375, Count: 1000000, ValueAt: 1000447},
		hdrhistogram.Bracket{Quantile: 100, Count: 1000000, ValueAt: 1000447},
	}

	if !reflect.DeepEqual(actual, expected) {
		t.Errorf("CF was %#v, but expected %#v", actual, expected)
	}
}

func BenchmarkHistogramRecordValue(b *testing.B) {
	h := hdrhistogram.New(1, 10000000, 3)
	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			b.Fatal(err)
		}
	}
	b.ResetTimer()
	b.ReportAllocs()

	for i := 0; i < b.N; i++ {
		h.RecordValue(100)
	}
}

func BenchmarkNew(b *testing.B) {
	b.ReportAllocs()

	for i := 0; i < b.N; i++ {
		hdrhistogram.New(1, 120000, 3) // this could track 1ms-2min
	}
}

func TestUnitMagnitudeOverflow(t *testing.T) {
	h := hdrhistogram.New(0, 200, 4)
	if err := h.RecordValue(11); err != nil {
		t.Fatal(err)
	}
}

func TestSubBucketMaskOverflow(t *testing.T) {
	hist := hdrhistogram.New(2e7, 1e8, 5)
	for _, sample := range [...]int64{1e8, 2e7, 3e7} {
		hist.RecordValue(sample)
	}

	for q, want := range map[float64]int64{
		50:    33554431,
		83.33: 33554431,
		83.34: 100663295,
		99:    100663295,
	} {
		if got := hist.ValueAtQuantile(q); got != want {
			t.Errorf("got %d for %fth percentile. want: %d", got, q, want)
		}
	}
}

func TestExportImport(t *testing.T) {
	min := int64(1)
	max := int64(10000000)
	sigfigs := 3
	h := hdrhistogram.New(min, max, sigfigs)
	for i := 0; i < 1000000; i++ {
		if err := h.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	s := h.Export()

	if v := s.LowestTrackableValue; v != min {
		t.Errorf("LowestTrackableValue was %v, but expected %v", v, min)
	}

	if v := s.HighestTrackableValue; v != max {
		t.Errorf("HighestTrackableValue was %v, but expected %v", v, max)
	}

	if v := int(s.SignificantFigures); v != sigfigs {
		t.Errorf("SignificantFigures was %v, but expected %v", v, sigfigs)
	}

	if imported := hdrhistogram.Import(s); !imported.Equals(h) {
		t.Error("Expected Histograms to be equivalent")
	}
}

func TestEquals(t *testing.T) {
	h1 := hdrhistogram.New(1, 10000000, 3)
	for i := 0; i < 1000000; i++ {
		if err := h1.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	h2 := hdrhistogram.New(1, 10000000, 3)
	for i := 0; i < 10000; i++ {
		if err := h2.RecordValue(int64(i)); err != nil {
			t.Fatal(err)
		}
	}

	if h1.Equals(h2) {
		t.Error("Expected Histograms to not be equivalent")
	}

	h1.Reset()
	h2.Reset()

	if !h1.Equals(h2) {
		t.Error("Expected Histograms to be equivalent")
	}
}
45 Godeps/_workspace/src/github.com/codahale/hdrhistogram/window.go (generated, vendored)
@@ -1,45 +0,0 @@
package hdrhistogram

// A WindowedHistogram combines histograms to provide windowed statistics.
type WindowedHistogram struct {
	idx int
	h   []Histogram
	m   *Histogram

	Current *Histogram
}

// NewWindowed creates a new WindowedHistogram with N underlying histograms with
// the given parameters.
func NewWindowed(n int, minValue, maxValue int64, sigfigs int) *WindowedHistogram {
	w := WindowedHistogram{
		idx: -1,
		h:   make([]Histogram, n),
		m:   New(minValue, maxValue, sigfigs),
	}

	for i := range w.h {
		w.h[i] = *New(minValue, maxValue, sigfigs)
	}
	w.Rotate()

	return &w
}

// Merge returns a histogram which includes the recorded values from all the
// sections of the window.
func (w *WindowedHistogram) Merge() *Histogram {
	w.m.Reset()
	for _, h := range w.h {
		w.m.Merge(&h)
	}
	return w.m
}

// Rotate resets the oldest histogram and rotates it to be used as the current
// histogram.
func (w *WindowedHistogram) Rotate() {
	w.idx++
	w.Current = &w.h[w.idx%len(w.h)]
	w.Current.Reset()
}
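The rotation scheme in `window.go` can be sketched standalone without the histogram machinery. This is a stand-in using plain `int64` totals in place of histograms (all names here are hypothetical, not part of the library): `Rotate` advances to the oldest slot and clears it, so `Merge` always covers the last n intervals:

```go
package main

import "fmt"

// window mimics WindowedHistogram's ring: idx walks forward forever, and
// idx % len(slots) picks the slot to reuse, so the oldest data ages out.
type window struct {
	idx     int
	slots   []int64
	Current *int64
}

func newWindow(n int) *window {
	w := &window{idx: -1, slots: make([]int64, n)}
	w.Rotate()
	return w
}

// Rotate advances to the oldest slot, clears it, and makes it Current.
func (w *window) Rotate() {
	w.idx++
	w.Current = &w.slots[w.idx%len(w.slots)]
	*w.Current = 0
}

// Merge sums every slot, covering the whole window.
func (w *window) Merge() int64 {
	var total int64
	for _, s := range w.slots {
		total += s
	}
	return total
}

func main() {
	w := newWindow(2)
	*w.Current += 100 // interval 1
	w.Rotate()
	*w.Current += 50 // interval 2
	fmt.Println(w.Merge()) // both intervals in the window: 150
	w.Rotate()             // reuses (and clears) interval 1's slot
	*w.Current += 25
	fmt.Println(w.Merge()) // interval 1 has aged out: 75
}
```

This mirrors why `NewWindowed` starts with `idx: -1` and immediately calls `Rotate()`: the first rotation lands on slot 0 and initializes `Current`.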
64 Godeps/_workspace/src/github.com/codahale/hdrhistogram/window_test.go (generated, vendored)
@@ -1,64 +0,0 @@
package hdrhistogram_test

import (
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/hdrhistogram"
)

func TestWindowedHistogram(t *testing.T) {
	w := hdrhistogram.NewWindowed(2, 1, 1000, 3)

	for i := 0; i < 100; i++ {
		w.Current.RecordValue(int64(i))
	}
	w.Rotate()

	for i := 100; i < 200; i++ {
		w.Current.RecordValue(int64(i))
	}
	w.Rotate()

	for i := 200; i < 300; i++ {
		w.Current.RecordValue(int64(i))
	}

	if v, want := w.Merge().ValueAtQuantile(50), int64(199); v != want {
		t.Errorf("Median was %v, but expected %v", v, want)
	}
}

func BenchmarkWindowedHistogramRecordAndRotate(b *testing.B) {
	w := hdrhistogram.NewWindowed(3, 1, 10000000, 3)
	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		if err := w.Current.RecordValue(100); err != nil {
			b.Fatal(err)
		}

		if i%100000 == 1 {
			w.Rotate()
		}
	}
}

func BenchmarkWindowedHistogramMerge(b *testing.B) {
	w := hdrhistogram.NewWindowed(3, 1, 10000000, 3)
	for i := 0; i < 10000000; i++ {
		if err := w.Current.RecordValue(100); err != nil {
			b.Fatal(err)
		}

		if i%100000 == 1 {
			w.Rotate()
		}
	}
	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		w.Merge()
	}
}
9 Godeps/_workspace/src/github.com/codahale/metrics/.travis.yml (generated, vendored)
@@ -1,9 +0,0 @@
language: go
go:
  - 1.3.3
notifications:
  # See http://about.travis-ci.org/docs/user/build-configuration/ to learn more
  # about configuring notification recipients and more.
  email:
    recipients:
      - coda.hale@gmail.com
21 Godeps/_workspace/src/github.com/codahale/metrics/LICENSE (generated, vendored)
@@ -1,21 +0,0 @@
The MIT License (MIT)

Copyright (c) 2014 Coda Hale

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
8 Godeps/_workspace/src/github.com/codahale/metrics/README.md (generated, vendored)
@@ -1,8 +0,0 @@
metrics
=======

[Build Status](https://travis-ci.org/codahale/metrics)

A Go library which provides light-weight instrumentation for your application.

For documentation, check [godoc](http://godoc.org/github.com/codahale/metrics).
329 Godeps/_workspace/src/github.com/codahale/metrics/metrics.go (generated, vendored)
@@ -1,329 +0,0 @@
// Package metrics provides minimalist instrumentation for your applications in
// the form of counters and gauges.
//
// Counters
//
// A counter is a monotonically-increasing, unsigned, 64-bit integer used to
// represent the number of times an event has occurred. By tracking the deltas
// between measurements of a counter over intervals of time, an aggregation
// layer can derive rates, acceleration, etc.
//
// Gauges
//
// A gauge returns instantaneous measurements of something using signed, 64-bit
// integers. This value does not need to be monotonic.
//
// Histograms
//
// A histogram tracks the distribution of a stream of values (e.g. the number of
// milliseconds it takes to handle requests), adding gauges for the values at
// meaningful quantiles: 50th, 75th, 90th, 95th, 99th, 99.9th.
//
// Reporting
//
// Measurements from counters and gauges are available as expvars. Your service
// should return its expvars from an HTTP endpoint (i.e., /debug/vars) as a JSON
// object.
package metrics

import (
	"expvar"
	"sync"
	"time"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/hdrhistogram"
)

// A Counter is a monotonically increasing unsigned integer.
//
// Use a counter to derive rates (e.g., record total number of requests, derive
// requests per second).
type Counter string

// Add increments the counter by one.
func (c Counter) Add() {
	c.AddN(1)
}

// AddN increments the counter by N.
func (c Counter) AddN(delta uint64) {
	cm.Lock()
	counters[string(c)] += delta
	cm.Unlock()
}

// SetFunc sets the counter's value to the lazily-called return value of the
// given function.
func (c Counter) SetFunc(f func() uint64) {
	cm.Lock()
	defer cm.Unlock()

	counterFuncs[string(c)] = f
}

// SetBatchFunc sets the counter's value to the lazily-called return value of
// the given function, with an additional initializer function for a related
// batch of counters, all of which are keyed by an arbitrary value.
func (c Counter) SetBatchFunc(key interface{}, init func(), f func() uint64) {
	cm.Lock()
	defer cm.Unlock()

	gm.Lock()
	defer gm.Unlock()

	counterFuncs[string(c)] = f
	if _, ok := inits[key]; !ok {
		inits[key] = init
	}
}

// Remove removes the given counter.
func (c Counter) Remove() {
	cm.Lock()
	defer cm.Unlock()

	gm.Lock()
	defer gm.Unlock()

	delete(counters, string(c))
	delete(counterFuncs, string(c))
	delete(inits, string(c))
}

// A Gauge is an instantaneous measurement of a value.
//
// Use a gauge to track metrics which increase and decrease (e.g., amount of
// free memory).
type Gauge string

// Set the gauge's value to the given value.
func (g Gauge) Set(value int64) {
	gm.Lock()
	defer gm.Unlock()

	gauges[string(g)] = func() int64 {
		return value
	}
}

// SetFunc sets the gauge's value to the lazily-called return value of the given
// function.
func (g Gauge) SetFunc(f func() int64) {
	gm.Lock()
	defer gm.Unlock()

	gauges[string(g)] = f
}

// SetBatchFunc sets the gauge's value to the lazily-called return value of the
// given function, with an additional initializer function for a related batch
// of gauges, all of which are keyed by an arbitrary value.
func (g Gauge) SetBatchFunc(key interface{}, init func(), f func() int64) {
	gm.Lock()
	defer gm.Unlock()

	gauges[string(g)] = f
	if _, ok := inits[key]; !ok {
		inits[key] = init
	}
}

// Remove removes the given gauge.
func (g Gauge) Remove() {
	gm.Lock()
	defer gm.Unlock()

	delete(gauges, string(g))
	delete(inits, string(g))
}

// Reset removes all existing counters and gauges.
func Reset() {
	cm.Lock()
	defer cm.Unlock()

	gm.Lock()
	defer gm.Unlock()

	hm.Lock()
	defer hm.Unlock()

	counters = make(map[string]uint64)
	counterFuncs = make(map[string]func() uint64)
	gauges = make(map[string]func() int64)
	histograms = make(map[string]*Histogram)
|
||||
inits = make(map[interface{}]func())
|
||||
}
|
||||
|
||||
// Snapshot returns a copy of the values of all registered counters and gauges.
|
||||
func Snapshot() (c map[string]uint64, g map[string]int64) {
|
||||
cm.Lock()
|
||||
defer cm.Unlock()
|
||||
|
||||
gm.Lock()
|
||||
defer gm.Unlock()
|
||||
|
||||
hm.Lock()
|
||||
defer hm.Unlock()
|
||||
|
||||
for _, init := range inits {
|
||||
init()
|
||||
}
|
||||
|
||||
c = make(map[string]uint64, len(counters)+len(counterFuncs))
|
||||
for n, v := range counters {
|
||||
c[n] = v
|
||||
}
|
||||
|
||||
for n, f := range counterFuncs {
|
||||
c[n] = f()
|
||||
}
|
||||
|
||||
g = make(map[string]int64, len(gauges))
|
||||
for n, f := range gauges {
|
||||
g[n] = f()
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// NewHistogram returns a windowed HDR histogram which drops data older than
|
||||
// five minutes. The returned histogram is safe to use from multiple goroutines.
|
||||
//
|
||||
// Use a histogram to track the distribution of a stream of values (e.g., the
|
||||
// latency associated with HTTP requests).
|
||||
func NewHistogram(name string, minValue, maxValue int64, sigfigs int) *Histogram {
|
||||
hm.Lock()
|
||||
defer hm.Unlock()
|
||||
|
||||
if _, ok := histograms[name]; ok {
|
||||
panic(name + " already exists")
|
||||
}
|
||||
|
||||
hist := &Histogram{
|
||||
name: name,
|
||||
hist: hdrhistogram.NewWindowed(5, minValue, maxValue, sigfigs),
|
||||
}
|
||||
histograms[name] = hist
|
||||
|
||||
Gauge(name+".P50").SetBatchFunc(hname(name), hist.merge, hist.valueAt(50))
|
||||
Gauge(name+".P75").SetBatchFunc(hname(name), hist.merge, hist.valueAt(75))
|
||||
Gauge(name+".P90").SetBatchFunc(hname(name), hist.merge, hist.valueAt(90))
|
||||
Gauge(name+".P95").SetBatchFunc(hname(name), hist.merge, hist.valueAt(95))
|
||||
Gauge(name+".P99").SetBatchFunc(hname(name), hist.merge, hist.valueAt(99))
|
||||
Gauge(name+".P999").SetBatchFunc(hname(name), hist.merge, hist.valueAt(99.9))
|
||||
|
||||
return hist
|
||||
}
|
||||
|
||||
// Remove removes the given histogram.
|
||||
func (h *Histogram) Remove() {
|
||||
|
||||
hm.Lock()
|
||||
defer hm.Unlock()
|
||||
|
||||
Gauge(h.name + ".P50").Remove()
|
||||
Gauge(h.name + ".P75").Remove()
|
||||
Gauge(h.name + ".P90").Remove()
|
||||
Gauge(h.name + ".P95").Remove()
|
||||
Gauge(h.name + ".P99").Remove()
|
||||
Gauge(h.name + ".P999").Remove()
|
||||
|
||||
delete(histograms, h.name)
|
||||
}
|
||||
|
||||
type hname string // unexported to prevent collisions
|
||||
|
||||
// A Histogram measures the distribution of a stream of values.
|
||||
type Histogram struct {
|
||||
name string
|
||||
hist *hdrhistogram.WindowedHistogram
|
||||
m *hdrhistogram.Histogram
|
||||
rw sync.RWMutex
|
||||
}
|
||||
|
||||
// Name returns the name of the histogram
|
||||
func (h *Histogram) Name() string {
|
||||
return h.name
|
||||
}
|
||||
|
||||
// RecordValue records the given value, or returns an error if the value is out
|
||||
// of range.
|
||||
// Returned error values are of type Error.
|
||||
func (h *Histogram) RecordValue(v int64) error {
|
||||
h.rw.Lock()
|
||||
defer h.rw.Unlock()
|
||||
|
||||
err := h.hist.Current.RecordValue(v)
|
||||
if err != nil {
|
||||
return Error{h.name, err}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (h *Histogram) rotate() {
|
||||
h.rw.Lock()
|
||||
defer h.rw.Unlock()
|
||||
|
||||
h.hist.Rotate()
|
||||
}
|
||||
|
||||
func (h *Histogram) merge() {
|
||||
h.rw.Lock()
|
||||
defer h.rw.Unlock()
|
||||
|
||||
h.m = h.hist.Merge()
|
||||
}
|
||||
|
||||
func (h *Histogram) valueAt(q float64) func() int64 {
|
||||
return func() int64 {
|
||||
h.rw.RLock()
|
||||
defer h.rw.RUnlock()
|
||||
|
||||
if h.m == nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
return h.m.ValueAtQuantile(q)
|
||||
}
|
||||
}
|
||||
|
||||
// Error describes an error and the name of the metric where it occurred.
|
||||
type Error struct {
|
||||
Metric string
|
||||
Err error
|
||||
}
|
||||
|
||||
func (e Error) Error() string {
|
||||
return e.Metric + ": " + e.Err.Error()
|
||||
}
|
||||
|
||||
var (
|
||||
counters = make(map[string]uint64)
|
||||
counterFuncs = make(map[string]func() uint64)
|
||||
gauges = make(map[string]func() int64)
|
||||
inits = make(map[interface{}]func())
|
||||
histograms = make(map[string]*Histogram)
|
||||
|
||||
cm, gm, hm sync.Mutex
|
||||
)
|
||||
|
||||
func init() {
|
||||
expvar.Publish("metrics", expvar.Func(func() interface{} {
|
||||
counters, gauges := Snapshot()
|
||||
return map[string]interface{}{
|
||||
"Counters": counters,
|
||||
"Gauges": gauges,
|
||||
}
|
||||
}))
|
||||
|
||||
go func() {
|
||||
for _ = range time.NewTicker(1 * time.Minute).C {
|
||||
hm.Lock()
|
||||
for _, h := range histograms {
|
||||
h.rotate()
|
||||
}
|
||||
hm.Unlock()
|
||||
}
|
||||
}()
|
||||
}
|
||||
217 Godeps/_workspace/src/github.com/codahale/metrics/metrics_test.go generated vendored
@ -1,217 +0,0 @@
package metrics_test

import (
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func TestCounter(t *testing.T) {
	metrics.Reset()

	metrics.Counter("whee").Add()
	metrics.Counter("whee").AddN(10)

	counters, _ := metrics.Snapshot()
	if v, want := counters["whee"], uint64(11); v != want {
		t.Errorf("Counter was %v, but expected %v", v, want)
	}
}

func TestCounterFunc(t *testing.T) {
	metrics.Reset()

	metrics.Counter("whee").SetFunc(func() uint64 {
		return 100
	})

	counters, _ := metrics.Snapshot()
	if v, want := counters["whee"], uint64(100); v != want {
		t.Errorf("Counter was %v, but expected %v", v, want)
	}
}

func TestCounterBatchFunc(t *testing.T) {
	metrics.Reset()

	var a, b uint64

	metrics.Counter("whee").SetBatchFunc(
		"yay",
		func() {
			a, b = 1, 2
		},
		func() uint64 {
			return a
		},
	)

	metrics.Counter("woo").SetBatchFunc(
		"yay",
		func() {
			a, b = 1, 2
		},
		func() uint64 {
			return b
		},
	)

	counters, _ := metrics.Snapshot()
	if v, want := counters["whee"], uint64(1); v != want {
		t.Errorf("Counter was %v, but expected %v", v, want)
	}

	if v, want := counters["woo"], uint64(2); v != want {
		t.Errorf("Counter was %v, but expected %v", v, want)
	}
}

func TestCounterRemove(t *testing.T) {
	metrics.Reset()

	metrics.Counter("whee").Add()
	metrics.Counter("whee").Remove()

	counters, _ := metrics.Snapshot()
	if v, ok := counters["whee"]; ok {
		t.Errorf("Counter was %v, but expected nothing", v)
	}
}

func TestGaugeValue(t *testing.T) {
	metrics.Reset()

	metrics.Gauge("whee").Set(-100)

	_, gauges := metrics.Snapshot()
	if v, want := gauges["whee"], int64(-100); v != want {
		t.Errorf("Gauge was %v, but expected %v", v, want)
	}
}

func TestGaugeFunc(t *testing.T) {
	metrics.Reset()

	metrics.Gauge("whee").SetFunc(func() int64 {
		return -100
	})

	_, gauges := metrics.Snapshot()
	if v, want := gauges["whee"], int64(-100); v != want {
		t.Errorf("Gauge was %v, but expected %v", v, want)
	}
}

func TestGaugeRemove(t *testing.T) {
	metrics.Reset()

	metrics.Gauge("whee").Set(1)
	metrics.Gauge("whee").Remove()

	_, gauges := metrics.Snapshot()
	if v, ok := gauges["whee"]; ok {
		t.Errorf("Gauge was %v, but expected nothing", v)
	}
}

func TestHistogram(t *testing.T) {
	metrics.Reset()

	h := metrics.NewHistogram("heyo", 1, 1000, 3)
	for i := 100; i > 0; i-- {
		for j := 0; j < i; j++ {
			h.RecordValue(int64(i))
		}
	}

	_, gauges := metrics.Snapshot()

	if v, want := gauges["heyo.P50"], int64(71); v != want {
		t.Errorf("P50 was %v, but expected %v", v, want)
	}

	if v, want := gauges["heyo.P75"], int64(87); v != want {
		t.Errorf("P75 was %v, but expected %v", v, want)
	}

	if v, want := gauges["heyo.P90"], int64(95); v != want {
		t.Errorf("P90 was %v, but expected %v", v, want)
	}

	if v, want := gauges["heyo.P95"], int64(98); v != want {
		t.Errorf("P95 was %v, but expected %v", v, want)
	}

	if v, want := gauges["heyo.P99"], int64(100); v != want {
		t.Errorf("P99 was %v, but expected %v", v, want)
	}

	if v, want := gauges["heyo.P999"], int64(100); v != want {
		t.Errorf("P999 was %v, but expected %v", v, want)
	}
}

func TestHistogramRemove(t *testing.T) {
	metrics.Reset()

	h := metrics.NewHistogram("heyo", 1, 1000, 3)
	h.Remove()

	_, gauges := metrics.Snapshot()
	if v, ok := gauges["heyo.P50"]; ok {
		t.Errorf("Gauge was %v, but expected nothing", v)
	}
}

func BenchmarkCounterAdd(b *testing.B) {
	metrics.Reset()

	b.ReportAllocs()
	b.ResetTimer()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			metrics.Counter("test1").Add()
		}
	})
}

func BenchmarkCounterAddN(b *testing.B) {
	metrics.Reset()

	b.ReportAllocs()
	b.ResetTimer()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			metrics.Counter("test2").AddN(100)
		}
	})
}

func BenchmarkGaugeSet(b *testing.B) {
	metrics.Reset()

	b.ReportAllocs()
	b.ResetTimer()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			metrics.Gauge("test2").Set(100)
		}
	})
}

func BenchmarkHistogramRecordValue(b *testing.B) {
	metrics.Reset()
	h := metrics.NewHistogram("hist", 1, 1000, 3)

	b.ReportAllocs()
	b.ResetTimer()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			h.RecordValue(100)
		}
	})
}
18 Godeps/_workspace/src/github.com/codahale/metrics/runtime/doc.go generated vendored
@ -1,18 +0,0 @@
// Package runtime registers gauges and counters for various operationally
// important aspects of the Go runtime.
//
// To use, import this package:
//
//     import _ "github.com/codahale/metrics/runtime"
//
// This registers the following gauges:
//
//     FileDescriptors.Max
//     FileDescriptors.Used
//     Mem.NumGC
//     Mem.PauseTotalNs
//     Mem.LastGC
//     Mem.Alloc
//     Mem.HeapObjects
//     Goroutines.Num
package runtime
46 Godeps/_workspace/src/github.com/codahale/metrics/runtime/fds.go generated vendored
@ -1,46 +0,0 @@
// +build !windows

package runtime

import (
	"io/ioutil"
	"syscall"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func getFDLimit() (uint64, error) {
	var rlimit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rlimit); err != nil {
		return 0, err
	}
	// rlimit.Cur's type is platform-dependent, so here we widen it as far as Go
	// will allow by converting it to a uint64.
	return uint64(rlimit.Cur), nil
}

func getFDUsage() (uint64, error) {
	fds, err := ioutil.ReadDir("/proc/self/fd")
	if err != nil {
		return 0, err
	}
	return uint64(len(fds)), nil
}

func init() {
	metrics.Gauge("FileDescriptors.Max").SetFunc(func() int64 {
		v, err := getFDLimit()
		if err != nil {
			return 0
		}
		return int64(v)
	})

	metrics.Gauge("FileDescriptors.Used").SetFunc(func() int64 {
		v, err := getFDUsage()
		if err != nil {
			return 0
		}
		return int64(v)
	})
}
24 Godeps/_workspace/src/github.com/codahale/metrics/runtime/fds_test.go generated vendored
@ -1,24 +0,0 @@
// +build !windows

package runtime

import (
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func TestFdStats(t *testing.T) {
	_, gauges := metrics.Snapshot()

	expected := []string{
		"FileDescriptors.Max",
		"FileDescriptors.Used",
	}

	for _, name := range expected {
		if _, ok := gauges[name]; !ok {
			t.Errorf("Missing gauge %q", name)
		}
	}
}
4 Godeps/_workspace/src/github.com/codahale/metrics/runtime/fds_windows.go generated vendored
@ -1,4 +0,0 @@
package runtime

func init() {
}
13 Godeps/_workspace/src/github.com/codahale/metrics/runtime/goroutines.go generated vendored
@ -1,13 +0,0 @@
package runtime

import (
	"runtime"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func init() {
	metrics.Gauge("Goroutines.Num").SetFunc(func() int64 {
		return int64(runtime.NumGoroutine())
	})
}
21 Godeps/_workspace/src/github.com/codahale/metrics/runtime/goroutines_test.go generated vendored
@ -1,21 +0,0 @@
package runtime

import (
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func TestGoroutinesStats(t *testing.T) {
	_, gauges := metrics.Snapshot()

	expected := []string{
		"Goroutines.Num",
	}

	for _, name := range expected {
		if _, ok := gauges[name]; !ok {
			t.Errorf("Missing gauge %q", name)
		}
	}
}
48 Godeps/_workspace/src/github.com/codahale/metrics/runtime/memstats.go generated vendored
@ -1,48 +0,0 @@
package runtime

import (
	"runtime"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func init() {
	msg := &memStatGauges{}

	metrics.Counter("Mem.NumGC").SetBatchFunc(key{}, msg.init, msg.numGC)
	metrics.Counter("Mem.PauseTotalNs").SetBatchFunc(key{}, msg.init, msg.totalPause)

	metrics.Gauge("Mem.LastGC").SetBatchFunc(key{}, msg.init, msg.lastPause)
	metrics.Gauge("Mem.Alloc").SetBatchFunc(key{}, msg.init, msg.alloc)
	metrics.Gauge("Mem.HeapObjects").SetBatchFunc(key{}, msg.init, msg.objects)
}

type key struct{} // unexported to prevent collision

type memStatGauges struct {
	stats runtime.MemStats
}

func (msg *memStatGauges) init() {
	runtime.ReadMemStats(&msg.stats)
}

func (msg *memStatGauges) numGC() uint64 {
	return uint64(msg.stats.NumGC)
}

func (msg *memStatGauges) totalPause() uint64 {
	return msg.stats.PauseTotalNs
}

func (msg *memStatGauges) lastPause() int64 {
	return int64(msg.stats.LastGC)
}

func (msg *memStatGauges) alloc() int64 {
	return int64(msg.stats.Alloc)
}

func (msg *memStatGauges) objects() int64 {
	return int64(msg.stats.HeapObjects)
}
34 Godeps/_workspace/src/github.com/codahale/metrics/runtime/memstats_test.go generated vendored
@ -1,34 +0,0 @@
package runtime

import (
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/codahale/metrics"
)

func TestMemStats(t *testing.T) {
	counters, gauges := metrics.Snapshot()

	expectedCounters := []string{
		"Mem.NumGC",
		"Mem.PauseTotalNs",
	}

	expectedGauges := []string{
		"Mem.LastGC",
		"Mem.Alloc",
		"Mem.HeapObjects",
	}

	for _, name := range expectedCounters {
		if _, ok := counters[name]; !ok {
			t.Errorf("Missing counter %q", name)
		}
	}

	for _, name := range expectedGauges {
		if _, ok := gauges[name]; !ok {
			t.Errorf("Missing gauge %q", name)
		}
	}
}
6 Godeps/_workspace/src/github.com/dustin/go-humanize/.gitignore generated vendored
@ -1,6 +0,0 @@
#*
*.[568]
*.a
*~
[568].out
_*
21 Godeps/_workspace/src/github.com/dustin/go-humanize/LICENSE generated vendored
@ -1,21 +0,0 @@
Copyright (c) 2005-2008 Dustin Sallings <dustin@spy.net>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

<http://www.opensource.org/licenses/mit-license.php>
78 Godeps/_workspace/src/github.com/dustin/go-humanize/README.markdown generated vendored
@ -1,78 +0,0 @@
# Humane Units

Just a few functions for helping humanize times and sizes.

`go get` it as `github.com/dustin/go-humanize`, import it as
`"github.com/dustin/go-humanize"`, use it as `humanize`.

## Sizes

This lets you take numbers like `82854982` and convert them to useful
strings like, `83MB` or `79MiB` (whichever you prefer).

Example:

    fmt.Printf("That file is %s.", humanize.Bytes(82854982))

## Times

This lets you take a `time.Time` and spit it out in relative terms.
For example, `12 seconds ago` or `3 days from now`.

Example:

    fmt.Printf("This was touched %s", humanize.Time(someTimeInstance))

Thanks to Kyle Lemons for the time implementation from an IRC
conversation one day. It's pretty neat.

## Ordinals

From a [mailing list discussion][odisc] where a user wanted to be able
to label ordinals.

    0 -> 0th
    1 -> 1st
    2 -> 2nd
    3 -> 3rd
    4 -> 4th
    [...]

Example:

    fmt.Printf("You're my %s best friend.", humanize.Ordinal(193))

## Commas

Want to shove commas into numbers? Be my guest.

    0 -> 0
    100 -> 100
    1000 -> 1,000
    1000000000 -> 1,000,000,000
    -100000 -> -100,000

Example:

    fmt.Printf("You owe $%s.\n", humanize.Comma(6582491))

## Ftoa

Nicer float64 formatter that removes trailing zeros.

    fmt.Printf("%f", 2.24)                // 2.240000
    fmt.Printf("%s", humanize.Ftoa(2.24)) // 2.24
    fmt.Printf("%f", 2.0)                 // 2.000000
    fmt.Printf("%s", humanize.Ftoa(2.0))  // 2

## SI notation

Format numbers with [SI notation][sinotation].

Example:

    humanize.SI(0.00000000223, "M") // 2.23nM


[odisc]: https://groups.google.com/d/topic/golang-nuts/l8NhI74jl-4/discussion
[sinotation]: http://en.wikipedia.org/wiki/Metric_prefix
31 Godeps/_workspace/src/github.com/dustin/go-humanize/big.go generated vendored
@ -1,31 +0,0 @@
package humanize

import (
	"math/big"
)

// order of magnitude (to a max order)
func oomm(n, b *big.Int, maxmag int) (float64, int) {
	mag := 0
	m := &big.Int{}
	for n.Cmp(b) >= 0 {
		n.DivMod(n, b, m)
		mag++
		if mag == maxmag && maxmag >= 0 {
			break
		}
	}
	return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}

// total order of magnitude
// (same as above, but with no upper limit)
func oom(n, b *big.Int) (float64, int) {
	mag := 0
	m := &big.Int{}
	for n.Cmp(b) >= 0 {
		n.DivMod(n, b, m)
		mag++
	}
	return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}
164 Godeps/_workspace/src/github.com/dustin/go-humanize/bigbytes.go generated vendored
@ -1,164 +0,0 @@
package humanize

import (
	"fmt"
	"math/big"
	"strings"
	"unicode"
)

var (
	bigIECExp = big.NewInt(1024)

	// BigByte is one byte in big.Ints
	BigByte = big.NewInt(1)
	// BigKiByte is 1,024 bytes in big.Ints
	BigKiByte = (&big.Int{}).Mul(BigByte, bigIECExp)
	// BigMiByte is 1,024 k bytes in big.Ints
	BigMiByte = (&big.Int{}).Mul(BigKiByte, bigIECExp)
	// BigGiByte is 1,024 m bytes in big.Ints
	BigGiByte = (&big.Int{}).Mul(BigMiByte, bigIECExp)
	// BigTiByte is 1,024 g bytes in big.Ints
	BigTiByte = (&big.Int{}).Mul(BigGiByte, bigIECExp)
	// BigPiByte is 1,024 t bytes in big.Ints
	BigPiByte = (&big.Int{}).Mul(BigTiByte, bigIECExp)
	// BigEiByte is 1,024 p bytes in big.Ints
	BigEiByte = (&big.Int{}).Mul(BigPiByte, bigIECExp)
	// BigZiByte is 1,024 e bytes in big.Ints
	BigZiByte = (&big.Int{}).Mul(BigEiByte, bigIECExp)
	// BigYiByte is 1,024 z bytes in big.Ints
	BigYiByte = (&big.Int{}).Mul(BigZiByte, bigIECExp)
)

var (
	bigSIExp = big.NewInt(1000)

	// BigSIByte is one SI byte in big.Ints
	BigSIByte = big.NewInt(1)
	// BigKByte is 1,000 SI bytes in big.Ints
	BigKByte = (&big.Int{}).Mul(BigSIByte, bigSIExp)
	// BigMByte is 1,000 SI k bytes in big.Ints
	BigMByte = (&big.Int{}).Mul(BigKByte, bigSIExp)
	// BigGByte is 1,000 SI m bytes in big.Ints
	BigGByte = (&big.Int{}).Mul(BigMByte, bigSIExp)
	// BigTByte is 1,000 SI g bytes in big.Ints
	BigTByte = (&big.Int{}).Mul(BigGByte, bigSIExp)
	// BigPByte is 1,000 SI t bytes in big.Ints
	BigPByte = (&big.Int{}).Mul(BigTByte, bigSIExp)
	// BigEByte is 1,000 SI p bytes in big.Ints
	BigEByte = (&big.Int{}).Mul(BigPByte, bigSIExp)
	// BigZByte is 1,000 SI e bytes in big.Ints
	BigZByte = (&big.Int{}).Mul(BigEByte, bigSIExp)
	// BigYByte is 1,000 SI z bytes in big.Ints
	BigYByte = (&big.Int{}).Mul(BigZByte, bigSIExp)
)

var bigBytesSizeTable = map[string]*big.Int{
	"b":   BigByte,
	"kib": BigKiByte,
	"kb":  BigKByte,
	"mib": BigMiByte,
	"mb":  BigMByte,
	"gib": BigGiByte,
	"gb":  BigGByte,
	"tib": BigTiByte,
	"tb":  BigTByte,
	"pib": BigPiByte,
	"pb":  BigPByte,
	"eib": BigEiByte,
	"eb":  BigEByte,
	"zib": BigZiByte,
	"zb":  BigZByte,
	"yib": BigYiByte,
	"yb":  BigYByte,
	// Without suffix
	"":   BigByte,
	"ki": BigKiByte,
	"k":  BigKByte,
	"mi": BigMiByte,
	"m":  BigMByte,
	"gi": BigGiByte,
	"g":  BigGByte,
	"ti": BigTiByte,
	"t":  BigTByte,
	"pi": BigPiByte,
	"p":  BigPByte,
	"ei": BigEiByte,
	"e":  BigEByte,
	"z":  BigZByte,
	"zi": BigZiByte,
	"y":  BigYByte,
	"yi": BigYiByte,
}

var ten = big.NewInt(10)

func humanateBigBytes(s, base *big.Int, sizes []string) string {
	if s.Cmp(ten) < 0 {
		return fmt.Sprintf("%dB", s)
	}
	c := (&big.Int{}).Set(s)
	val, mag := oomm(c, base, len(sizes)-1)
	suffix := sizes[mag]
	f := "%.0f%s"
	if val < 10 {
		f = "%.1f%s"
	}

	return fmt.Sprintf(f, val, suffix)
}

// BigBytes produces a human readable representation of an SI size.
//
// See also: ParseBigBytes.
//
// BigBytes(82854982) -> 83MB
func BigBytes(s *big.Int) string {
	sizes := []string{"B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"}
	return humanateBigBytes(s, bigSIExp, sizes)
}

// BigIBytes produces a human readable representation of an IEC size.
//
// See also: ParseBigBytes.
//
// BigIBytes(82854982) -> 79MiB
func BigIBytes(s *big.Int) string {
	sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"}
	return humanateBigBytes(s, bigIECExp, sizes)
}

// ParseBigBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See also: BigBytes, BigIBytes.
//
// ParseBigBytes("42MB") -> 42000000, nil
// ParseBigBytes("42mib") -> 44040192, nil
func ParseBigBytes(s string) (*big.Int, error) {
	lastDigit := 0
	for _, r := range s {
		if !(unicode.IsDigit(r) || r == '.') {
			break
		}
		lastDigit++
	}

	val := &big.Rat{}
	_, err := fmt.Sscanf(s[:lastDigit], "%f", val)
	if err != nil {
		return nil, err
	}

	extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
	if m, ok := bigBytesSizeTable[extra]; ok {
		mv := (&big.Rat{}).SetInt(m)
		val.Mul(val, mv)
		rv := &big.Int{}
		rv.Div(val.Num(), val.Denom())
		return rv, nil
	}

	return nil, fmt.Errorf("unhandled size name: %v", extra)
}
219 Godeps/_workspace/src/github.com/dustin/go-humanize/bigbytes_test.go generated vendored
@ -1,219 +0,0 @@
package humanize

import (
	"math/big"
	"testing"
)

func TestBigByteParsing(t *testing.T) {
	tests := []struct {
		in  string
		exp uint64
	}{
		{"42", 42},
		{"42MB", 42000000},
		{"42MiB", 44040192},
		{"42mb", 42000000},
		{"42mib", 44040192},
		{"42MIB", 44040192},
		{"42 MB", 42000000},
		{"42 MiB", 44040192},
		{"42 mb", 42000000},
		{"42 mib", 44040192},
		{"42 MIB", 44040192},
		{"42.5MB", 42500000},
		{"42.5MiB", 44564480},
		{"42.5 MB", 42500000},
		{"42.5 MiB", 44564480},
		// No need to say B
		{"42M", 42000000},
		{"42Mi", 44040192},
		{"42m", 42000000},
		{"42mi", 44040192},
		{"42MI", 44040192},
		{"42 M", 42000000},
		{"42 Mi", 44040192},
		{"42 m", 42000000},
		{"42 mi", 44040192},
		{"42 MI", 44040192},
		{"42.5M", 42500000},
		{"42.5Mi", 44564480},
		{"42.5 M", 42500000},
		{"42.5 Mi", 44564480},
		// Large testing, breaks when too much larger than
		// this.
		{"12.5 EB", uint64(12.5 * float64(EByte))},
		{"12.5 E", uint64(12.5 * float64(EByte))},
		{"12.5 EiB", uint64(12.5 * float64(EiByte))},
	}

	for _, p := range tests {
		got, err := ParseBigBytes(p.in)
		if err != nil {
			t.Errorf("Couldn't parse %v: %v", p.in, err)
		} else {
			if got.Uint64() != p.exp {
				t.Errorf("Expected %v for %v, got %v",
					p.exp, p.in, got)
			}
		}
	}
}

func TestBigByteErrors(t *testing.T) {
	got, err := ParseBigBytes("84 JB")
	if err == nil {
		t.Errorf("Expected error, got %v", got)
	}
	got, err = ParseBigBytes("")
	if err == nil {
		t.Errorf("Expected error parsing nothing")
	}
}

func bbyte(in uint64) string {
	return BigBytes((&big.Int{}).SetUint64(in))
}

func bibyte(in uint64) string {
	return BigIBytes((&big.Int{}).SetUint64(in))
}

func TestBigBytes(t *testing.T) {
	testList{
		{"bytes(0)", bbyte(0), "0B"},
		{"bytes(1)", bbyte(1), "1B"},
		{"bytes(803)", bbyte(803), "803B"},
		{"bytes(999)", bbyte(999), "999B"},

		{"bytes(1024)", bbyte(1024), "1.0KB"},
		{"bytes(1MB - 1)", bbyte(MByte - Byte), "1000KB"},

		{"bytes(1MB)", bbyte(1024 * 1024), "1.0MB"},
		{"bytes(1GB - 1K)", bbyte(GByte - KByte), "1000MB"},

		{"bytes(1GB)", bbyte(GByte), "1.0GB"},
		{"bytes(1TB - 1M)", bbyte(TByte - MByte), "1000GB"},

		{"bytes(1TB)", bbyte(TByte), "1.0TB"},
		{"bytes(1PB - 1T)", bbyte(PByte - TByte), "999TB"},

		{"bytes(1PB)", bbyte(PByte), "1.0PB"},
		{"bytes(1PB - 1T)", bbyte(EByte - PByte), "999PB"},

		{"bytes(1EB)", bbyte(EByte), "1.0EB"},
		// Overflows.
		// {"bytes(1EB - 1P)", Bytes((KByte*EByte)-PByte), "1023EB"},

		{"bytes(0)", bibyte(0), "0B"},
		{"bytes(1)", bibyte(1), "1B"},
		{"bytes(803)", bibyte(803), "803B"},
		{"bytes(1023)", bibyte(1023), "1023B"},

		{"bytes(1024)", bibyte(1024), "1.0KiB"},
		{"bytes(1MB - 1)", bibyte(MiByte - IByte), "1024KiB"},

		{"bytes(1MB)", bibyte(1024 * 1024), "1.0MiB"},
		{"bytes(1GB - 1K)", bibyte(GiByte - KiByte), "1024MiB"},

		{"bytes(1GB)", bibyte(GiByte), "1.0GiB"},
		{"bytes(1TB - 1M)", bibyte(TiByte - MiByte), "1024GiB"},

		{"bytes(1TB)", bibyte(TiByte), "1.0TiB"},
		{"bytes(1PB - 1T)", bibyte(PiByte - TiByte), "1023TiB"},

		{"bytes(1PB)", bibyte(PiByte), "1.0PiB"},
		{"bytes(1PB - 1T)", bibyte(EiByte - PiByte), "1023PiB"},
||||
{"bytes(1EiB)", bibyte(EiByte), "1.0EiB"},
|
||||
// Overflows.
|
||||
// {"bytes(1EB - 1P)", bibyte((KIByte*EIByte)-PiByte), "1023EB"},
|
||||
|
||||
{"bytes(5.5GiB)", bibyte(5.5 * GiByte), "5.5GiB"},
|
||||
|
||||
{"bytes(5.5GB)", bbyte(5.5 * GByte), "5.5GB"},
|
||||
}.validate(t)
|
||||
}
|
||||
|
||||
func TestVeryBigBytes(t *testing.T) {
|
||||
b, _ := (&big.Int{}).SetString("15347691069326346944512", 10)
|
||||
s := BigBytes(b)
|
||||
if s != "15ZB" {
|
||||
t.Errorf("Expected 15ZB, got %v", s)
|
||||
}
|
||||
s = BigIBytes(b)
|
||||
if s != "13ZiB" {
|
||||
t.Errorf("Expected 13ZiB, got %v", s)
|
||||
}
|
||||
|
||||
b, _ = (&big.Int{}).SetString("15716035654990179271180288", 10)
|
||||
s = BigBytes(b)
|
||||
if s != "16YB" {
|
||||
t.Errorf("Expected 16YB, got %v", s)
|
||||
}
|
||||
s = BigIBytes(b)
|
||||
if s != "13YiB" {
|
||||
t.Errorf("Expected 13YiB, got %v", s)
|
||||
}
|
||||
}
|
||||
|
||||
func TestVeryVeryBigBytes(t *testing.T) {
|
||||
b, _ := (&big.Int{}).SetString("16093220510709943573688614912", 10)
|
||||
s := BigBytes(b)
|
||||
if s != "16093YB" {
|
||||
t.Errorf("Expected 16093YB, got %v", s)
|
||||
}
|
||||
s = BigIBytes(b)
|
||||
if s != "13312YiB" {
|
||||
t.Errorf("Expected 13312YiB, got %v", s)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseVeryBig(t *testing.T) {
|
||||
tests := []struct {
|
||||
in string
|
||||
out string
|
||||
}{
|
||||
{"16ZB", "16000000000000000000000"},
|
||||
{"16ZiB", "18889465931478580854784"},
|
||||
{"16.5ZB", "16500000000000000000000"},
|
||||
{"16.5ZiB", "19479761741837286506496"},
|
||||
{"16Z", "16000000000000000000000"},
|
||||
{"16Zi", "18889465931478580854784"},
|
||||
{"16.5Z", "16500000000000000000000"},
|
||||
{"16.5Zi", "19479761741837286506496"},
|
||||
|
||||
{"16YB", "16000000000000000000000000"},
|
||||
{"16YiB", "19342813113834066795298816"},
|
||||
{"16.5YB", "16500000000000000000000000"},
|
||||
{"16.5YiB", "19947276023641381382651904"},
|
||||
{"16Y", "16000000000000000000000000"},
|
||||
{"16Yi", "19342813113834066795298816"},
|
||||
{"16.5Y", "16500000000000000000000000"},
|
||||
{"16.5Yi", "19947276023641381382651904"},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
x, err := ParseBigBytes(test.in)
|
||||
if err != nil {
|
||||
t.Errorf("Error parsing %q: %v", test.in, err)
|
||||
continue
|
||||
}
|
||||
|
||||
if x.String() != test.out {
|
||||
t.Errorf("Expected %q for %q, got %v", test.out, test.in, x)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkParseBigBytes(b *testing.B) {
|
||||
for i := 0; i < b.N; i++ {
|
||||
ParseBigBytes("16.5Z")
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkBigBytes(b *testing.B) {
|
||||
for i := 0; i < b.N; i++ {
|
||||
bibyte(16.5 * GByte)
|
||||
}
|
||||
}
|
||||
134 Godeps/_workspace/src/github.com/dustin/go-humanize/bytes.go generated vendored
@@ -1,134 +0,0 @@
package humanize

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"unicode"
)

// IEC Sizes.
// kibis of bits
const (
	Byte = 1 << (iota * 10)
	KiByte
	MiByte
	GiByte
	TiByte
	PiByte
	EiByte
)

// SI Sizes.
const (
	IByte = 1
	KByte = IByte * 1000
	MByte = KByte * 1000
	GByte = MByte * 1000
	TByte = GByte * 1000
	PByte = TByte * 1000
	EByte = PByte * 1000
)

var bytesSizeTable = map[string]uint64{
	"b":   Byte,
	"kib": KiByte,
	"kb":  KByte,
	"mib": MiByte,
	"mb":  MByte,
	"gib": GiByte,
	"gb":  GByte,
	"tib": TiByte,
	"tb":  TByte,
	"pib": PiByte,
	"pb":  PByte,
	"eib": EiByte,
	"eb":  EByte,
	// Without suffix
	"":   Byte,
	"ki": KiByte,
	"k":  KByte,
	"mi": MiByte,
	"m":  MByte,
	"gi": GiByte,
	"g":  GByte,
	"ti": TiByte,
	"t":  TByte,
	"pi": PiByte,
	"p":  PByte,
	"ei": EiByte,
	"e":  EByte,
}

func logn(n, b float64) float64 {
	return math.Log(n) / math.Log(b)
}

func humanateBytes(s uint64, base float64, sizes []string) string {
	if s < 10 {
		return fmt.Sprintf("%dB", s)
	}
	e := math.Floor(logn(float64(s), base))
	suffix := sizes[int(e)]
	val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10
	f := "%.0f%s"
	if val < 10 {
		f = "%.1f%s"
	}

	return fmt.Sprintf(f, val, suffix)
}

// Bytes produces a human readable representation of an SI size.
//
// See also: ParseBytes.
//
// Bytes(82854982) -> 83MB
func Bytes(s uint64) string {
	sizes := []string{"B", "KB", "MB", "GB", "TB", "PB", "EB"}
	return humanateBytes(s, 1000, sizes)
}

// IBytes produces a human readable representation of an IEC size.
//
// See also: ParseBytes.
//
// IBytes(82854982) -> 79MiB
func IBytes(s uint64) string {
	sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
	return humanateBytes(s, 1024, sizes)
}

// ParseBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See Also: Bytes, IBytes.
//
// ParseBytes("42MB") -> 42000000, nil
// ParseBytes("42mib") -> 44040192, nil
func ParseBytes(s string) (uint64, error) {
	lastDigit := 0
	for _, r := range s {
		if !(unicode.IsDigit(r) || r == '.') {
			break
		}
		lastDigit++
	}

	f, err := strconv.ParseFloat(s[:lastDigit], 64)
	if err != nil {
		return 0, err
	}

	extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
	if m, ok := bytesSizeTable[extra]; ok {
		f *= float64(m)
		if f >= math.MaxUint64 {
			return 0, fmt.Errorf("too large: %v", s)
		}
		return uint64(f), nil
	}

	return 0, fmt.Errorf("unhandled size name: %v", extra)
}
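The rounding behavior of the removed `humanateBytes` helper above (largest fitting unit, one decimal only for values below 10) is easy to miss in diff form. A stdlib-only sketch of the same logic, with the function renamed `humanate` for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// humanate mirrors the removed humanateBytes: pick the largest unit
// for the given base, then round to at most one decimal place.
func humanate(s uint64, base float64, sizes []string) string {
	if s < 10 {
		return fmt.Sprintf("%dB", s)
	}
	e := math.Floor(math.Log(float64(s)) / math.Log(base))
	val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10
	format := "%.0f%s"
	if val < 10 {
		format = "%.1f%s"
	}
	return fmt.Sprintf(format, val, sizes[int(e)])
}

func main() {
	si := []string{"B", "KB", "MB", "GB", "TB", "PB", "EB"}
	iec := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
	fmt.Println(humanate(82854982, 1000, si))  // 83MB
	fmt.Println(humanate(82854982, 1024, iec)) // 79MiB
}
```

The same input renders differently per base, which is why the package exposes both `Bytes` (SI) and `IBytes` (IEC) over one helper.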
144 Godeps/_workspace/src/github.com/dustin/go-humanize/bytes_test.go generated vendored
@@ -1,144 +0,0 @@
package humanize

import (
	"testing"
)

func TestByteParsing(t *testing.T) {
	tests := []struct {
		in  string
		exp uint64
	}{
		{"42", 42},
		{"42MB", 42000000},
		{"42MiB", 44040192},
		{"42mb", 42000000},
		{"42mib", 44040192},
		{"42MIB", 44040192},
		{"42 MB", 42000000},
		{"42 MiB", 44040192},
		{"42 mb", 42000000},
		{"42 mib", 44040192},
		{"42 MIB", 44040192},
		{"42.5MB", 42500000},
		{"42.5MiB", 44564480},
		{"42.5 MB", 42500000},
		{"42.5 MiB", 44564480},
		// No need to say B
		{"42M", 42000000},
		{"42Mi", 44040192},
		{"42m", 42000000},
		{"42mi", 44040192},
		{"42MI", 44040192},
		{"42 M", 42000000},
		{"42 Mi", 44040192},
		{"42 m", 42000000},
		{"42 mi", 44040192},
		{"42 MI", 44040192},
		{"42.5M", 42500000},
		{"42.5Mi", 44564480},
		{"42.5 M", 42500000},
		{"42.5 Mi", 44564480},
		// Large testing, breaks when too much larger than
		// this.
		{"12.5 EB", uint64(12.5 * float64(EByte))},
		{"12.5 E", uint64(12.5 * float64(EByte))},
		{"12.5 EiB", uint64(12.5 * float64(EiByte))},
	}

	for _, p := range tests {
		got, err := ParseBytes(p.in)
		if err != nil {
			t.Errorf("Couldn't parse %v: %v", p.in, err)
		}
		if got != p.exp {
			t.Errorf("Expected %v for %v, got %v",
				p.exp, p.in, got)
		}
	}
}

func TestByteErrors(t *testing.T) {
	got, err := ParseBytes("84 JB")
	if err == nil {
		t.Errorf("Expected error, got %v", got)
	}
	got, err = ParseBytes("")
	if err == nil {
		t.Errorf("Expected error parsing nothing")
	}
	got, err = ParseBytes("16 EiB")
	if err == nil {
		t.Errorf("Expected error, got %v", got)
	}
}

func TestBytes(t *testing.T) {
	testList{
		{"bytes(0)", Bytes(0), "0B"},
		{"bytes(1)", Bytes(1), "1B"},
		{"bytes(803)", Bytes(803), "803B"},
		{"bytes(999)", Bytes(999), "999B"},

		{"bytes(1024)", Bytes(1024), "1.0KB"},
		{"bytes(9999)", Bytes(9999), "10KB"},
		{"bytes(1MB - 1)", Bytes(MByte - Byte), "1000KB"},

		{"bytes(1MB)", Bytes(1024 * 1024), "1.0MB"},
		{"bytes(1GB - 1K)", Bytes(GByte - KByte), "1000MB"},

		{"bytes(1GB)", Bytes(GByte), "1.0GB"},
		{"bytes(1TB - 1M)", Bytes(TByte - MByte), "1000GB"},
		{"bytes(10MB)", Bytes(9999 * 1000), "10MB"},

		{"bytes(1TB)", Bytes(TByte), "1.0TB"},
		{"bytes(1PB - 1T)", Bytes(PByte - TByte), "999TB"},

		{"bytes(1PB)", Bytes(PByte), "1.0PB"},
		{"bytes(1PB - 1T)", Bytes(EByte - PByte), "999PB"},

		{"bytes(1EB)", Bytes(EByte), "1.0EB"},
		// Overflows.
		// {"bytes(1EB - 1P)", Bytes((KByte*EByte)-PByte), "1023EB"},

		{"bytes(0)", IBytes(0), "0B"},
		{"bytes(1)", IBytes(1), "1B"},
		{"bytes(803)", IBytes(803), "803B"},
		{"bytes(1023)", IBytes(1023), "1023B"},

		{"bytes(1024)", IBytes(1024), "1.0KiB"},
		{"bytes(1MB - 1)", IBytes(MiByte - IByte), "1024KiB"},

		{"bytes(1MB)", IBytes(1024 * 1024), "1.0MiB"},
		{"bytes(1GB - 1K)", IBytes(GiByte - KiByte), "1024MiB"},

		{"bytes(1GB)", IBytes(GiByte), "1.0GiB"},
		{"bytes(1TB - 1M)", IBytes(TiByte - MiByte), "1024GiB"},

		{"bytes(1TB)", IBytes(TiByte), "1.0TiB"},
		{"bytes(1PB - 1T)", IBytes(PiByte - TiByte), "1023TiB"},

		{"bytes(1PB)", IBytes(PiByte), "1.0PiB"},
		{"bytes(1PB - 1T)", IBytes(EiByte - PiByte), "1023PiB"},

		{"bytes(1EiB)", IBytes(EiByte), "1.0EiB"},
		// Overflows.
		// {"bytes(1EB - 1P)", IBytes((KIByte*EIByte)-PiByte), "1023EB"},

		{"bytes(5.5GiB)", IBytes(5.5 * GiByte), "5.5GiB"},

		{"bytes(5.5GB)", Bytes(5.5 * GByte), "5.5GB"},
	}.validate(t)
}

func BenchmarkParseBytes(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ParseBytes("16.5GB")
	}
}

func BenchmarkBytes(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Bytes(16.5 * GByte)
	}
}
101 Godeps/_workspace/src/github.com/dustin/go-humanize/comma.go generated vendored
@@ -1,101 +0,0 @@
package humanize

import (
	"bytes"
	"math/big"
	"strconv"
	"strings"
)

// Comma produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Comma(834142) -> 834,142
func Comma(v int64) string {
	sign := ""
	if v < 0 {
		sign = "-"
		v = 0 - v
	}

	parts := []string{"", "", "", "", "", "", "", ""}
	j := len(parts) - 1

	for v > 999 {
		parts[j] = strconv.FormatInt(v%1000, 10)
		switch len(parts[j]) {
		case 2:
			parts[j] = "0" + parts[j]
		case 1:
			parts[j] = "00" + parts[j]
		}
		v = v / 1000
		j--
	}
	parts[j] = strconv.Itoa(int(v))
	return sign + strings.Join(parts[j:len(parts)], ",")
}

// Commaf produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Comma(834142.32) -> 834,142.32
func Commaf(v float64) string {
	buf := &bytes.Buffer{}
	if v < 0 {
		buf.Write([]byte{'-'})
		v = 0 - v
	}

	comma := []byte{','}

	parts := strings.Split(strconv.FormatFloat(v, 'f', -1, 64), ".")
	pos := 0
	if len(parts[0])%3 != 0 {
		pos += len(parts[0]) % 3
		buf.WriteString(parts[0][:pos])
		buf.Write(comma)
	}
	for ; pos < len(parts[0]); pos += 3 {
		buf.WriteString(parts[0][pos : pos+3])
		buf.Write(comma)
	}
	buf.Truncate(buf.Len() - 1)

	if len(parts) > 1 {
		buf.Write([]byte{'.'})
		buf.WriteString(parts[1])
	}
	return buf.String()
}

// BigComma produces a string form of the given big.Int in base 10
// with commas after every three orders of magnitude.
func BigComma(b *big.Int) string {
	sign := ""
	if b.Sign() < 0 {
		sign = "-"
		b.Abs(b)
	}

	athousand := big.NewInt(1000)
	c := (&big.Int{}).Set(b)
	_, m := oom(c, athousand)
	parts := make([]string, m+1)
	j := len(parts) - 1

	mod := &big.Int{}
	for b.Cmp(athousand) >= 0 {
		b.DivMod(b, athousand, mod)
		parts[j] = strconv.FormatInt(mod.Int64(), 10)
		switch len(parts[j]) {
		case 2:
			parts[j] = "0" + parts[j]
		case 1:
			parts[j] = "00" + parts[j]
		}
		j--
	}
	parts[j] = strconv.Itoa(int(b.Int64()))
	return sign + strings.Join(parts[j:len(parts)], ",")
}
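The grouping trick in the removed `Comma` above (take the value modulo 1000, zero-pad every group except the leading one, join with commas) can be condensed into a stdlib-only sketch. This is an illustration of the technique, not the package code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// comma groups base-10 digits of v in threes, zero-padding every
// group except the leading one, as the removed humanize.Comma does.
func comma(v int64) string {
	sign := ""
	if v < 0 {
		sign = "-"
		v = -v
	}
	var parts []string
	for v > 999 {
		// Each inner group must keep its leading zeros, e.g. 10010000
		// splits into "10", "010", "000".
		parts = append([]string{fmt.Sprintf("%03d", v%1000)}, parts...)
		v /= 1000
	}
	parts = append([]string{strconv.FormatInt(v, 10)}, parts...)
	return sign + strings.Join(parts, ",")
}

func main() {
	fmt.Println(comma(834142))    // 834,142
	fmt.Println(comma(-10010000)) // -10,010,000
}
```

The zero-padding is the subtle part: without it, 10010000 would render as "10,10,0" instead of "10,010,000".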
134 Godeps/_workspace/src/github.com/dustin/go-humanize/comma_test.go generated vendored
@@ -1,134 +0,0 @@
package humanize

import (
	"math"
	"math/big"
	"testing"
)

func TestCommas(t *testing.T) {
	testList{
		{"0", Comma(0), "0"},
		{"10", Comma(10), "10"},
		{"100", Comma(100), "100"},
		{"1,000", Comma(1000), "1,000"},
		{"10,000", Comma(10000), "10,000"},
		{"100,000", Comma(100000), "100,000"},
		{"10,000,000", Comma(10000000), "10,000,000"},
		{"10,100,000", Comma(10100000), "10,100,000"},
		{"10,010,000", Comma(10010000), "10,010,000"},
		{"10,001,000", Comma(10001000), "10,001,000"},
		{"123,456,789", Comma(123456789), "123,456,789"},
		{"maxint", Comma(9.223372e+18), "9,223,372,000,000,000,000"},
		{"minint", Comma(-9.223372e+18), "-9,223,372,000,000,000,000"},
		{"-123,456,789", Comma(-123456789), "-123,456,789"},
		{"-10,100,000", Comma(-10100000), "-10,100,000"},
		{"-10,010,000", Comma(-10010000), "-10,010,000"},
		{"-10,001,000", Comma(-10001000), "-10,001,000"},
		{"-10,000,000", Comma(-10000000), "-10,000,000"},
		{"-100,000", Comma(-100000), "-100,000"},
		{"-10,000", Comma(-10000), "-10,000"},
		{"-1,000", Comma(-1000), "-1,000"},
		{"-100", Comma(-100), "-100"},
		{"-10", Comma(-10), "-10"},
	}.validate(t)
}

func TestCommafs(t *testing.T) {
	testList{
		{"0", Commaf(0), "0"},
		{"10.11", Commaf(10.11), "10.11"},
		{"100", Commaf(100), "100"},
		{"1,000", Commaf(1000), "1,000"},
		{"10,000", Commaf(10000), "10,000"},
		{"100,000", Commaf(100000), "100,000"},
		{"834,142.32", Commaf(834142.32), "834,142.32"},
		{"10,000,000", Commaf(10000000), "10,000,000"},
		{"10,100,000", Commaf(10100000), "10,100,000"},
		{"10,010,000", Commaf(10010000), "10,010,000"},
		{"10,001,000", Commaf(10001000), "10,001,000"},
		{"123,456,789", Commaf(123456789), "123,456,789"},
		{"maxf64", Commaf(math.MaxFloat64), "179,769,313,486,231,570,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000"},
		{"minf64", Commaf(math.SmallestNonzeroFloat64), "0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005"},
		{"-123,456,789", Commaf(-123456789), "-123,456,789"},
		{"-10,100,000", Commaf(-10100000), "-10,100,000"},
		{"-10,010,000", Commaf(-10010000), "-10,010,000"},
		{"-10,001,000", Commaf(-10001000), "-10,001,000"},
		{"-10,000,000", Commaf(-10000000), "-10,000,000"},
		{"-100,000", Commaf(-100000), "-100,000"},
		{"-10,000", Commaf(-10000), "-10,000"},
		{"-1,000", Commaf(-1000), "-1,000"},
		{"-100.11", Commaf(-100.11), "-100.11"},
		{"-10", Commaf(-10), "-10"},
	}.validate(t)
}

func BenchmarkCommas(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Comma(1234567890)
	}
}

func BenchmarkCommaf(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Commaf(1234567890.83584)
	}
}

func BenchmarkBigCommas(b *testing.B) {
	for i := 0; i < b.N; i++ {
		BigComma(big.NewInt(1234567890))
	}
}

func bigComma(i int64) string {
	return BigComma(big.NewInt(i))
}

func TestBigCommas(t *testing.T) {
	testList{
		{"0", bigComma(0), "0"},
		{"10", bigComma(10), "10"},
		{"100", bigComma(100), "100"},
		{"1,000", bigComma(1000), "1,000"},
		{"10,000", bigComma(10000), "10,000"},
		{"100,000", bigComma(100000), "100,000"},
		{"10,000,000", bigComma(10000000), "10,000,000"},
		{"10,100,000", bigComma(10100000), "10,100,000"},
		{"10,010,000", bigComma(10010000), "10,010,000"},
		{"10,001,000", bigComma(10001000), "10,001,000"},
		{"123,456,789", bigComma(123456789), "123,456,789"},
		{"maxint", bigComma(9.223372e+18), "9,223,372,000,000,000,000"},
		{"minint", bigComma(-9.223372e+18), "-9,223,372,000,000,000,000"},
		{"-123,456,789", bigComma(-123456789), "-123,456,789"},
		{"-10,100,000", bigComma(-10100000), "-10,100,000"},
		{"-10,010,000", bigComma(-10010000), "-10,010,000"},
		{"-10,001,000", bigComma(-10001000), "-10,001,000"},
		{"-10,000,000", bigComma(-10000000), "-10,000,000"},
		{"-100,000", bigComma(-100000), "-100,000"},
		{"-10,000", bigComma(-10000), "-10,000"},
		{"-1,000", bigComma(-1000), "-1,000"},
		{"-100", bigComma(-100), "-100"},
		{"-10", bigComma(-10), "-10"},
	}.validate(t)
}

func TestVeryBigCommas(t *testing.T) {
	tests := []struct{ in, exp string }{
		{
			"84889279597249724975972597249849757294578485",
			"84,889,279,597,249,724,975,972,597,249,849,757,294,578,485",
		},
		{
			"-84889279597249724975972597249849757294578485",
			"-84,889,279,597,249,724,975,972,597,249,849,757,294,578,485",
		},
	}
	for _, test := range tests {
		n, _ := (&big.Int{}).SetString(test.in, 10)
		got := BigComma(n)
		if test.exp != got {
			t.Errorf("Expected %q, got %q", test.exp, got)
		}
	}
}
18 Godeps/_workspace/src/github.com/dustin/go-humanize/common_test.go generated vendored
@@ -1,18 +0,0 @@
package humanize

import (
	"testing"
)

type testList []struct {
	name, got, exp string
}

func (tl testList) validate(t *testing.T) {
	for _, test := range tl {
		if test.got != test.exp {
			t.Errorf("On %v, expected '%v', but got '%v'",
				test.name, test.exp, test.got)
		}
	}
}
23 Godeps/_workspace/src/github.com/dustin/go-humanize/ftoa.go generated vendored
@@ -1,23 +0,0 @@
package humanize

import "strconv"

func stripTrailingZeros(s string) string {
	offset := len(s) - 1
	for offset > 0 {
		if s[offset] == '.' {
			offset--
			break
		}
		if s[offset] != '0' {
			break
		}
		offset--
	}
	return s[:offset+1]
}

// Ftoa converts a float to a string with no trailing zeros.
func Ftoa(num float64) string {
	return stripTrailingZeros(strconv.FormatFloat(num, 'f', 6, 64))
}
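The removed `stripTrailingZeros` walks backwards over the string by hand; since `Ftoa` always feeds it `FormatFloat` output with six decimals, the same effect can be sketched with `strings.TrimRight`. An equivalent illustration under that assumption, not the original implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ftoa formats with six decimals, then trims trailing zeros and,
// if everything after it was zeros, the decimal point itself.
func ftoa(num float64) string {
	s := strconv.FormatFloat(num, 'f', 6, 64)
	s = strings.TrimRight(s, "0")
	return strings.TrimRight(s, ".")
}

func main() {
	fmt.Println(ftoa(200.02)) // 200.02
	fmt.Println(ftoa(2))      // 2
}
```

Trimming in two passes matters: trimming "0" from "2.000000" leaves "2.", so the dangling point must be removed separately.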
55 Godeps/_workspace/src/github.com/dustin/go-humanize/ftoa_test.go generated vendored
@@ -1,55 +0,0 @@
package humanize

import (
	"fmt"
	"regexp"
	"strconv"
	"testing"
)

func TestFtoa(t *testing.T) {
	testList{
		{"200", Ftoa(200), "200"},
		{"2", Ftoa(2), "2"},
		{"2.2", Ftoa(2.2), "2.2"},
		{"2.02", Ftoa(2.02), "2.02"},
		{"200.02", Ftoa(200.02), "200.02"},
	}.validate(t)
}

func BenchmarkFtoaRegexTrailing(b *testing.B) {
	trailingZerosRegex := regexp.MustCompile(`\.?0+$`)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		trailingZerosRegex.ReplaceAllString("2.00000", "")
		trailingZerosRegex.ReplaceAllString("2.0000", "")
		trailingZerosRegex.ReplaceAllString("2.000", "")
		trailingZerosRegex.ReplaceAllString("2.00", "")
		trailingZerosRegex.ReplaceAllString("2.0", "")
		trailingZerosRegex.ReplaceAllString("2", "")
	}
}

func BenchmarkFtoaFunc(b *testing.B) {
	for i := 0; i < b.N; i++ {
		stripTrailingZeros("2.00000")
		stripTrailingZeros("2.0000")
		stripTrailingZeros("2.000")
		stripTrailingZeros("2.00")
		stripTrailingZeros("2.0")
		stripTrailingZeros("2")
	}
}

func BenchmarkFmtF(b *testing.B) {
	for i := 0; i < b.N; i++ {
		fmt.Sprintf("%f", 2.03584)
	}
}

func BenchmarkStrconvF(b *testing.B) {
	for i := 0; i < b.N; i++ {
		strconv.FormatFloat(2.03584, 'f', 6, 64)
	}
}
8 Godeps/_workspace/src/github.com/dustin/go-humanize/humanize.go generated vendored
@@ -1,8 +0,0 @@
/*
Package humanize converts boring ugly numbers to human-friendly strings and back.

Durations can be turned into strings such as "3 days ago", numbers
representing sizes like 82854982 into useful strings like, "83MB" or
"79MiB" (whichever you prefer).
*/
package humanize
192 Godeps/_workspace/src/github.com/dustin/go-humanize/number.go generated vendored
@@ -1,192 +0,0 @@
package humanize

/*
Slightly adapted from the source to fit go-humanize.

Author: https://github.com/gorhill
Source: https://gist.github.com/gorhill/5285193

*/

import (
	"math"
	"strconv"
)

var (
	renderFloatPrecisionMultipliers = [...]float64{
		1,
		10,
		100,
		1000,
		10000,
		100000,
		1000000,
		10000000,
		100000000,
		1000000000,
	}

	renderFloatPrecisionRounders = [...]float64{
		0.5,
		0.05,
		0.005,
		0.0005,
		0.00005,
		0.000005,
		0.0000005,
		0.00000005,
		0.000000005,
		0.0000000005,
	}
)

// FormatFloat produces a formatted number as string based on the following user-specified criteria:
// * thousands separator
// * decimal separator
// * decimal precision
//
// Usage: s := RenderFloat(format, n)
// The format parameter tells how to render the number n.
//
// See examples: http://play.golang.org/p/LXc1Ddm1lJ
//
// Examples of format strings, given n = 12345.6789:
// "#,###.##" => "12,345.67"
// "#,###." => "12,345"
// "#,###" => "12345,678"
// "#\u202F###,##" => "12 345,68"
// "#.###,###### => 12.345,678900
// "" (aka default format) => 12,345.67
//
// The highest precision allowed is 9 digits after the decimal symbol.
// There is also a version for integer number, FormatInteger(),
// which is convenient for calls within template.
func FormatFloat(format string, n float64) string {
	// Special cases:
	//   NaN = "NaN"
	//   +Inf = "+Infinity"
	//   -Inf = "-Infinity"
	if math.IsNaN(n) {
		return "NaN"
	}
	if n > math.MaxFloat64 {
		return "Infinity"
	}
	if n < -math.MaxFloat64 {
		return "-Infinity"
	}

	// default format
	precision := 2
	decimalStr := "."
	thousandStr := ","
	positiveStr := ""
	negativeStr := "-"

	if len(format) > 0 {
		format := []rune(format)

		// If there is an explicit format directive,
		// then default values are these:
		precision = 9
		thousandStr = ""

		// collect indices of meaningful formatting directives
		formatIndx := []int{}
		for i, char := range format {
			if char != '#' && char != '0' {
				formatIndx = append(formatIndx, i)
			}
		}

		if len(formatIndx) > 0 {
			// Directive at index 0:
			//   Must be a '+'
			//   Raise an error if not the case
			// index: 0123456789
			//        +0.000,000
			//        +000,000.0
			//        +0000.00
			//        +0000
			if formatIndx[0] == 0 {
				if format[formatIndx[0]] != '+' {
					panic("RenderFloat(): invalid positive sign directive")
				}
				positiveStr = "+"
				formatIndx = formatIndx[1:]
			}

			// Two directives:
			//   First is thousands separator
			//   Raise an error if not followed by 3-digit
			// 0123456789
			// 0.000,000
			// 000,000.00
			if len(formatIndx) == 2 {
				if (formatIndx[1] - formatIndx[0]) != 4 {
					panic("RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers")
				}
				thousandStr = string(format[formatIndx[0]])
				formatIndx = formatIndx[1:]
			}

			// One directive:
			//   Directive is decimal separator
			//   The number of digit-specifier following the separator indicates wanted precision
			// 0123456789
			// 0.00
			// 000,0000
			if len(formatIndx) == 1 {
				decimalStr = string(format[formatIndx[0]])
				precision = len(format) - formatIndx[0] - 1
			}
		}
	}

	// generate sign part
	var signStr string
	if n >= 0.000000001 {
		signStr = positiveStr
	} else if n <= -0.000000001 {
		signStr = negativeStr
		n = -n
	} else {
		signStr = ""
		n = 0.0
	}

	// split number into integer and fractional parts
	intf, fracf := math.Modf(n + renderFloatPrecisionRounders[precision])

	// generate integer part string
	intStr := strconv.Itoa(int(intf))

	// add thousand separator if required
	if len(thousandStr) > 0 {
		for i := len(intStr); i > 3; {
			i -= 3
			intStr = intStr[:i] + thousandStr + intStr[i:]
		}
	}

	// no fractional part, we can leave now
	if precision == 0 {
		return signStr + intStr
	}

	// generate fractional part
	fracStr := strconv.Itoa(int(fracf * renderFloatPrecisionMultipliers[precision]))
	// may need padding
	if len(fracStr) < precision {
		fracStr = "000000000000000"[:precision-len(fracStr)] + fracStr
	}

	return signStr + intStr + decimalStr + fracStr
}

// FormatInteger produces a formatted number as string.
// See FormatFloat.
func FormatInteger(format string, n int) string {
	return FormatFloat(format, float64(n))
}
78 Godeps/_workspace/src/github.com/dustin/go-humanize/number_test.go generated vendored
@@ -1,78 +0,0 @@
package humanize

import (
	"math"
	"testing"
)

type TestStruct struct {
	name      string
	format    string
	num       float64
	formatted string
}

func TestFormatFloat(t *testing.T) {
	tests := []TestStruct{
		{"default", "", 12345.6789, "12,345.68"},
		{"#", "#", 12345.6789, "12345.678900000"},
		{"#.", "#.", 12345.6789, "12346"},
		{"#,#", "#,#", 12345.6789, "12345,7"},
		{"#,##", "#,##", 12345.6789, "12345,68"},
		{"#,###", "#,###", 12345.6789, "12345,679"},
		{"#,###.", "#,###.", 12345.6789, "12,346"},
		{"#,###.##", "#,###.##", 12345.6789, "12,345.68"},
		{"#,###.###", "#,###.###", 12345.6789, "12,345.679"},
		{"#,###.####", "#,###.####", 12345.6789, "12,345.6789"},
		{"#.###,######", "#.###,######", 12345.6789, "12.345,678900"},
		{"#\u202f###,##", "#\u202f###,##", 12345.6789, "12 345,68"},

		// special cases
		{"NaN", "#", math.NaN(), "NaN"},
		{"+Inf", "#", math.Inf(1), "Infinity"},
		{"-Inf", "#", math.Inf(-1), "-Infinity"},
		{"signStr <= -0.000000001", "", -0.000000002, "-0.00"},
		{"signStr = 0", "", 0, "0.00"},
		{"Format directive must start with +", "+000", 12345.6789, "+12345.678900000"},
	}

	for _, test := range tests {
		got := FormatFloat(test.format, test.num)
		if got != test.formatted {
			t.Errorf("On %v (%v, %v), got %v, wanted %v",
				test.name, test.format, test.num, got, test.formatted)
		}
	}
	// Test a single integer
	got := FormatInteger("#", 12345)
	if got != "12345.000000000" {
		t.Errorf("On %v (%v, %v), got %v, wanted %v",
			"integerTest", "#", 12345, got, "12345.000000000")
	}
	// Test the things that could panic
	panictests := []TestStruct{
		{"RenderFloat(): invalid positive sign directive", "-", 12345.6789, "12,345.68"},
		{"RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers", "0.01", 12345.6789, "12,345.68"},
	}
	for _, test := range panictests {
		didPanic := false
		var message interface{}
		func() {

			defer func() {
				if message = recover(); message != nil {
					didPanic = true
				}
			}()

			// call the target function
			_ = FormatFloat(test.format, test.num)

		}()
		if didPanic != true {
			t.Errorf("On %v, should have panic and did not.",
				test.name)
		}
	}

}
25 Godeps/_workspace/src/github.com/dustin/go-humanize/ordinals.go (generated, vendored)
@@ -1,25 +0,0 @@
package humanize

import "strconv"

// Ordinal gives you the input number in a rank/ordinal format.
//
// Ordinal(3) -> 3rd
func Ordinal(x int) string {
	suffix := "th"
	switch x % 10 {
	case 1:
		if x%100 != 11 {
			suffix = "st"
		}
	case 2:
		if x%100 != 12 {
			suffix = "nd"
		}
	case 3:
		if x%100 != 13 {
			suffix = "rd"
		}
	}
	return strconv.Itoa(x) + suffix
}
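The suffix logic in `Ordinal` above is small enough to restate standalone. This is a stdlib-only sketch of the same switch (not the vendored package itself), illustrating the 11/12/13 "teen" exceptions:

```go
package main

import (
	"fmt"
	"strconv"
)

// ordinal mirrors humanize.Ordinal: the suffix depends on the last digit,
// except that 11, 12 and 13 (and 111, 112, ...) always take "th".
func ordinal(x int) string {
	suffix := "th"
	switch x % 10 {
	case 1:
		if x%100 != 11 {
			suffix = "st"
		}
	case 2:
		if x%100 != 12 {
			suffix = "nd"
		}
	case 3:
		if x%100 != 13 {
			suffix = "rd"
		}
	}
	return strconv.Itoa(x) + suffix
}

func main() {
	for _, n := range []int{1, 2, 3, 11, 21, 103} {
		fmt.Println(ordinal(n)) // 1st, 2nd, 3rd, 11th, 21st, 103rd
	}
}
```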
22 Godeps/_workspace/src/github.com/dustin/go-humanize/ordinals_test.go (generated, vendored)
@@ -1,22 +0,0 @@
package humanize

import (
	"testing"
)

func TestOrdinals(t *testing.T) {
	testList{
		{"0", Ordinal(0), "0th"},
		{"1", Ordinal(1), "1st"},
		{"2", Ordinal(2), "2nd"},
		{"3", Ordinal(3), "3rd"},
		{"4", Ordinal(4), "4th"},
		{"10", Ordinal(10), "10th"},
		{"11", Ordinal(11), "11th"},
		{"12", Ordinal(12), "12th"},
		{"13", Ordinal(13), "13th"},
		{"101", Ordinal(101), "101st"},
		{"102", Ordinal(102), "102nd"},
		{"103", Ordinal(103), "103rd"},
	}.validate(t)
}
110 Godeps/_workspace/src/github.com/dustin/go-humanize/si.go (generated, vendored)
@@ -1,110 +0,0 @@
package humanize

import (
	"errors"
	"math"
	"regexp"
	"strconv"
)

var siPrefixTable = map[float64]string{
	-24: "y", // yocto
	-21: "z", // zepto
	-18: "a", // atto
	-15: "f", // femto
	-12: "p", // pico
	-9:  "n", // nano
	-6:  "µ", // micro
	-3:  "m", // milli
	0:   "",
	3:   "k", // kilo
	6:   "M", // mega
	9:   "G", // giga
	12:  "T", // tera
	15:  "P", // peta
	18:  "E", // exa
	21:  "Z", // zetta
	24:  "Y", // yotta
}

var revSIPrefixTable = revfmap(siPrefixTable)

// revfmap reverses the map and precomputes the power multiplier
func revfmap(in map[float64]string) map[string]float64 {
	rv := map[string]float64{}
	for k, v := range in {
		rv[v] = math.Pow(10, k)
	}
	return rv
}

var riParseRegex *regexp.Regexp

func init() {
	ri := `^([0-9.]+)([`
	for _, v := range siPrefixTable {
		ri += v
	}
	ri += `]?)(.*)`

	riParseRegex = regexp.MustCompile(ri)
}

// ComputeSI finds the most appropriate SI prefix for the given number
// and returns the prefix along with the value adjusted to be within
// that prefix.
//
// See also: SI, ParseSI.
//
// e.g. ComputeSI(2.2345e-12) -> (2.2345, "p")
func ComputeSI(input float64) (float64, string) {
	if input == 0 {
		return 0, ""
	}
	exponent := math.Floor(logn(input, 10))
	exponent = math.Floor(exponent/3) * 3

	value := input / math.Pow(10, exponent)

	// Handle special case where value is exactly 1000.0
	// Should return 1M instead of 1000k
	if value == 1000.0 {
		exponent += 3
		value = input / math.Pow(10, exponent)
	}

	prefix := siPrefixTable[exponent]
	return value, prefix
}

// SI returns a string with default formatting.
//
// SI uses Ftoa to format float value, removing trailing zeros.
//
// See also: ComputeSI, ParseSI.
//
// e.g. SI(1000000, B) -> 1MB
// e.g. SI(2.2345e-12, "F") -> 2.2345pF
func SI(input float64, unit string) string {
	value, prefix := ComputeSI(input)
	return Ftoa(value) + prefix + unit
}

var errInvalid = errors.New("invalid input")

// ParseSI parses an SI string back into the number and unit.
//
// See also: SI, ComputeSI.
//
// e.g. ParseSI(2.2345pF) -> (2.2345e-12, "F", nil)
func ParseSI(input string) (float64, string, error) {
	found := riParseRegex.FindStringSubmatch(input)
	if len(found) != 4 {
		return 0, "", errInvalid
	}
	mag := revSIPrefixTable[found[2]]
	unit := found[3]

	base, err := strconv.ParseFloat(found[1], 64)
	return base * mag, unit, err
}
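`ComputeSI` above leans on `logn` and `Ftoa`, which live elsewhere in the package and are not part of this diff. A self-contained sketch of the same idea, under the assumption that a plain `math.Log10` and a trailing-zero trim are acceptable stand-ins, looks like this (the `computeSI`/`ftoa` names here are illustrative, not the package's):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

var prefixes = map[float64]string{
	-12: "p", -9: "n", -6: "µ", -3: "m", 0: "",
	3: "k", 6: "M", 9: "G", 12: "T",
}

// computeSI restates the core of ComputeSI: round the base-10 exponent
// down to a multiple of 3 and scale the value into that band.
func computeSI(input float64) (float64, string) {
	if input == 0 {
		return 0, ""
	}
	exponent := math.Floor(math.Log10(input))
	exponent = math.Floor(exponent/3) * 3
	value := input / math.Pow(10, exponent)
	// exactly 1000 rolls over into the next prefix (1M, not 1000k)
	if value == 1000.0 {
		exponent += 3
		value = input / math.Pow(10, exponent)
	}
	return value, prefixes[exponent]
}

// ftoa trims trailing zeros, roughly what the package's Ftoa does.
func ftoa(v float64) string {
	return strings.TrimRight(strings.TrimRight(strconv.FormatFloat(v, 'f', 6, 64), "0"), ".")
}

func main() {
	v, p := computeSI(2.2345e-12)
	fmt.Println(ftoa(v) + p + "F") // 2.2345pF
	v, p = computeSI(1e6)
	fmt.Println(ftoa(v) + p + "F") // 1MF
}
```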
98 Godeps/_workspace/src/github.com/dustin/go-humanize/si_test.go (generated, vendored)
@@ -1,98 +0,0 @@
package humanize

import (
	"math"
	"testing"
)

func TestSI(t *testing.T) {
	tests := []struct {
		name      string
		num       float64
		formatted string
	}{
		{"e-24", 1e-24, "1yF"},
		{"e-21", 1e-21, "1zF"},
		{"e-18", 1e-18, "1aF"},
		{"e-15", 1e-15, "1fF"},
		{"e-12", 1e-12, "1pF"},
		{"e-12", 2.2345e-12, "2.2345pF"},
		{"e-12", 2.23e-12, "2.23pF"},
		{"e-11", 2.23e-11, "22.3pF"},
		{"e-10", 2.2e-10, "220pF"},
		{"e-9", 2.2e-9, "2.2nF"},
		{"e-8", 2.2e-8, "22nF"},
		{"e-7", 2.2e-7, "220nF"},
		{"e-6", 2.2e-6, "2.2µF"},
		{"e-6", 1e-6, "1µF"},
		{"e-5", 2.2e-5, "22µF"},
		{"e-4", 2.2e-4, "220µF"},
		{"e-3", 2.2e-3, "2.2mF"},
		{"e-2", 2.2e-2, "22mF"},
		{"e-1", 2.2e-1, "220mF"},
		{"e+0", 2.2e-0, "2.2F"},
		{"e+0", 2.2, "2.2F"},
		{"e+1", 2.2e+1, "22F"},
		{"0", 0, "0F"},
		{"e+1", 22, "22F"},
		{"e+2", 2.2e+2, "220F"},
		{"e+2", 220, "220F"},
		{"e+3", 2.2e+3, "2.2kF"},
		{"e+3", 2200, "2.2kF"},
		{"e+4", 2.2e+4, "22kF"},
		{"e+4", 22000, "22kF"},
		{"e+5", 2.2e+5, "220kF"},
		{"e+6", 2.2e+6, "2.2MF"},
		{"e+6", 1e+6, "1MF"},
		{"e+7", 2.2e+7, "22MF"},
		{"e+8", 2.2e+8, "220MF"},
		{"e+9", 2.2e+9, "2.2GF"},
		{"e+10", 2.2e+10, "22GF"},
		{"e+11", 2.2e+11, "220GF"},
		{"e+12", 2.2e+12, "2.2TF"},
		{"e+15", 2.2e+15, "2.2PF"},
		{"e+18", 2.2e+18, "2.2EF"},
		{"e+21", 2.2e+21, "2.2ZF"},
		{"e+24", 2.2e+24, "2.2YF"},

		// special case
		{"1F", 1000 * 1000, "1MF"},
		{"1F", 1e6, "1MF"},
	}

	for _, test := range tests {
		got := SI(test.num, "F")
		if got != test.formatted {
			t.Errorf("On %v (%v), got %v, wanted %v",
				test.name, test.num, got, test.formatted)
		}

		gotf, gotu, err := ParseSI(test.formatted)
		if err != nil {
			t.Errorf("Error parsing %v (%v): %v", test.name, test.formatted, err)
			continue
		}

		if math.Abs(1-(gotf/test.num)) > 0.01 {
			t.Errorf("On %v (%v), got %v, wanted %v (±%v)",
				test.name, test.formatted, gotf, test.num,
				math.Abs(1-(gotf/test.num)))
		}
		if gotu != "F" {
			t.Errorf("On %v (%v), expected unit F, got %v",
				test.name, test.formatted, gotu)
		}
	}

	// Parse error
	gotf, gotu, err := ParseSI("x1.21JW") // 1.21 jigga whats
	if err == nil {
		t.Errorf("Expected error on x1.21JW, got %v %v", gotf, gotu)
	}
}

func BenchmarkParseSI(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ParseSI("2.2346ZB")
	}
}
91 Godeps/_workspace/src/github.com/dustin/go-humanize/times.go (generated, vendored)
@@ -1,91 +0,0 @@
package humanize

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// Seconds-based time units
const (
	Minute   = 60
	Hour     = 60 * Minute
	Day      = 24 * Hour
	Week     = 7 * Day
	Month    = 30 * Day
	Year     = 12 * Month
	LongTime = 37 * Year
)

// Time formats a time into a relative string.
//
// Time(someT) -> "3 weeks ago"
func Time(then time.Time) string {
	return RelTime(then, time.Now(), "ago", "from now")
}

var magnitudes = []struct {
	d      int64
	format string
	divby  int64
}{
	{1, "now", 1},
	{2, "1 second %s", 1},
	{Minute, "%d seconds %s", 1},
	{2 * Minute, "1 minute %s", 1},
	{Hour, "%d minutes %s", Minute},
	{2 * Hour, "1 hour %s", 1},
	{Day, "%d hours %s", Hour},
	{2 * Day, "1 day %s", 1},
	{Week, "%d days %s", Day},
	{2 * Week, "1 week %s", 1},
	{Month, "%d weeks %s", Week},
	{2 * Month, "1 month %s", 1},
	{Year, "%d months %s", Month},
	{18 * Month, "1 year %s", 1},
	{2 * Year, "2 years %s", 1},
	{LongTime, "%d years %s", Year},
	{math.MaxInt64, "a long while %s", 1},
}

// RelTime formats a time into a relative string.
//
// It takes two times and two labels. In addition to the generic time
// delta string (e.g. 5 minutes), the labels are used applied so that
// the label corresponding to the smaller time is applied.
//
// RelTime(timeInPast, timeInFuture, "earlier", "later") -> "3 weeks earlier"
func RelTime(a, b time.Time, albl, blbl string) string {
	lbl := albl
	diff := b.Unix() - a.Unix()

	after := a.After(b)
	if after {
		lbl = blbl
		diff = a.Unix() - b.Unix()
	}

	n := sort.Search(len(magnitudes), func(i int) bool {
		return magnitudes[i].d > diff
	})

	mag := magnitudes[n]
	args := []interface{}{}
	escaped := false
	for _, ch := range mag.format {
		if escaped {
			switch ch {
			case '%':
			case 's':
				args = append(args, lbl)
			case 'd':
				args = append(args, diff/mag.divby)
			}
			escaped = false
		} else {
			escaped = ch == '%'
		}
	}
	return fmt.Sprintf(mag.format, args...)
}
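The interesting step in `RelTime` above is the table lookup: `sort.Search` returns the index of the first magnitude whose threshold exceeds the delta. A trimmed, stdlib-only sketch of that lookup (shortened table; `pick` is an illustrative name, not the package's):

```go
package main

import (
	"fmt"
	"sort"
)

// Seconds-based thresholds, as in the deleted times.go.
const (
	minute = 60
	hour   = 60 * minute
	day    = 24 * hour
)

var mags = []struct {
	d     int64
	label string
}{
	{1, "now"},
	{2, "1 second"},
	{minute, "seconds"},
	{2 * minute, "1 minute"},
	{hour, "minutes"},
	{2 * hour, "1 hour"},
	{day, "hours"},
}

// pick restates the lookup in RelTime: sort.Search finds the first
// magnitude whose threshold d is strictly greater than the delta.
func pick(diff int64) string {
	n := sort.Search(len(mags), func(i int) bool {
		return mags[i].d > diff
	})
	if n == len(mags) { // past the last threshold in this trimmed table
		return "a long while"
	}
	return mags[n].label
}

func main() {
	fmt.Println(pick(0))     // now
	fmt.Println(pick(90))    // 1 minute
	fmt.Println(pick(90000)) // a long while (beyond this trimmed table)
}
```

`sort.Search` needs the predicate to be monotone over the table, which holds here because the thresholds are sorted ascending.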
71 Godeps/_workspace/src/github.com/dustin/go-humanize/times_test.go (generated, vendored)
@@ -1,71 +0,0 @@
package humanize

import (
	"math"
	"testing"
	"time"
)

func TestPast(t *testing.T) {
	now := time.Now().Unix()
	testList{
		{"now", Time(time.Unix(now, 0)), "now"},
		{"1 second ago", Time(time.Unix(now-1, 0)), "1 second ago"},
		{"12 seconds ago", Time(time.Unix(now-12, 0)), "12 seconds ago"},
		{"30 seconds ago", Time(time.Unix(now-30, 0)), "30 seconds ago"},
		{"45 seconds ago", Time(time.Unix(now-45, 0)), "45 seconds ago"},
		{"1 minute ago", Time(time.Unix(now-63, 0)), "1 minute ago"},
		{"15 minutes ago", Time(time.Unix(now-15*Minute, 0)), "15 minutes ago"},
		{"1 hour ago", Time(time.Unix(now-63*Minute, 0)), "1 hour ago"},
		{"2 hours ago", Time(time.Unix(now-2*Hour, 0)), "2 hours ago"},
		{"21 hours ago", Time(time.Unix(now-21*Hour, 0)), "21 hours ago"},
		{"1 day ago", Time(time.Unix(now-26*Hour, 0)), "1 day ago"},
		{"2 days ago", Time(time.Unix(now-49*Hour, 0)), "2 days ago"},
		{"3 days ago", Time(time.Unix(now-3*Day, 0)), "3 days ago"},
		{"1 week ago (1)", Time(time.Unix(now-7*Day, 0)), "1 week ago"},
		{"1 week ago (2)", Time(time.Unix(now-12*Day, 0)), "1 week ago"},
		{"2 weeks ago", Time(time.Unix(now-15*Day, 0)), "2 weeks ago"},
		{"1 month ago", Time(time.Unix(now-39*Day, 0)), "1 month ago"},
		{"3 months ago", Time(time.Unix(now-99*Day, 0)), "3 months ago"},
		{"1 year ago (1)", Time(time.Unix(now-365*Day, 0)), "1 year ago"},
		{"1 year ago (1)", Time(time.Unix(now-400*Day, 0)), "1 year ago"},
		{"2 years ago (1)", Time(time.Unix(now-548*Day, 0)), "2 years ago"},
		{"2 years ago (2)", Time(time.Unix(now-725*Day, 0)), "2 years ago"},
		{"2 years ago (3)", Time(time.Unix(now-800*Day, 0)), "2 years ago"},
		{"3 years ago", Time(time.Unix(now-3*Year, 0)), "3 years ago"},
		{"long ago", Time(time.Unix(now-LongTime, 0)), "a long while ago"},
	}.validate(t)
}

func TestFuture(t *testing.T) {
	now := time.Now().Unix()
	testList{
		{"now", Time(time.Unix(now, 0)), "now"},
		{"1 second from now", Time(time.Unix(now+1, 0)), "1 second from now"},
		{"12 seconds from now", Time(time.Unix(now+12, 0)), "12 seconds from now"},
		{"30 seconds from now", Time(time.Unix(now+30, 0)), "30 seconds from now"},
		{"45 seconds from now", Time(time.Unix(now+45, 0)), "45 seconds from now"},
		{"15 minutes from now", Time(time.Unix(now+15*Minute, 0)), "15 minutes from now"},
		{"2 hours from now", Time(time.Unix(now+2*Hour, 0)), "2 hours from now"},
		{"21 hours from now", Time(time.Unix(now+21*Hour, 0)), "21 hours from now"},
		{"1 day from now", Time(time.Unix(now+26*Hour, 0)), "1 day from now"},
		{"2 days from now", Time(time.Unix(now+49*Hour, 0)), "2 days from now"},
		{"3 days from now", Time(time.Unix(now+3*Day, 0)), "3 days from now"},
		{"1 week from now (1)", Time(time.Unix(now+7*Day, 0)), "1 week from now"},
		{"1 week from now (2)", Time(time.Unix(now+12*Day, 0)), "1 week from now"},
		{"2 weeks from now", Time(time.Unix(now+15*Day, 0)), "2 weeks from now"},
		{"1 month from now", Time(time.Unix(now+30*Day, 0)), "1 month from now"},
		{"1 year from now", Time(time.Unix(now+365*Day, 0)), "1 year from now"},
		{"2 years from now", Time(time.Unix(now+2*Year, 0)), "2 years from now"},
		{"a while from now", Time(time.Unix(now+LongTime, 0)), "a long while from now"},
	}.validate(t)
}

func TestRange(t *testing.T) {
	start := time.Time{}
	end := time.Unix(math.MaxInt64, math.MaxInt64)
	x := RelTime(start, end, "ago", "from now")
	if x != "a long while from now" {
		t.Errorf("Expected a long while from now, got %q", x)
	}
}
24 Godeps/_workspace/src/github.com/facebookgo/atomicfile/.travis.yml (generated, vendored)
@@ -1,24 +0,0 @@
language: go

go:
  - 1.2
  - 1.3

matrix:
  fast_finish: true

before_install:
  - go get -v code.google.com/p/go.tools/cmd/vet
  - go get -v github.com/golang/lint/golint
  - go get -v code.google.com/p/go.tools/cmd/cover

install:
  - go install -race -v std
  - go get -race -t -v ./...
  - go install -race -v ./...

script:
  - go vet ./...
  - $HOME/gopath/bin/golint .
  - go test -cpu=2 -race -v ./...
  - go test -cpu=2 -covermode=atomic ./...
54 Godeps/_workspace/src/github.com/facebookgo/atomicfile/atomicfile.go (generated, vendored)
@@ -1,54 +0,0 @@
// Package atomicfile provides the ability to write a file with an eventual
// rename on Close. This allows for a file to always be in a consistent state
// and never represent an in-progress write.
package atomicfile

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// File behaves like os.File, but does an atomic rename operation at Close.
type File struct {
	*os.File
	path string
}

// New creates a new temporary file that will replace the file at the given
// path when Closed.
func New(path string, mode os.FileMode) (*File, error) {
	f, err := ioutil.TempFile(filepath.Dir(path), filepath.Base(path))
	if err != nil {
		return nil, err
	}
	if err := os.Chmod(f.Name(), mode); err != nil {
		os.Remove(f.Name())
		return nil, err
	}
	return &File{File: f, path: path}, nil
}

// Close the file replacing the configured file.
func (f *File) Close() error {
	if err := f.File.Close(); err != nil {
		return err
	}
	if err := os.Rename(f.Name(), f.path); err != nil {
		return err
	}
	return nil
}

// Abort closes the file and removes it instead of replacing the configured
// file. This is useful if after starting to write to the file you decide you
// don't want it anymore.
func (f *File) Abort() error {
	if err := f.File.Close(); err != nil {
		return err
	}
	if err := os.Remove(f.Name()); err != nil {
		return err
	}
	return nil
}
86 Godeps/_workspace/src/github.com/facebookgo/atomicfile/atomicfile_test.go (generated, vendored)
@@ -1,86 +0,0 @@
package atomicfile_test

import (
	"bytes"
	"io/ioutil"
	"os"
	"testing"

	"github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/facebookgo/atomicfile"
)

func test(t *testing.T, dir, prefix string) {
	t.Parallel()

	tmpfile, err := ioutil.TempFile(dir, prefix)
	if err != nil {
		t.Fatal(err)
	}
	name := tmpfile.Name()

	if err := os.Remove(name); err != nil {
		t.Fatal(err)
	}

	defer os.Remove(name)
	f, err := atomicfile.New(name, os.FileMode(0666))
	if err != nil {
		t.Fatal(err)
	}
	f.Write([]byte("foo"))
	if _, err := os.Stat(name); !os.IsNotExist(err) {
		t.Fatal("did not expect file to exist")
	}
	if err := f.Close(); err != nil {
		t.Fatal(err)
	}
	if _, err := os.Stat(name); err != nil {
		t.Fatalf("expected file to exist: %s", err)
	}
}

func TestCurrentDir(t *testing.T) {
	cwd, _ := os.Getwd()
	test(t, cwd, "atomicfile-current-dir-")
}

func TestRootTmpDir(t *testing.T) {
	test(t, "/tmp", "atomicfile-root-tmp-dir-")
}

func TestDefaultTmpDir(t *testing.T) {
	test(t, "", "atomicfile-default-tmp-dir-")
}

func TestAbort(t *testing.T) {
	contents := []byte("the answer is 42")
	t.Parallel()
	tmpfile, err := ioutil.TempFile("", "atomicfile-abort-")
	if err != nil {
		t.Fatal(err)
	}
	name := tmpfile.Name()
	if _, err := tmpfile.Write(contents); err != nil {
		t.Fatal(err)
	}
	defer os.Remove(name)

	f, err := atomicfile.New(name, os.FileMode(0666))
	if err != nil {
		t.Fatal(err)
	}
	f.Write([]byte("foo"))
	if err := f.Abort(); err != nil {
		t.Fatal(err)
	}
	if _, err := os.Stat(name); err != nil {
		t.Fatalf("expected file to exist: %s", err)
	}
	actual, err := ioutil.ReadFile(name)
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(contents, actual) {
		t.Fatalf(`did not find expected "%s" instead found "%s"`, contents, actual)
	}
}
30 Godeps/_workspace/src/github.com/facebookgo/atomicfile/license (generated, vendored)
@@ -1,30 +0,0 @@
BSD License

For atomicfile software

Copyright (c) 2014, Facebook, Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

 * Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

 * Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

 * Neither the name Facebook nor the names of its contributors may be used to
   endorse or promote products derived from this software without specific
   prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
23 Godeps/_workspace/src/github.com/facebookgo/atomicfile/patents (generated, vendored)
@@ -1,23 +0,0 @@
Additional Grant of Patent Rights

"Software" means the atomicfile software distributed by Facebook, Inc.

Facebook hereby grants you a perpetual, worldwide, royalty-free, non-exclusive,
irrevocable (subject to the termination provision below) license under any
rights in any patent claims owned by Facebook, to make, have made, use, sell,
offer to sell, import, and otherwise transfer the Software. For avoidance of
doubt, no license is granted under Facebook’s rights in any patent claims that
are infringed by (i) modifications to the Software made by you or a third party,
or (ii) the Software in combination with any software or other technology
provided by you or a third party.

The license granted hereunder will terminate, automatically and without notice,
for anyone that makes any claim (including by filing any lawsuit, assertion or
other action) alleging (a) direct, indirect, or contributory infringement or
inducement to infringe any patent: (i) by Facebook or any of its subsidiaries or
affiliates, whether or not such claim is related to the Software, (ii) by any
party if such claim arises in whole or in part from any software, product or
service of Facebook or any of its subsidiaries or affiliates, whether or not
such claim is related to the Software, or (iii) by any party relating to the
Software; or (b) that any right in any patent claim of Facebook is invalid or
unenforceable.
4 Godeps/_workspace/src/github.com/facebookgo/atomicfile/readme.md (generated, vendored)
@@ -1,4 +0,0 @@
atomicfile [](http://travis-ci.org/facebookgo/atomicfile)
==========

Documentation: http://godoc.org/github.com/facebookgo/atomicfile
11 Godeps/_workspace/src/github.com/ipfs/go-datastore/.travis.yml (generated, vendored)
@@ -1,11 +0,0 @@
language: go

go:
  - 1.3
  - release
  - tip

script:
  - make test

env: TEST_NO_FUSE=1 TEST_VERBOSE=1
84 Godeps/_workspace/src/github.com/ipfs/go-datastore/Godeps/Godeps.json (generated, vendored)
@@ -1,84 +0,0 @@
{
	"ImportPath": "github.com/jbenet/go-datastore",
	"GoVersion": "go1.5",
	"Packages": [
		"./..."
	],
	"Deps": [
		{
			"ImportPath": "github.com/Sirupsen/logrus",
			"Comment": "v0.8.3-37-g418b41d",
			"Rev": "418b41d23a1bf978c06faea5313ba194650ac088"
		},
		{
			"ImportPath": "github.com/codahale/blake2",
			"Rev": "3fa823583afba430e8fc7cdbcc670dbf90bfacc4"
		},
		{
			"ImportPath": "github.com/codahale/hdrhistogram",
			"Rev": "5fd85ec0b4e2dd5d4158d257d943f2e586d86b62"
		},
		{
			"ImportPath": "github.com/codahale/metrics",
			"Rev": "7d3beb1b480077e77c08a6f6c65ea969f6e91420"
		},
		{
			"ImportPath": "github.com/dustin/randbo",
			"Rev": "7f1b564ca7242d22bcc6e2128beb90d9fa38b9f0"
		},
		{
			"ImportPath": "github.com/fzzy/radix/redis",
			"Comment": "v0.5.1",
			"Rev": "27a863cdffdb0998d13e1e11992b18489aeeaa25"
		},
		{
			"ImportPath": "github.com/hashicorp/golang-lru",
			"Rev": "4dfff096c4973178c8f35cf6dd1a732a0a139370"
		},
		{
			"ImportPath": "github.com/ipfs/go-log",
			"Rev": "ee5cb9834b33bcf29689183e0323e328c8b8de29"
		},
		{
			"ImportPath": "github.com/jbenet/go-os-rename",
			"Rev": "2d93ae970ba96c41f717036a5bf5494faf1f38c0"
		},
		{
			"ImportPath": "github.com/jbenet/goprocess",
			"Rev": "5b02f8d275a2dd882fb06f8bbdf74347795ff3b1"
		},
		{
			"ImportPath": "github.com/mattbaird/elastigo/api",
			"Rev": "041b88c1fcf6489a5721ede24378ce1253b9159d"
		},
		{
			"ImportPath": "github.com/mattbaird/elastigo/core",
			"Rev": "041b88c1fcf6489a5721ede24378ce1253b9159d"
		},
		{
			"ImportPath": "github.com/satori/go.uuid",
			"Rev": "7c7f2020c4c9491594b85767967f4619c2fa75f9"
		},
		{
			"ImportPath": "github.com/syndtr/goleveldb/leveldb",
			"Rev": "871eee0a7546bb7d1b2795142e29c4534abc49b3"
		},
		{
			"ImportPath": "github.com/syndtr/gosnappy/snappy",
			"Rev": "ce8acff4829e0c2458a67ead32390ac0a381c862"
		},
		{
			"ImportPath": "golang.org/x/net/context",
			"Rev": "dfcbca9c45aeabb8971affa4f76b2d40f6f72328"
		},
		{
			"ImportPath": "gopkg.in/check.v1",
			"Rev": "91ae5f88a67b14891cfd43895b01164f6c120420"
		},
		{
			"ImportPath": "launchpad.net/gocheck",
			"Comment": "87",
			"Rev": "gustavo@niemeyer.net-20140225173054-xu9zlkf9kxhvow02"
		}
	]
}
5 Godeps/_workspace/src/github.com/ipfs/go-datastore/Godeps/Readme (generated, vendored)
@@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.

Please do not edit.

See https://github.com/tools/godep for more information.
21 Godeps/_workspace/src/github.com/ipfs/go-datastore/LICENSE (generated, vendored)
@@ -1,21 +0,0 @@
The MIT License

Copyright (c) 2014 Juan Batiz-Benet

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
24 Godeps/_workspace/src/github.com/ipfs/go-datastore/Makefile (generated, vendored)
@@ -1,24 +0,0 @@
build:
	go build

test: build
	go test -race -cpu=5 -v ./...

# saves/vendors third-party dependencies to Godeps/_workspace
# -r flag rewrites import paths to use the vendored path
# ./... performs operation on all packages in tree
vendor: godep
	godep save -r ./...

deps:
	go get ./...

watch:
	-make
	@echo "[watching *.go; for recompilation]"
	# for portability, use watchmedo -- pip install watchmedo
	@watchmedo shell-command --patterns="*.go;" --recursive \
		--command='make' .

godep:
	go get github.com/tools/godep
Some files were not shown because too many files have changed in this diff.