Mirror of https://github.com/ipfs/kubo.git

Merge branch 'master' into fix/config/apigatewaylistenip6

Commit: 8b4dfc897a

.github/CODEOWNERS | 3 (vendored)
@@ -2,6 +2,9 @@
# request that modifies code that they own. Code owners are not automatically
# requested to review draft pull requests.

# Default
* @ipfs/kubo-maintainers

# HTTP Gateway
core/corehttp/ @lidel
test/sharness/*gateway*.sh @lidel
@@ -6,6 +6,13 @@

- [Overview](#overview)
- [🔦 Highlights](#-highlights)
  - [Boxo under the covers](#boxo-under-the-covers)
  - [HTTP Gateway](#http-gateway)
    - [Switch to `boxo/gateway` library](#switch-to-boxogateway-library)
    - [Improved testing](#improved-testing)
    - [Trace Context support](#trace-context-support)
    - [Removed legacy features](#removed-legacy-features)
  - [`--empty-repo` is now the default](#--empty-repo-is-now-the-default)
- [📝 Changelog](#-changelog)
- [👨👩👧👦 Contributors](#-contributors)
@@ -13,6 +20,97 @@

### 🔦 Highlights

#### Boxo under the covers

We have consolidated many IPFS repos into [Boxo](https://github.com/ipfs/boxo), and this release switches Kubo over to use Boxo instead of those repos, resulting in the removal of 27 dependencies from Kubo:

- github.com/ipfs/go-bitswap
- github.com/ipfs/go-ipfs-files
- github.com/ipfs/tar-utils
- github.com/ipfs/go-block-format
- github.com/ipfs/interface-go-ipfs-core
- github.com/ipfs/go-unixfs
- github.com/ipfs/go-pinning-service-http-client
- github.com/ipfs/go-path
- github.com/ipfs/go-namesys
- github.com/ipfs/go-mfs
- github.com/ipfs/go-ipfs-provider
- github.com/ipfs/go-ipfs-pinner
- github.com/ipfs/go-ipfs-keystore
- github.com/ipfs/go-filestore
- github.com/ipfs/go-ipns
- github.com/ipfs/go-blockservice
- github.com/ipfs/go-ipfs-chunker
- github.com/ipfs/go-fetcher
- github.com/ipfs/go-ipfs-blockstore
- github.com/ipfs/go-ipfs-posinfo
- github.com/ipfs/go-ipfs-util
- github.com/ipfs/go-ipfs-ds-help
- github.com/ipfs/go-verifcid
- github.com/ipfs/go-ipfs-exchange-offline
- github.com/ipfs/go-ipfs-routing
- github.com/ipfs/go-ipfs-exchange-interface
- github.com/ipfs/go-libipfs

Note: if you consume these in your own code, we recommend migrating to Boxo. To ease this process, there's a [tool which will help migrate your code to Boxo](https://github.com/ipfs/boxo#migrating-to-boxo).

You can learn more about the [Boxo 0.8 release](https://github.com/ipfs/boxo/releases/tag/v0.8.0) that Kubo now depends on, and about the general effort to make Boxo a stable foundation, [here](https://github.com/ipfs/boxo/issues/196).
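For downstream users the change is mostly mechanical: the old repository paths move under `github.com/ipfs/boxo`. As a hedged illustration (the two mappings shown, `go-ipfs-files` → `boxo/files` and `go-bitswap` → `boxo/bitswap`, are examples only; the migration tool linked above applies the full mapping automatically):

```go
// Before the migration, code imported the archived repos directly, e.g.:
//
//	files "github.com/ipfs/go-ipfs-files"
//	bitswap "github.com/ipfs/go-bitswap"
//
// After the migration, the same packages live under Boxo; only the import
// paths change.
package main

import (
	"fmt"

	files "github.com/ipfs/boxo/files"
)

func main() {
	// files.NewBytesFile is the same constructor that used to live in
	// github.com/ipfs/go-ipfs-files.
	f := files.NewBytesFile([]byte("hello from boxo/files"))
	size, err := f.Size()
	if err != nil {
		panic(err)
	}
	fmt.Println("in-memory file size:", size)
}
```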
#### HTTP Gateway

##### Switch to `boxo/gateway` library

Gateway code was extracted and refactored into a standalone library that now
lives in [boxo/gateway](https://github.com/ipfs/boxo/tree/main/gateway). This
enabled us to clean up some legacy code and remove the dependency on Kubo
internals.

The Go API is still being refined, but it now operates on a higher-level
abstraction defined by the `gateway.IPFSBackend` interface. It is now possible
to embed gateway functionality without the rest of Kubo.

See the [car](https://github.com/ipfs/boxo/tree/main/examples/gateway/car)
and [proxy](https://github.com/ipfs/boxo/tree/main/examples/gateway/proxy)
examples, or the more advanced
[bifrost-gateway](https://github.com/ipfs/bifrost-gateway).
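As a rough sketch of what embedding can look like (hedged: `gateway.Config` and `gateway.NewHandler` are the names used by the `boxo/gateway` package, but check the linked examples for the exact constructors in the Boxo version you use; wiring up a concrete `IPFSBackend` is left to the caller):

```go
package gatewayexample

import (
	"log"
	"net/http"

	"github.com/ipfs/boxo/gateway"
)

// serveGateway mounts an IPFS HTTP gateway on /ipfs/ and /ipns/ using any
// implementation of gateway.IPFSBackend (for example one backed by a local
// blockservice, as in the car and proxy examples linked above).
func serveGateway(backend gateway.IPFSBackend) error {
	handler := gateway.NewHandler(gateway.Config{}, backend)

	mux := http.NewServeMux()
	mux.Handle("/ipfs/", handler)
	mux.Handle("/ipns/", handler)

	log.Println("standalone gateway listening on :8040")
	return http.ListenAndServe(":8040", mux)
}
```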
##### Improved testing

We are also in the process of moving gateway testing away from Kubo's sharness
tests, and are working on the
[ipfs/gateway-conformance](https://github.com/ipfs/gateway-conformance) test
suite, which is vendor-agnostic and can be run against an arbitrary HTTP
endpoint to test a specific subset of the [HTTP Gateway specifications](https://specs.ipfs.tech/http-gateways/).

##### Trace Context support

We've introduced initial support for the `traceparent` header from the
[W3C Trace Context spec](https://w3c.github.io/trace-context/).

If a `traceparent` header is present in a gateway request, its `trace-id` part
can be used to inspect trace spans via the selected exporter, such as the Jaeger UI
([docs](https://github.com/ipfs/boxo/blob/main/docs/tracing.md#using-jaeger-ui),
[demo](https://user-images.githubusercontent.com/157609/231312374-bafc2035-1fc6-4d6b-901b-9e4af039807c.png)).

To learn more, see the [tracing docs](https://github.com/ipfs/boxo/blob/main/docs/tracing.md).
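For example, a client can attach its own W3C `traceparent` header to a gateway request and then look up the chosen `trace-id` in Jaeger. A minimal sketch (the gateway address, the CID, and the randomly generated IDs are illustrative):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"net/http"
)

func main() {
	// W3C Trace Context format: "00-<16-byte trace-id>-<8-byte parent-id>-<flags>".
	traceID := make([]byte, 16)
	parentID := make([]byte, 8)
	if _, err := rand.Read(traceID); err != nil {
		panic(err)
	}
	if _, err := rand.Read(parentID); err != nil {
		panic(err)
	}
	traceparent := fmt.Sprintf("00-%s-%s-01", hex.EncodeToString(traceID), hex.EncodeToString(parentID))

	// bafkqaaa is the empty inline block; any CID the gateway can serve works here.
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8080/ipfs/bafkqaaa", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Traceparent", traceparent)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Search the selected exporter (e.g. the Jaeger UI) for this trace-id.
	fmt.Println("status:  ", resp.Status)
	fmt.Println("trace-id:", hex.EncodeToString(traceID))
}
```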
##### Removed legacy features

- Some Kubo-specific Prometheus metrics are no longer available.
  - An up-to-date list of gateway metrics can be found in [boxo/gateway/metrics.go](https://github.com/ipfs/boxo/blob/main/gateway/metrics.go).
- The legacy opt-in `Gateway.Writable` is no longer available as of Kubo 0.20.
  - We are working on developing a modern replacement.
    To support our efforts, please leave a comment describing your use case in
    [ipfs/specs#375](https://github.com/ipfs/specs/issues/375).

#### `--empty-repo` is now the default

When creating a repository with `ipfs init`, `--empty-repo=true` is now the default. This means
that your repository will be empty by default instead of containing the introduction files.
You can read more about the rationale behind this decision on the [tracking issue](https://github.com/ipfs/kubo/issues/9757).

### 📝 Changelog

### 👨👩👧👦 Contributors
@@ -164,74 +164,5 @@ and outputs it to `rcmgr.json.gz`

Default: disabled (not set)

# Tracing

For advanced configuration (e.g. ratio-based sampling), see also: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md

## `OTEL_TRACES_EXPORTER`

Specifies the exporters to use as a comma-separated string. Each exporter has a set of additional environment variables used to configure it. The following values are supported:

- `otlp`
- `jaeger`
- `zipkin`
- `file` -- appends traces to a JSON file on the filesystem

Setting this enables OpenTelemetry tracing.

**NOTE** Tracing support is experimental: releases may contain tracing-related breaking changes.

Default: "" (no exporters)
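To make the selection concrete, here is a simplified sketch of how a Go program can honor `OTEL_TRACES_EXPORTER`; it mirrors the exporter switch that appears in the `tracing` package diff further down (the `file` exporter is omitted for brevity):

```go
package tracingexample

import (
	"context"
	"fmt"
	"os"
	"strings"

	"go.opentelemetry.io/otel/exporters/jaeger"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/exporters/zipkin"
	"go.opentelemetry.io/otel/sdk/trace"
)

// exportersFromEnv builds span exporters from OTEL_TRACES_EXPORTER
// (e.g. "otlp,jaeger"); an empty value means tracing stays disabled.
func exportersFromEnv(ctx context.Context) ([]trace.SpanExporter, error) {
	var exporters []trace.SpanExporter
	for _, name := range strings.Split(os.Getenv("OTEL_TRACES_EXPORTER"), ",") {
		switch name {
		case "", "none":
			continue
		case "otlp":
			exp, err := otlptracegrpc.New(ctx) // honors the OTEL_EXPORTER_OTLP_* variables
			if err != nil {
				return nil, err
			}
			exporters = append(exporters, exp)
		case "jaeger":
			exp, err := jaeger.New(jaeger.WithCollectorEndpoint())
			if err != nil {
				return nil, err
			}
			exporters = append(exporters, exp)
		case "zipkin":
			exp, err := zipkin.New("")
			if err != nil {
				return nil, err
			}
			exporters = append(exporters, exp)
		default:
			return nil, fmt.Errorf("unsupported exporter %q", name)
		}
	}
	return exporters, nil
}
```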
## `OTLP Exporter`

Unless specified in this section, the OTLP exporter uses the environment variables documented here: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md

### `OTEL_EXPORTER_OTLP_PROTOCOL`

Specifies the OTLP protocol to use, which is one of:

- `grpc`
- `http/protobuf`

Default: "grpc"

## `Jaeger Exporter`

See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#jaeger-exporter

## `Zipkin Exporter`

See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#zipkin-exporter

## `File Exporter`

### `OTEL_EXPORTER_FILE_PATH`

Specifies the filesystem path for the JSON file.

Default: "$PWD/traces.json"

### How to use Jaeger UI

One can use the `jaegertracing/all-in-one` Docker image to run a full Jaeger
stack and configure Kubo to publish traces to it (here, in an ephemeral
container):

```console
$ docker run --rm -it --name jaeger \
    -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
    -p 5775:5775/udp \
    -p 6831:6831/udp \
    -p 6832:6832/udp \
    -p 5778:5778 \
    -p 16686:16686 \
    -p 14268:14268 \
    -p 14269:14269 \
    -p 14250:14250 \
    -p 9411:9411 \
    jaegertracing/all-in-one
```

Then, in another terminal, start Kubo with Jaeger tracing enabled:

```console
$ OTEL_TRACES_EXPORTER=jaeger ipfs daemon
```

Finally, the [Jaeger UI](https://github.com/jaegertracing/jaeger-ui#readme) is available at http://localhost:16686.

## `OTEL_PROPAGATORS`

See https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#general-sdk-configuration

For tracing configuration, please check: https://github.com/ipfs/boxo/blob/main/docs/tracing.md
@@ -7,9 +7,9 @@ go 1.18

replace github.com/ipfs/kubo => ./../../..

require (
github.com/ipfs/boxo v0.8.0
github.com/ipfs/boxo v0.8.1-0.20230411232920-5d6c73c8e35e
github.com/ipfs/kubo v0.0.0-00010101000000-000000000000
github.com/libp2p/go-libp2p v0.27.0
github.com/libp2p/go-libp2p v0.27.1
github.com/multiformats/go-multiaddr v0.9.0
)
@@ -321,8 +321,8 @@ github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.8.0 h1:UdjAJmHzQHo/j3g3b1bAcAXCj/GM6iTwvSlBDvPBNBs=
github.com/ipfs/boxo v0.8.0/go.mod h1:RIsi4CnTyQ7AUsNn5gXljJYZlQrHBMnJp94p73liFiA=
github.com/ipfs/boxo v0.8.1-0.20230411232920-5d6c73c8e35e h1:8wmBhjwJk2drWZjNwoN7uc+IkG+N93laIhjY69rjMqw=
github.com/ipfs/boxo v0.8.1-0.20230411232920-5d6c73c8e35e/go.mod h1:xJ2hVb4La5WyD7GvKYE0lq2g1rmQZoCD2K4WNrV6aZI=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.0.2/go.mod h1:AWR46JfpcObNfg3ok2JHDUfdiHRgWhJgCQF+KIgOPJY=
@@ -489,8 +489,8 @@ github.com/libp2p/go-flow-metrics v0.0.1/go.mod h1:Iv1GH0sG8DtYN3SVJ2eG221wMiNpZ
github.com/libp2p/go-flow-metrics v0.0.3/go.mod h1:HeoSNUrOJVK1jEpDqVEiUOIXqhbnS27omG0uWU5slZs=
github.com/libp2p/go-flow-metrics v0.1.0 h1:0iPhMI8PskQwzh57jB9WxIuIOQ0r+15PChFGkx3Q3WM=
github.com/libp2p/go-flow-metrics v0.1.0/go.mod h1:4Xi8MX8wj5aWNDAZttg6UPmc0ZrnFNsMtpsYUClFtro=
github.com/libp2p/go-libp2p v0.27.0 h1:QbhrTuB0ln9j9op6yAOR0o+cx/qa9NyNZ5ov0Tql8ZU=
github.com/libp2p/go-libp2p v0.27.0/go.mod h1:FAvvfQa/YOShUYdiSS03IR9OXzkcJXwcNA2FUCh9ImE=
github.com/libp2p/go-libp2p v0.27.1 h1:k1u6RHsX3hqKnslDjsSgLNURxJ3O1atIZCY4gpMbbus=
github.com/libp2p/go-libp2p v0.27.1/go.mod h1:FAvvfQa/YOShUYdiSS03IR9OXzkcJXwcNA2FUCh9ImE=
github.com/libp2p/go-libp2p-asn-util v0.3.0 h1:gMDcMyYiZKkocGXDQ5nsUQyquC9+H+iLEQHwOCZ7s8s=
github.com/libp2p/go-libp2p-asn-util v0.3.0/go.mod h1:B1mcOrKUE35Xq/ASTmQ4tN3LNzVVaMNmq2NACuqyB9w=
github.com/libp2p/go-libp2p-core v0.2.4/go.mod h1:STh4fdfa5vDYr0/SzYYeqnt+E6KfEV5VxfIrm0bcI0g=
go.mod | 14

@@ -16,7 +16,7 @@ require (
github.com/gogo/protobuf v1.3.2
github.com/google/uuid v1.3.0
github.com/hashicorp/go-multierror v1.1.1
github.com/ipfs/boxo v0.8.0
github.com/ipfs/boxo v0.8.1-0.20230411232920-5d6c73c8e35e
github.com/ipfs/go-block-format v0.1.2
github.com/ipfs/go-cid v0.4.1
github.com/ipfs/go-cidutil v0.1.0
@@ -45,7 +45,7 @@ require (
github.com/jbenet/goprocess v0.1.4
github.com/julienschmidt/httprouter v1.3.0
github.com/libp2p/go-doh-resolver v0.4.0
github.com/libp2p/go-libp2p v0.27.0
github.com/libp2p/go-libp2p v0.27.1
github.com/libp2p/go-libp2p-http v0.5.0
github.com/libp2p/go-libp2p-kad-dht v0.23.0
github.com/libp2p/go-libp2p-kbucket v0.5.0
@@ -75,11 +75,6 @@ require (
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.40.0
go.opentelemetry.io/contrib/propagators/autoprop v0.40.0
go.opentelemetry.io/otel v1.14.0
go.opentelemetry.io/otel/exporters/jaeger v1.14.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.14.0
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.14.0
go.opentelemetry.io/otel/exporters/zipkin v1.14.0
go.opentelemetry.io/otel/sdk v1.14.0
go.opentelemetry.io/otel/trace v1.14.0
go.uber.org/dig v1.16.1
@@ -208,8 +203,13 @@
go.opentelemetry.io/contrib/propagators/b3 v1.15.0 // indirect
go.opentelemetry.io/contrib/propagators/jaeger v1.15.0 // indirect
go.opentelemetry.io/contrib/propagators/ot v1.15.0 // indirect
go.opentelemetry.io/otel/exporters/jaeger v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/zipkin v1.14.0 // indirect
go.opentelemetry.io/otel/metric v0.37.0 // indirect
go.opentelemetry.io/proto/otlp v0.19.0 // indirect
go.uber.org/atomic v1.10.0 // indirect
go.sum | 8

@@ -356,8 +356,8 @@ github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/boxo v0.8.0 h1:UdjAJmHzQHo/j3g3b1bAcAXCj/GM6iTwvSlBDvPBNBs=
github.com/ipfs/boxo v0.8.0/go.mod h1:RIsi4CnTyQ7AUsNn5gXljJYZlQrHBMnJp94p73liFiA=
github.com/ipfs/boxo v0.8.1-0.20230411232920-5d6c73c8e35e h1:8wmBhjwJk2drWZjNwoN7uc+IkG+N93laIhjY69rjMqw=
github.com/ipfs/boxo v0.8.1-0.20230411232920-5d6c73c8e35e/go.mod h1:xJ2hVb4La5WyD7GvKYE0lq2g1rmQZoCD2K4WNrV6aZI=
github.com/ipfs/go-bitfield v1.1.0 h1:fh7FIo8bSwaJEh6DdTWbCeZ1eqOaOkKFI74SCnsWbGA=
github.com/ipfs/go-bitfield v1.1.0/go.mod h1:paqf1wjq/D2BBmzfTVFlJQ9IlFOZpg422HL0HqsGWHU=
github.com/ipfs/go-block-format v0.0.2/go.mod h1:AWR46JfpcObNfg3ok2JHDUfdiHRgWhJgCQF+KIgOPJY=
@@ -540,8 +540,8 @@ github.com/libp2p/go-flow-metrics v0.0.1/go.mod h1:Iv1GH0sG8DtYN3SVJ2eG221wMiNpZ
github.com/libp2p/go-flow-metrics v0.0.3/go.mod h1:HeoSNUrOJVK1jEpDqVEiUOIXqhbnS27omG0uWU5slZs=
github.com/libp2p/go-flow-metrics v0.1.0 h1:0iPhMI8PskQwzh57jB9WxIuIOQ0r+15PChFGkx3Q3WM=
github.com/libp2p/go-flow-metrics v0.1.0/go.mod h1:4Xi8MX8wj5aWNDAZttg6UPmc0ZrnFNsMtpsYUClFtro=
github.com/libp2p/go-libp2p v0.27.0 h1:QbhrTuB0ln9j9op6yAOR0o+cx/qa9NyNZ5ov0Tql8ZU=
github.com/libp2p/go-libp2p v0.27.0/go.mod h1:FAvvfQa/YOShUYdiSS03IR9OXzkcJXwcNA2FUCh9ImE=
github.com/libp2p/go-libp2p v0.27.1 h1:k1u6RHsX3hqKnslDjsSgLNURxJ3O1atIZCY4gpMbbus=
github.com/libp2p/go-libp2p v0.27.1/go.mod h1:FAvvfQa/YOShUYdiSS03IR9OXzkcJXwcNA2FUCh9ImE=
github.com/libp2p/go-libp2p-asn-util v0.3.0 h1:gMDcMyYiZKkocGXDQ5nsUQyquC9+H+iLEQHwOCZ7s8s=
github.com/libp2p/go-libp2p-asn-util v0.3.0/go.mod h1:B1mcOrKUE35Xq/ASTmQ4tN3LNzVVaMNmq2NACuqyB9w=
github.com/libp2p/go-libp2p-core v0.2.4/go.mod h1:STh4fdfa5vDYr0/SzYYeqnt+E6KfEV5VxfIrm0bcI0g=
test/cli/testutils/random_files.go | 116 (new file)

@@ -0,0 +1,116 @@
package testutils

import (
	"fmt"
	"io"
	"math/rand"
	"os"
	"path"
	"time"
)

var AlphabetEasy = []rune("abcdefghijklmnopqrstuvwxyz01234567890-_")
var AlphabetHard = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890!@#$%^&*()-_+= ;.,<>'\"[]{}() ")

type RandFiles struct {
	Rand         *rand.Rand
	FileSize     int // the size per file
	FilenameSize int
	Alphabet     []rune // alphabet used for filenames

	FanoutDepth int // how deep the hierarchy goes
	FanoutFiles int // how many files per dir
	FanoutDirs  int // how many dirs per dir

	RandomSize   bool // randomize file sizes
	RandomFanout bool // randomize fanout numbers
}

func NewRandFiles() *RandFiles {
	return &RandFiles{
		Rand:         rand.New(rand.NewSource(time.Now().UnixNano())),
		FileSize:     4096,
		FilenameSize: 16,
		Alphabet:     AlphabetEasy,
		FanoutDepth:  2,
		FanoutDirs:   5,
		FanoutFiles:  10,
		RandomSize:   true,
	}
}

// WriteRandomFiles fills root with random files and, up to FanoutDepth,
// random subdirectories, starting from the given depth.
func (r *RandFiles) WriteRandomFiles(root string, depth int) error {
	numfiles := r.FanoutFiles
	if r.RandomFanout {
		numfiles = rand.Intn(r.FanoutFiles) + 1
	}

	for i := 0; i < numfiles; i++ {
		if err := r.WriteRandomFile(root); err != nil {
			return err
		}
	}

	if depth+1 <= r.FanoutDepth {
		numdirs := r.FanoutDirs
		if r.RandomFanout {
			numdirs = r.Rand.Intn(numdirs) + 1
		}

		for i := 0; i < numdirs; i++ {
			if err := r.WriteRandomDir(root, depth+1); err != nil {
				return err
			}
		}
	}

	return nil
}

// RandomFilename returns a random name of the given length drawn from r.Alphabet.
func (r *RandFiles) RandomFilename(length int) string {
	b := make([]rune, length)
	for i := range b {
		b[i] = r.Alphabet[r.Rand.Intn(len(r.Alphabet))]
	}
	return string(b)
}

// WriteRandomFile creates a single file with random contents in root.
func (r *RandFiles) WriteRandomFile(root string) error {
	filesize := int64(r.FileSize)
	if r.RandomSize {
		filesize = r.Rand.Int63n(filesize) + 1
	}

	n := rand.Intn(r.FilenameSize-4) + 4
	name := r.RandomFilename(n)
	filepath := path.Join(root, name)
	f, err := os.Create(filepath)
	if err != nil {
		return fmt.Errorf("creating random file: %w", err)
	}

	if _, err := io.CopyN(f, r.Rand, filesize); err != nil {
		return fmt.Errorf("copying random file: %w", err)
	}

	return f.Close()
}

// WriteRandomDir creates a random subdirectory under root and recursively fills it.
func (r *RandFiles) WriteRandomDir(root string, depth int) error {
	if depth > r.FanoutDepth {
		return nil
	}

	n := rand.Intn(r.FilenameSize-4) + 4
	name := r.RandomFilename(n)
	root = path.Join(root, name)
	if err := os.MkdirAll(root, 0755); err != nil {
		return fmt.Errorf("creating random dir: %w", err)
	}

	err := r.WriteRandomFiles(root, depth)
	if err != nil {
		return fmt.Errorf("writing random files in random dir: %w", err)
	}
	return nil
}
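For context, the new transports test below drives this helper; a minimal standalone usage looks like this (the temporary directory is only an example destination):

```go
package main

import (
	"log"
	"os"

	"github.com/ipfs/kubo/test/cli/testutils"
)

func main() {
	dir, err := os.MkdirTemp("", "randfiles-example")
	if err != nil {
		log.Fatal(err)
	}

	rf := testutils.NewRandFiles()
	rf.FanoutDirs = 3  // 3 subdirectories per directory level
	rf.FanoutFiles = 6 // 6 files per directory

	// Populate dir with a small random tree, starting at depth 1 so one
	// level of subdirectories is created (FanoutDepth defaults to 2).
	if err := rf.WriteRandomFiles(dir, 1); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote random files under", dir)
}
```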
test/cli/transports_test.go | 127 (new file)

@@ -0,0 +1,127 @@
package cli

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/ipfs/kubo/config"
	"github.com/ipfs/kubo/test/cli/harness"
	"github.com/ipfs/kubo/test/cli/testutils"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestTransports(t *testing.T) {
	disableRouting := func(nodes harness.Nodes) {
		nodes.ForEachPar(func(n *harness.Node) {
			n.UpdateConfig(func(cfg *config.Config) {
				cfg.Routing.Type = config.NewOptionalString("none")
				cfg.Bootstrap = nil
			})
		})
	}
	checkSingleFile := func(nodes harness.Nodes) {
		s := testutils.RandomStr(100)
		hash := nodes[0].IPFSAddStr(s)
		nodes.ForEachPar(func(n *harness.Node) {
			val := n.IPFS("cat", hash).Stdout.String()
			assert.Equal(t, s, val)
		})
	}
	checkRandomDir := func(nodes harness.Nodes) {
		randDir := filepath.Join(nodes[0].Dir, "foobar")
		require.NoError(t, os.Mkdir(randDir, 0777))
		rf := testutils.NewRandFiles()
		rf.FanoutDirs = 3
		rf.FanoutFiles = 6
		require.NoError(t, rf.WriteRandomFiles(randDir, 4))

		hash := nodes[1].IPFS("add", "-r", "-Q", randDir).Stdout.Trimmed()
		nodes.ForEachPar(func(n *harness.Node) {
			res := n.RunIPFS("refs", "-r", hash)
			assert.Equal(t, 0, res.ExitCode())
		})
	}

	runTests := func(nodes harness.Nodes) {
		checkSingleFile(nodes)
		checkRandomDir(nodes)
	}

	tcpNodes := func(t *testing.T) harness.Nodes {
		nodes := harness.NewT(t).NewNodes(2).Init()
		nodes.ForEachPar(func(n *harness.Node) {
			n.UpdateConfig(func(cfg *config.Config) {
				cfg.Addresses.Swarm = []string{"/ip4/127.0.0.1/tcp/0"}
				cfg.Swarm.Transports.Network.QUIC = config.False
				cfg.Swarm.Transports.Network.Relay = config.False
				cfg.Swarm.Transports.Network.WebTransport = config.False
				cfg.Swarm.Transports.Network.Websocket = config.False
			})
		})
		disableRouting(nodes)
		return nodes
	}

	t.Run("tcp", func(t *testing.T) {
		t.Parallel()
		nodes := tcpNodes(t).StartDaemons().Connect()
		runTests(nodes)
	})

	t.Run("tcp with mplex", func(t *testing.T) {
		t.Parallel()
		nodes := tcpNodes(t)
		nodes.ForEachPar(func(n *harness.Node) {
			n.UpdateConfig(func(cfg *config.Config) {
				cfg.Swarm.Transports.Multiplexers.Yamux = config.Disabled
			})
		})
		nodes.StartDaemons().Connect()
		runTests(nodes)
	})

	t.Run("tcp with NOISE", func(t *testing.T) {
		t.Parallel()
		nodes := tcpNodes(t)
		nodes.ForEachPar(func(n *harness.Node) {
			n.UpdateConfig(func(cfg *config.Config) {
				cfg.Swarm.Transports.Security.TLS = config.Disabled
			})
		})
		nodes.StartDaemons().Connect()
		runTests(nodes)
	})

	t.Run("QUIC", func(t *testing.T) {
		t.Parallel()
		nodes := harness.NewT(t).NewNodes(5).Init()
		nodes.ForEachPar(func(n *harness.Node) {
			n.UpdateConfig(func(cfg *config.Config) {
				cfg.Addresses.Swarm = []string{"/ip4/127.0.0.1/udp/0/quic-v1"}
				cfg.Swarm.Transports.Network.QUIC = config.True
				cfg.Swarm.Transports.Network.TCP = config.False
			})
		})
		disableRouting(nodes)
		nodes.StartDaemons().Connect()
		runTests(nodes)
	})

	t.Run("WebTransport", func(t *testing.T) {
		t.Parallel()
		nodes := harness.NewT(t).NewNodes(5).Init()
		nodes.ForEachPar(func(n *harness.Node) {
			n.UpdateConfig(func(cfg *config.Config) {
				cfg.Addresses.Swarm = []string{"/ip4/127.0.0.1/udp/0/quic-v1/webtransport"}
				cfg.Swarm.Transports.Network.QUIC = config.True
				cfg.Swarm.Transports.Network.WebTransport = config.True
			})
		})
		disableRouting(nodes)
		nodes.StartDaemons().Connect()
		runTests(nodes)
	})
}
@@ -50,19 +50,19 @@ test_path_cmp() {

# Docker

# This takes a Dockerfile, and a build context directory
# This takes a Dockerfile, a tag name, and a build context directory
docker_build() {
  docker build --rm -f "$1" "$2" | ansi_strip
  docker build --rm --tag "$1" --file "$2" "$3" | ansi_strip
}

# This takes an image as argument and writes a docker ID on stdout
docker_run() {
  docker run -d "$1"
  docker run --detach "$1"
}

# This takes a docker ID and a command as arguments
docker_exec() {
  docker exec -t "$1" /bin/sh -c "$2"
  docker exec --tty "$1" /bin/sh -c "$2"
}

# This takes a docker ID as argument
@@ -72,12 +72,12 @@ docker_stop() {

# This takes a docker ID as argument
docker_rm() {
  docker rm -f -v "$1" > /dev/null
  docker rm --force --volumes "$1" > /dev/null
}

# This takes a docker image name as argument
docker_rmi() {
  docker rmi -f "$1" > /dev/null
  docker rmi --force "$1" > /dev/null
}

# Test whether all the expected lines are included in a file. The file
@@ -27,18 +27,12 @@ TEST_TRASH_DIR=$(pwd)
TEST_SCRIPTS_DIR=$(dirname "$TEST_TRASH_DIR")
TEST_TESTS_DIR=$(dirname "$TEST_SCRIPTS_DIR")
APP_ROOT_DIR=$(dirname "$TEST_TESTS_DIR")
IMAGE_TAG=kubo_test

test_expect_success "docker image build succeeds" '
  docker_build "$TEST_TESTS_DIR/../Dockerfile" "$APP_ROOT_DIR" | tee build-actual ||
  docker_build "$IMAGE_TAG" "$TEST_TESTS_DIR/../Dockerfile" "$APP_ROOT_DIR" ||
  test_fsh echo "TEST_TESTS_DIR: $TEST_TESTS_DIR" ||
  test_fsh echo "APP_ROOT_DIR : $APP_ROOT_DIR" ||
  test_fsh cat build-actual
'

test_expect_success "docker image build output looks good" '
  SUCCESS_LINE=$(egrep "^Successfully built" build-actual) &&
  IMAGE_ID=$(expr "$SUCCESS_LINE" : "^Successfully built \(.*\)") ||
  test_fsh cat build-actual
  test_fsh echo "APP_ROOT_DIR : $APP_ROOT_DIR"
'

test_expect_success "write init scripts" '
@@ -52,7 +46,7 @@ test_expect_success "docker image runs" '
  -p 127.0.0.1:5001:5001 -p 127.0.0.1:8080:8080 \
  -v "$PWD/001.sh":/container-init.d/001.sh \
  -v "$PWD/002.sh":/container-init.d/002.sh \
  "$IMAGE_ID")
  "$IMAGE_TAG")
'

test_expect_success "docker container gateway is up" '
@@ -100,5 +94,5 @@ test_expect_success "stop docker container" '
'

docker_rm "$DOC_ID"
docker_rmi "$IMAGE_ID"
docker_rmi "$IMAGE_TAG"
test_done

@@ -24,10 +24,10 @@ TEST_TRASH_DIR=$(pwd)
TEST_SCRIPTS_DIR=$(dirname "$TEST_TRASH_DIR")
TEST_TESTS_DIR=$(dirname "$TEST_SCRIPTS_DIR")
APP_ROOT_DIR=$(dirname "$TEST_TESTS_DIR")
IMAGE_TAG=kubo_migrate

test_expect_success "docker image build succeeds" '
  docker_build "$TEST_TESTS_DIR/../Dockerfile" "$APP_ROOT_DIR" >actual &&
  IMAGE_ID=$(tail -n1 actual | cut -d " " -f 3)
  docker_build "$IMAGE_TAG" "$TEST_TESTS_DIR/../Dockerfile" "$APP_ROOT_DIR"
'

test_init_ipfs
@@ -53,7 +53,7 @@ test_expect_success "startup fake dists server" '
'

test_expect_success "docker image runs" '
  DOC_ID=$(docker run -d -v "$IPFS_PATH":/data/ipfs --net=host "$IMAGE_ID")
  DOC_ID=$(docker run -d -v "$IPFS_PATH":/data/ipfs --net=host "$IMAGE_TAG")
'

test_expect_success "docker container tries to pull migrations from netcat" '
@@ -78,6 +78,5 @@ test_expect_success "correct version was requested" '
'

docker_rm "$DOC_ID"
docker_rmi "$IMAGE_ID"
docker_rmi "$IMAGE_TAG"
test_done
@@ -1,178 +0,0 @@
#!/usr/bin/env bash
#
# Copyright (c) 2017 Jeromy Johnson
# MIT Licensed; see the LICENSE file in this repository.
#

test_description="Test two ipfs nodes transferring a file"

. lib/test-lib.sh

check_file_fetch() {
  node=$1
  fhash=$2
  fname=$3

  test_expect_success "can fetch file" '
    ipfsi $node cat $fhash > fetch_out
  '

  test_expect_success "file looks good" '
    test_cmp $fname fetch_out
  '
}

check_dir_fetch() {
  node=$1
  ref=$2

  test_expect_success "node can fetch all refs for dir" '
    ipfsi $node refs -r $ref > /dev/null
  '
}

run_single_file_test() {
  test_expect_success "add a file on node1" '
    random 1000000 > filea &&
    FILEA_HASH=$(ipfsi 1 add -q filea)
  '

  check_file_fetch 0 $FILEA_HASH filea
}

run_random_dir_test() {
  test_expect_success "create a bunch of random files" '
    random-files -depth=3 -dirs=4 -files=5 -seed=5 foobar > /dev/null
  '

  test_expect_success "add those on node 0" '
    DIR_HASH=$(ipfsi 0 add -r -Q foobar)
  '

  check_dir_fetch 1 $DIR_HASH
}

flaky_advanced_test() {
  startup_cluster 2 "$@"

  test_expect_success "clean repo before test" '
    ipfsi 0 repo gc > /dev/null &&
    ipfsi 1 repo gc > /dev/null
  '

  run_single_file_test

  run_random_dir_test

  test_expect_success "gather bitswap stats" '
    ipfsi 0 bitswap stat -v > stat0 &&
    ipfsi 1 bitswap stat -v > stat1
  '

  test_expect_success "shut down nodes" '
    iptb stop && iptb_wait_stop
  '

  # NOTE: data transferred stats checks are flaky
  # trying to debug them by printing out the stats hides the flakiness
  # my theory is that the extra time cat calls take to print out the stats
  # allow for proper cleanup to happen
  go-sleep 1s
}

run_advanced_test() {
  # TODO: investigate why flaky_advanced_test is flaky
  # Context: https://github.com/ipfs/kubo/pull/9486
  # sometimes, bitswap status returns unexpected block transfers
  # and everyone has been re-running circleci until it passes for at least a year.
  # this re-runs the test until it passes or a timeout hits

  BLOCKS_0=126
  BLOCKS_1=5
  DATA_0=228113
  DATA_1=1000256
  for i in $(test_seq 1 600); do
    flaky_advanced_test
    (grep -q "$DATA_0" stat0 && grep -q "$DATA_1" stat1) && break
    go-sleep 100ms
  done

  test_expect_success "node0 data transferred looks correct" '
    test_should_contain "blocks sent: $BLOCKS_0" stat0 &&
    test_should_contain "blocks received: $BLOCKS_1" stat0 &&
    test_should_contain "data sent: $DATA_0" stat0 &&
    test_should_contain "data received: $DATA_1" stat0
  '

  test_expect_success "node1 data transferred looks correct" '
    test_should_contain "blocks received: $BLOCKS_0" stat1 &&
    test_should_contain "blocks sent: $BLOCKS_1" stat1 &&
    test_should_contain "data received: $DATA_0" stat1 &&
    test_should_contain "data sent: $DATA_1" stat1
  '
}

test_expect_success "set up tcp testbed" '
  iptb testbed create -type localipfs -count 2 -force -init
'

test_expect_success "disable routing, use direct peering" '
  iptb run -- ipfs config Routing.Type none &&
  iptb run -- ipfs config --json Bootstrap "[]"
'

# Test TCP transport
echo "Testing TCP"
addrs='"[\"/ip4/127.0.0.1/tcp/0\"]"'
test_expect_success "use TCP only" '
  iptb run -- ipfs config --json Addresses.Swarm '"${addrs}"' &&
  iptb run -- ipfs config --json Swarm.Transports.Network.QUIC false &&
  iptb run -- ipfs config --json Swarm.Transports.Network.Relay false &&
  iptb run -- ipfs config --json Swarm.Transports.Network.WebTransport false &&
  iptb run -- ipfs config --json Swarm.Transports.Network.Websocket false
'
run_advanced_test

# test multiplex muxer
echo "Running TCP tests with mplex"
test_expect_success "disable yamux" '
  iptb run -- ipfs config --json Swarm.Transports.Multiplexers.Yamux false
'
run_advanced_test

test_expect_success "re-enable yamux" '
  iptb run -- ipfs config --json Swarm.Transports.Multiplexers.Yamux null
'
# test Noise
echo "Running TCP tests with NOISE"
test_expect_success "use noise only" '
  iptb run -- ipfs config --json Swarm.Transports.Security.TLS false
'
run_advanced_test

test_expect_success "re-enable TLS" '
  iptb run -- ipfs config --json Swarm.Transports.Security.TLS null
'

# test QUIC
echo "Running advanced tests over QUIC"
addrs='"[\"/ip4/127.0.0.1/udp/0/quic-v1\"]"'
test_expect_success "use QUIC only" '
  iptb run -- ipfs config --json Addresses.Swarm '"${addrs}"' &&
  iptb run -- ipfs config --json Swarm.Transports.Network.QUIC true &&
  iptb run -- ipfs config --json Swarm.Transports.Network.TCP false
'
run_advanced_test

# test WebTransport
echo "Running advanced tests over WebTransport"
addrs='"[\"/ip4/127.0.0.1/udp/0/quic-v1/webtransport\"]"'
test_expect_success "use WebTransport only" '
  iptb run -- ipfs config --json Addresses.Swarm '"${addrs}"' &&
  iptb run -- ipfs config --json Swarm.Transports.Network.QUIC true &&
  iptb run -- ipfs config --json Swarm.Transports.Network.WebTransport true
'
run_advanced_test

test_done
@@ -1,115 +0,0 @@
#!/usr/bin/env bash
#
# Copyright (c) 2015 Jeromy Johnson
# MIT Licensed; see the LICENSE file in this repository.
#

test_description="Test multiple ipfs nodes"

. lib/test-lib.sh

check_file_fetch() {
  node=$1
  fhash=$2
  fname=$3

  test_expect_success "can fetch file" '
    ipfsi $node cat $fhash > fetch_out
  '

  test_expect_success "file looks good" '
    test_cmp $fname fetch_out
  '
}

check_dir_fetch() {
  node=$1
  ref=$2

  test_expect_success "node can fetch all refs for dir" '
    ipfsi $node refs -r $ref > /dev/null
  '
}

run_single_file_test() {
  test_expect_success "add a file on node1" '
    random 1000000 > filea &&
    FILEA_HASH=$(ipfsi 1 add -q filea)
  '

  check_file_fetch 4 $FILEA_HASH filea
  check_file_fetch 3 $FILEA_HASH filea
  check_file_fetch 2 $FILEA_HASH filea
  check_file_fetch 1 $FILEA_HASH filea
  check_file_fetch 0 $FILEA_HASH filea
}

run_random_dir_test() {
  test_expect_success "create a bunch of random files" '
    random-files -depth=4 -dirs=3 -files=6 foobar > /dev/null
  '

  test_expect_success "add those on node 2" '
    DIR_HASH=$(ipfsi 2 add -r -Q foobar)
  '

  check_dir_fetch 0 $DIR_HASH
  check_dir_fetch 1 $DIR_HASH
  check_dir_fetch 2 $DIR_HASH
  check_dir_fetch 3 $DIR_HASH
  check_dir_fetch 4 $DIR_HASH
}

run_basic_test() {
  startup_cluster 5

  run_single_file_test

  test_expect_success "shut down nodes" '
    iptb stop && iptb_wait_stop
  '
}

run_advanced_test() {
  startup_cluster 5 "$@"

  run_single_file_test

  run_random_dir_test

  test_expect_success "shut down nodes" '
    iptb stop && iptb_wait_stop ||
    test_fsh tail -n +1 .iptb/testbeds/default/*/daemon.std*
  '
}

test_expect_success "set up /tcp testbed" '
  iptb testbed create -type localipfs -count 5 -force -init
'

# test default configuration
run_advanced_test

# test multiplex muxer
test_expect_success "disable yamux" '
  iptb run -- ipfs config --json Swarm.Transports.Multiplexers.Yamux false
'
run_advanced_test

test_expect_success "set up /ws testbed" '
  iptb testbed create -type localipfs -count 5 -attr listentype,ws -force -init
'

# test default configuration
run_advanced_test

# test multiplex muxer
test_expect_success "disable yamux" '
  iptb run -- ipfs config --json Swarm.Transports.Multiplexers.Yamux false
'

run_advanced_test

test_done
@@ -1,45 +0,0 @@
package tracing

import (
	"context"
	"fmt"
	"os"

	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/trace"
)

// fileExporter wraps a file-writing exporter and closes the file when the exporter is shutdown.
type fileExporter struct {
	file           *os.File
	writerExporter *stdouttrace.Exporter
}

func newFileExporter(file string) (*fileExporter, error) {
	f, err := os.OpenFile(file, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		return nil, fmt.Errorf("opening '%s' for OpenTelemetry file exporter: %w", file, err)
	}
	stdoutExporter, err := stdouttrace.New(stdouttrace.WithWriter(f))
	if err != nil {
		return nil, err
	}
	return &fileExporter{
		writerExporter: stdoutExporter,
		file:           f,
	}, nil
}

func (e *fileExporter) ExportSpans(ctx context.Context, spans []trace.ReadOnlySpan) error {
	return e.writerExporter.ExportSpans(ctx, spans)
}

func (e *fileExporter) Shutdown(ctx context.Context) error {
	if err := e.writerExporter.Shutdown(ctx); err != nil {
		return err
	}
	if err := e.file.Close(); err != nil {
		return fmt.Errorf("closing trace file: %w", err)
	}
	return nil
}
@@ -3,16 +3,10 @@ package tracing

import (
	"context"
	"fmt"
	"os"
	"path"
	"strings"

	"github.com/ipfs/boxo/tracing"
	version "github.com/ipfs/kubo"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/jaeger"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/exporters/zipkin"
	"go.opentelemetry.io/otel/sdk/resource"
	"go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
@@ -33,87 +27,9 @@ type noopShutdownTracerProvider struct{ traceapi.TracerProvider }

func (n *noopShutdownTracerProvider) Shutdown(ctx context.Context) error { return nil }

func buildExporters(ctx context.Context) ([]trace.SpanExporter, error) {
	// These env vars are standardized but not yet supported by opentelemetry-go.
	// Once supported, we can remove most of this code.
	//
	// Specs:
	// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#exporter-selection
	// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md
	var exporters []trace.SpanExporter
	for _, exporterStr := range strings.Split(os.Getenv("OTEL_TRACES_EXPORTER"), ",") {
		switch exporterStr {
		case "otlp":
			protocol := "http/protobuf"
			if v := os.Getenv("OTEL_EXPORTER_OTLP_PROTOCOL"); v != "" {
				protocol = v
			}
			if v := os.Getenv("OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"); v != "" {
				protocol = v
			}

			switch protocol {
			case "http/protobuf":
				exporter, err := otlptracehttp.New(ctx)
				if err != nil {
					return nil, fmt.Errorf("building OTLP HTTP exporter: %w", err)
				}
				exporters = append(exporters, exporter)
			case "grpc":
				exporter, err := otlptracegrpc.New(ctx)
				if err != nil {
					return nil, fmt.Errorf("building OTLP gRPC exporter: %w", err)
				}
				exporters = append(exporters, exporter)
			default:
				return nil, fmt.Errorf("unknown or unsupported OTLP exporter '%s'", exporterStr)
			}
		case "jaeger":
			exporter, err := jaeger.New(jaeger.WithCollectorEndpoint())
			if err != nil {
				return nil, fmt.Errorf("building Jaeger exporter: %w", err)
			}
			exporters = append(exporters, exporter)
		case "zipkin":
			exporter, err := zipkin.New("")
			if err != nil {
				return nil, fmt.Errorf("building Zipkin exporter: %w", err)
			}
			exporters = append(exporters, exporter)
		case "file":
			// This is not part of the spec, but provided for convenience
			// so that you don't have to setup a collector,
			// and because we don't support the stdout exporter.
			filePath := os.Getenv("OTEL_EXPORTER_FILE_PATH")
			if filePath == "" {
				cwd, err := os.Getwd()
				if err != nil {
					return nil, fmt.Errorf("finding working directory for the OpenTelemetry file exporter: %w", err)
				}
				filePath = path.Join(cwd, "traces.json")
			}
			exporter, err := newFileExporter(filePath)
			if err != nil {
				return nil, err
			}
			exporters = append(exporters, exporter)
		case "none":
			continue
		case "":
			continue
		case "stdout":
			// stdout is already used for certain kinds of logging, so we don't support this
			fallthrough
		default:
			return nil, fmt.Errorf("unknown or unsupported exporter '%s'", exporterStr)
		}
	}
	return exporters, nil
}

// NewTracerProvider creates and configures a TracerProvider.
func NewTracerProvider(ctx context.Context) (shutdownTracerProvider, error) {
	exporters, err := buildExporters(ctx)
	exporters, err := tracing.NewSpanExporters(ctx)
	if err != nil {
		return nil, err
	}